Sunday 22 June

39 events

Sunday 22 June 14:00 - 15:20 (Room 1.14)

Hands-On: B.04.07 HANDS-ON TRAINING - A Disaster Response Toolbox for Efficient Damage Proxy Map Creation and Analysis

This interactive training session will introduce participants to damage proxy maps (DPMs) used for post-disaster damage assessment. Participants will get hands-on experience with a new web interface tool designed to simplify the process of requesting DPMs and thresholding them for disaster response support.

Attendees will learn how to use the platform to find ready-made DPMs for past events and to submit requests for on-demand DPM generation using synthetic aperture radar (SAR) satellite data. They will then learn how to threshold DPMs, taking into account external observations, contextual information, or their own datasets, to create maps that identify areas which are likely damaged after a disaster event.

By the end of this training session, participants will:

1. Understand the basics of InSAR coherence and its application in disaster response.
2. Be able to use the web interface to find and request DPMs.
3. Learn how to threshold DPMs to identify areas of varying damage likelihood.
4. Gain practical skills in analyzing satellite data for disaster response purposes.
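As a minimal illustration of the thresholding step (objective 3), a DPM can be classified into damage-likelihood classes with a few lines of NumPy. The array values and thresholds below are made up; real thresholds are tuned per event using external observations and contextual information:

```python
import numpy as np

# Hypothetical damage proxy map: per-pixel change values in [0, 1],
# where higher values indicate stronger surface change between the
# pre-event and post-event SAR coherence pairs.
dpm = np.array([
    [0.05, 0.10, 0.62],
    [0.20, 0.75, 0.90],
    [0.03, 0.40, 0.55],
])

# Illustrative thresholds; in practice these are tuned per event using
# external observations and contextual information.
thresholds = [0.35, 0.60, 0.80]   # low / moderate / high damage likelihood

# Class 0 = no significant change; classes 1-3 = increasing damage likelihood.
damage_class = np.digitize(dpm, thresholds)
print(damage_class)
```

The session's web interface automates the request and thresholding workflow; the sketch above only shows the underlying classification idea.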

Speakers:


  • Sang-Ho Yun
  • Khai Zher Wee
  • Ricky Winarko
  • Eleanor Ainscoe
  • Emma Hill

Sunday 22 June 14:00 - 15:20 (Foyer L3)

Hands-On: A.10.05 HANDS-ON TRAINING - Open-source hyperspectral analysis using hylite and python

Access to huge databases of (hyper)spectral Earth observation data is changing how we analyse the Earth's surface, providing information-rich datasets over wide areas (from satellite and airborne platforms) and at high spatial resolution (using laboratory, tripod or UAV-mounted sensors). However, gaining value from these data requires accessible correction and analysis techniques, many of which are currently only available via expensive, opaque and/or difficult-to-use software.

In this hands-on training, we aim to make (hyper)spectral analysis more accessible by introducing the open-source Python toolbox hylite and its associated GUI interface napari-hippo. We begin with a short overview of widely used Earth observation datasets, including multi- and hyperspectral data. While our training focuses on hyperspectral data – the most data-intensive – we will highlight techniques that can be applied to other optical datasets as well (such as multispectral).

By the end of the training, we hope that participants will have gained practical experience using hylite for hyperspectral data analysis, from visualisation to analysis and machine learning applications. In doing so, we hope to boost the impact of Earth observation data across remote sensing sciences – “from Observation to Climate action and sustainability for Earth”.

Detailed Hands-on training Agenda (80 mins)
1. Setup & Introduction to Python + Colab (10 mins)
i. Install required packages on Colab server
ii. Introduction to different hylite objects - quick overview
iii. Navigating and using Google Colab
iv. Comments on installing packages locally (e.g., Anaconda, Jupyter Notebooks)
2. Hyperspectral Data in hylite (15 mins)
i. Loading EO data in formats: .tif, .bsq, .dat using hylite
ii. Visualisation and plotting:
a. RGB composites, band selection (user-defined)
b. Plotting individual spectra from pixels or regions
iii. Basic data cleaning:
a. Bad band removal, handling NaNs, scaling and normalisation
b. Spectral smoothing: Using Savitzky-Golay or similar filters
c. Plotting raw vs cleaned spectra for comparison
3. Working with spectral libraries (15 mins)
i. Loading existing libraries (e.g., USGS)
ii. Creating spectral libraries from hyperspectral data.
4. Spectral Analysis Techniques (20 mins)
(In groups)
i. Band indices
ii. Hull correction – removing spectral continuum
iii. PCA and MNF – noise reduction & feature extraction
iv. Minimum wavelength mapping – for targeted absorption features
v. Spectral abundance maps from the spectral library prepared earlier in the tutorial.
5. Machine learning with hyperspectral data and scikit-learn (15 mins)
Quick demonstration involving supervised mineralogy prediction from tripod-based hyperspectral data.
6. Introduction to Napari (15 mins)
7. Questions and Discussion (5 mins)
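As a flavour of agenda items 2.iii.b (spectral smoothing) and 4.ii (hull correction), the sketch below applies a Savitzky-Golay filter and a simple upper-convex-hull continuum removal to a synthetic spectrum using only NumPy and SciPy. It is purely illustrative and does not use hylite's actual API:

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic reflectance spectrum for one pixel: a gently sloping continuum
# with an absorption feature at 2200 nm, plus sensor noise.
rng = np.random.default_rng(0)
wav = np.linspace(2000.0, 2500.0, 101)                       # wavelength (nm)
clean = 0.5 + 1e-4 * (wav - 2000.0) - 0.2 * np.exp(-((wav - 2200.0) / 30.0) ** 2)
noisy = clean + rng.normal(0.0, 0.01, wav.size)

# Savitzky-Golay smoothing, as in agenda item 2.iii.b.
smooth = savgol_filter(noisy, window_length=11, polyorder=3)

def upper_hull(x, y):
    # Upper part of Andrew's monotone-chain convex hull (x must be sorted).
    hull = []
    for p in zip(x, y):
        while len(hull) >= 2:
            (ox, oy), (ax, ay) = hull[-2], hull[-1]
            # pop while the previous point lies on or below the new chord
            if (ax - ox) * (p[1] - oy) - (ay - oy) * (p[0] - ox) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return np.array(hull)

# Hull correction, as in agenda item 4.ii: divide by the upper convex hull
# (the spectral continuum) so absorption depths become comparable.
hull = upper_hull(wav, smooth)
continuum = np.interp(wav, hull[:, 0], hull[:, 1])
continuum_removed = smooth / continuum    # dips below 1 at absorption features
```

The continuum-removed spectrum reaches its minimum at the absorption feature, which is the quantity minimum wavelength mapping (agenda item 4.iv) exploits.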

Speakers:


  • Rupsa Chakraborty - Helmholtz Institute Freiberg for Resource Technology (HIF)
  • Sam Thiele - Helmholtz Institute Freiberg for Resource Technology (HIF)

Sunday 22 June 14:00 - 15:20 (Room 1.85/1.86)

Tutorial: D.02.22 TUTORIAL - Geospatial Machine Learning Libraries and the Road to TorchGeo 1.0


The growth of machine learning frameworks like PyTorch, TensorFlow, and scikit-learn has also sparked the development of a number of geospatial domain libraries. In this talk, we break down popular geospatial machine learning libraries, including:



TorchGeo (PyTorch)
eo-learn (scikit-learn)
Raster Vision (PyTorch, TensorFlow*)
DeepForest (PyTorch, TensorFlow*)
samgeo (PyTorch)
TerraTorch (PyTorch)
sits (R Torch)
srai (PyTorch)
scikit-eo (scikit-learn, TensorFlow)
GEO-Bench (PyTorch)
GeoAI (PyTorch)
OTBTF (TensorFlow)
GeoDeep (ONNX)


For each library, we compare their features as well as various GitHub and download metrics that emphasize the relative popularity and growth of each library. In particular, we promote metrics including the number of contributors, forks, and test coverage as useful for gauging the long-term health of each software community. Among these libraries, TorchGeo stands out with more built-in data loaders and pre-trained model weights than all other libraries combined. TorchGeo also boasts the highest number of contributors, forks, stars, and test coverage. We highlight particularly desirable features of these libraries, including a command-line or graphical user interface, the ability to automatically reproject and resample geospatial data, support for the SpatioTemporal Asset Catalog (STAC), and time series support. The results of this literature review are regularly updated with input from the developers of each software library and can be found here: https://torchgeo.readthedocs.io/en/stable/user/alternatives.html



Among the above highly desirable features, the one TorchGeo would most benefit from adding is better time series support. Geotemporal data (time series data that is coupled with geospatial information) is a growing trend in Earth Observation, and is crucial for a number of important applications, including weather and climate forecasting, air quality monitoring, crop yield prediction, and natural disaster response. However, TorchGeo has only partial support for geotemporal data, and lacks the data loaders or models to make effective use of geotemporal metadata. In this talk, we highlight steps TorchGeo is taking to revolutionize how geospatial machine learning libraries handle spatiotemporal information. In addition to the preprocessing transforms, time series models, and change detection trainers required for this effort, there is also the need to replace TorchGeo's R-tree spatiotemporal backend. We present a literature review of several promising geospatial metadata indexing solutions and data cubes, including:



R-tree
Shapely
GeoPandas
STAC
NumPy
PyTorch
Pandas
Xarray
Datacube


For each spatiotemporal backend, we compare the array, list, set, and database features available. We also compare performance benchmarks on scaling experiments for common operations. TorchGeo requires support for geospatial and geotemporal indexing, slicing, and iteration. The library with the best spatiotemporal support will be chosen to replace R-tree in the coming TorchGeo 1.0 release, marking a large change in the TorchGeo API as well as a promise of future stability and backwards compatibility for one of the most popular geospatial machine learning libraries. TorchGeo development is led by the Technical University of Munich, with incubation by the AI for Good Research Lab at Microsoft, and contributions from 100 contributors from around the world. TorchGeo is also a member of the OSGeo foundation, and is widely used throughout academia, industry, and government laboratories. Check out TorchGeo here: https://www.osgeo.org/projects/torchgeo/
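The core operation such a spatiotemporal backend must support, namely finding all indexed files whose bounding box and time interval intersect a query, can be sketched in a few lines of plain Python. This is an illustration of the concept only, not TorchGeo's API; all names below are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Entry:
    # One indexed file: spatial bounding box plus acquisition time interval.
    name: str
    minx: float
    maxx: float
    miny: float
    maxy: float
    mint: float
    maxt: float

def intersects(e, minx, maxx, miny, maxy, mint, maxt):
    # A spatiotemporal query reduces to interval overlap on x, y and t.
    return (e.minx <= maxx and e.maxx >= minx
            and e.miny <= maxy and e.maxy >= miny
            and e.mint <= maxt and e.maxt >= mint)

index = [
    Entry("scene_a", 0, 10, 0, 10, 100, 200),
    Entry("scene_b", 5, 15, 5, 15, 300, 400),
]

# Which scenes cover this box during t = [150, 250]?
hits = [e.name for e in index if intersects(e, 8, 12, 8, 12, 150, 250)]
print(hits)   # scene_b is excluded: its time interval starts at 300
```

A real backend such as an R-tree answers the same query without scanning every entry, which is what the benchmarks described above measure at scale.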

Speakers:


  • Adam J. Stewart - TUM
  • Nils Lehmann - TUM
  • Burak Ekim - UniBw

Sunday 22 June 14:00 - 15:20 (Hall L3)

Tutorial: F.02.20 TUTORIAL - CEOS COAST Demo: Novel coastal satellite data products

We offer a tutorial session on the new and novel coastal satellite data products publicly available through international interagency collaboration within the CEOS COAST Virtual Constellation. As a new virtual constellation within the Committee on Earth Observation Satellites, the Coastal Observations, Applications, Services, and Tools (COAST) team works together across agencies to co-design in-demand coastal products, test them in six pilot regions around the world, and later release quality global coastal products. In addition to new product co-design with stakeholders, COAST also works to improve the quality of existing satellite data in coastal areas. All products are freely and publicly available via the COAST Application Knowledge Hub, which also provides access to a wealth of other publicly accessible data and information on coastal regions that may be of interest to coastal users. Navigating the Application Knowledge Hub will be a component of the tutorial. Examples of available products we plan to include in the tutorial are shoreline mapping and shallow-water satellite-derived bathymetry, coastal water quality (chlorophyll-a and sediments), and mangrove and marsh mapping.

Moderators:


  • Aurelien Carbonniere - CNES - CEOS COAST-VC
  • SeungHyun Son - University of Maryland

Speakers:


River Discharge and Sea Level Satellite Products for Coastal Hazards


  • Jérôme Benveniste - COSPAR

Satellite-derived products for monitoring coastlines and dynamic intertidal regions


  • Stephen Sagar - Geoscience Australia

Ocean Color Remote Sensing using EOS06 OCM3 sensor: Science products along the coastal waters


  • Moosa Ali - ISRO

CEOS COAST’s Application Knowledge Hub & satellite-derived Water Quality


  • SeungHyun Son - University of Maryland

Mapping the impact of catchment rainfall-runoff extremes along coastal waters


  • Kesav Unnithan

Sunday 22 June 14:00 - 15:20 (Room 1.15/1.16)

Hands-On: C.03.20 HANDS-ON TRAINING - ACOLITE processing of Sentinel-2/MSI and Sentinel-3/OLCI data

This training session will cover the processing of Sentinel-2/MSI and Sentinel-3/OLCI data with the open source ACOLITE processor, with a focus on coastal and inland water applications. The DSF and RAdCor algorithms and their assumptions and typical applications will be briefly discussed. The main goal of this session is hands-on processing, including image subsetting to a region of interest, and the use of the different atmospheric correction algorithms. ACOLITE output products will be explored using the SNAP toolbox, and common processing settings and issues will be demonstrated. For advanced users, ACOLITE configuration, processing, and output analysis from within Python will be covered.

Participants will be expected to bring a laptop; the processing software and example data will be provided. Participants are encouraged to acquire cloud-free TOA data for their areas of interest (L1C for MSI, and OL_1_EFR for OLCI). In the interest of time, participants are encouraged to download the latest ACOLITE release from https://github.com/acolite/acolite/releases/latest, and the SNAP toolbox for output visualisation from https://step.esa.int/main/download/snap-download/ before the start of the session. Advanced Python users are expected to set up an appropriate conda environment and a git clone of the ACOLITE code base.
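For the advanced-user setup mentioned above, the steps might look like the following shell commands. The environment name, Python version and dependency-installation step are assumptions; the ACOLITE repository documentation is authoritative:

```shell
# Clone the ACOLITE code base (URL from the session description).
git clone https://github.com/acolite/acolite
cd acolite

# Create and activate a dedicated conda environment (name and Python
# version are illustrative; check the ACOLITE README for requirements).
conda create -n acolite python=3.11
conda activate acolite

# Install ACOLITE's Python dependencies; the file name below is an
# assumption - consult the repository for the authoritative package list.
pip install -r requirements.txt
```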

Speakers:


  • Quinten Vanhellemont
  • Dimitry Van der Zande
  • Arthur Coqué


Sunday 22 June 14:00 - 15:20 (Room 0.94/0.95)

Tutorial: D.03.18 TUTORIAL - From Earth Science to Storytelling with EO Dashboard

The EO Dashboard project has been developed by NASA, ESA (the European Space Agency), and JAXA (the Japan Aerospace Exploration Agency). The three agencies have combined their resources, technical knowledge, and expertise to produce the Earth Observing (EO) Dashboard (https://eodashboard.org), which aims to strengthen our global understanding of how the environment is changing with human activity. The EO Dashboard presents data from a variety of EO satellites, served by various ESA, NASA, and JAXA services and APIs, in a simple and interactive manner. Additionally, by means of scientific storytelling, it brings the underlying science closer to the public, providing further insights into the data and their scientific applications. In this tutorial, participants will use the EO Dashboard and its resources to do Open Earth Observation Science. Through practical exercises hosted on the EOxHub JupyterLab Workspace in the Euro Data Cube environment, participants will access open data sources of the EO Dashboard, use its open APIs and services to work with this data, and create workflows to extract insights from the data. Finally, they will learn how to create and publish stories, data and code on the EO Dashboard following FAIR and Open Science principles. Free access to the Euro Data Cube resources will be ensured via the ESA Network of Resources sponsoring mechanism.

The tutorial is self-contained and participants will be provided with the necessary information and support to run all exercises. A basic level of knowledge is expected in the following fields:
- Earth Observation
- EO image analysis
- Statistics
- Python

To ensure suitable support on site and interactions between participants, this tutorial is best suited for an audience of 15-30 participants.
Exercises will be run individually, however participants can collaborate on the storytelling part of the tutorial.
The team is formed of ESA, NASA and JAXA experts. The presenters are highly experienced in the delivery of hands-on tutorials and have already delivered previous editions of hands-on sessions with EO Dashboard at IGARSS 2021-2024 and FOSS4G 2022, 2023 and 2024.

Speakers:


  • Lubomír Doležal - EOX
  • Diego Moglioni - Stario c/o ESA
  • Sara Aparício - Solenix c/o ESA
  • Anca Anghelea - ESA
  • Daniel Santillan - ESA


Sunday 22 June 14:00 - 15:20 (Room 1.31/1.32)

Tutorial: D.02.21 TUTORIAL - Easy and Efficient Fine-tuning of Foundation Models for Earth Observation

Foundation models for Earth Observation (EO)—large-scale AI systems pretrained on vast, diverse datasets—are revolutionizing geospatial analysis by enabling universal applications. These models learn rich, scalable representations of satellite imagery (multispectral, radar, SAR), offering unprecedented potential to process petabytes of EO data. Yet, widespread adoption faces two critical barriers:
- Prohibitive Costs: Training foundation models from scratch demands immense computational resources, specialized infrastructure, and technical expertise, excluding many academic and humanitarian organizations.
- Domain Adaptation Gap: Pretrained models often fail to generalize to downstream EO tasks—such as land cover mapping, urban and forest monitoring, and extreme event monitoring—without domain-specific recalibration.
This tutorial begins with an introduction to foundation models, detailing their architectures, pretraining strategies, and relevance to EO workflows. Following this introduction, the session bridges the adoption gap by presenting an end-to-end pipeline to benchmark, adapt, and deploy foundation models for EO, with a focus on:
- Benchmarking Foundation Models: A toolbox for efficient data engineering, automating rapid streaming of large EO datasets (e.g., Sentinel-1, Landsat) into GPU-ready batches while minimizing preprocessing efforts.
- Plug-and-Play Adaptation: Practical implementation of PEFT (Parameter-Efficient Fine-Tuning) of foundation models for diverse EO tasks, including LoRA (Low-Rank Adaptation) and ViT Adapters, enabling easy and efficient adaptation of foundation models with minimal computational overhead.
We demonstrate the usability of the pipeline on ExEBench, a benchmark dataset spanning seven categories of extreme events on Earth. By analyzing model performance on ExEBench, we gain insights into how foundation models can generalize across diverse data types (e.g., remote sensing, weather, and climate data) and the impact of different fine-tuning strategies on model performance.
By simplifying adaptation and providing tools for benchmarking foundation models, this pipeline empowers researchers to prioritize domain-specific innovation—not engineering hurdles—accelerating solutions for sustainability and climate resilience.
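As a minimal illustration of the LoRA idea mentioned above, the sketch below freezes a pretrained weight matrix and adds a trainable low-rank update in plain NumPy. Dimensions and the scaling factor are illustrative; PEFT libraries wrap this pattern around real transformer layers:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 32, 4          # rank r is much smaller than d_in, d_out

W = rng.normal(size=(d_out, d_in))  # pretrained weight: frozen during fine-tuning
A = rng.normal(size=(r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))            # trainable; zero init, so training starts
                                    # from the unmodified pretrained model

def lora_forward(x, alpha=8.0):
    # base path plus scaled low-rank update; only A and B receive gradients
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
assert np.allclose(lora_forward(x), W @ x)   # identical to base model at init

# Parameter savings: full fine-tuning vs LoRA for this single layer.
full = W.size                # trainable parameters if W itself were tuned
lora = A.size + B.size       # trainable parameters under LoRA
print(f"full: {full}, LoRA: {lora}")
```

Here only 384 of 2048 parameters are trained, which is why LoRA-style adaptation has minimal computational overhead compared with full fine-tuning.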

Speakers:


  • Sining Chen - Technical University of Munich
  • Shan Zhao - Technical University of Munich

Sunday 22 June 14:00 - 15:20 (Room 0.96/0.97)

Hands-On: A.02.14 HANDS-ON TRAINING - NaturaSat software in identification, classification and monitoring of habitats by using Sentinel and UAV data

NaturaSat is an innovative software solution designed to seamlessly identify and monitor protected habitats such as the Natura 2000 sites. With its user-friendly interface, NaturaSat makes it possible to use satellite and UAV data to enhance biodiversity protection, ensure sustainable land use, and optimise environmental management.
The software has been successfully used to precisely segment diverse structures from satellite and UAV data. Time-monitoring of segmented habitats is possible thanks to NaturaSat's ability to visualise and analyse various bands and indices of satellite and UAV data and extract their statistical characteristics. The NaturaSat supervised deep learning model is used for spatial monitoring and allows accurate classification up to the habitat level. The NaturaSat Historical Map Transformation tool enables users to transform desired areas from historical maps to contemporary ones, with the aim of revitalising historical natural sites.
The main goal of this NaturaSat hands-on training is to guide participants through the solution of one complex use case. Specifically, we will be identifying, classifying, and monitoring wetlands with the help of satellite and UAV data and the freely available NaturaSat software tools. The NaturaSat software can be downloaded from http://www.algoritmysk.eu/en/naturasat_en/.

Session Instructors:


  • Aneta A. Ožvat
  • Maria Sibikova

Sunday 22 June 14:00 - 15:20 (Room 0.49/0.50)

Tutorial: D.03.12 TUTORIAL - Deep dive into vector data cubes for Python and R

Data cubes, as structures for representing multi-dimensional data, are typically used for raster data. When we deal with vector data, thinking of points, polygons and the like, we tend to represent it as tabular data. However, when the nature of the vector dataset is multi-dimensional, this is not sufficient. This tutorial will provide a deep dive into the concept of vector data cubes, an extension of the generic data cube model to support vector geometries either as coordinates along one or more dimensions or as variables within the cube itself. You will learn how to create such objects in Python using `Xarray` and `Xvec` and in R using `stars` and `post`. The tutorial is organised along a series of applied use cases that show different applications of vector data cubes. Starting with the one resulting from zonal statistics applied to a multidimensional raster cube, we will cover spatial subsetting, plotting, CRS transformations, constructive operations and I/O. Using the cube that captures origin-destination data, we will explore the application of multiple dimensions supported by vector geometry. Finally, the use case of time-evolving geometry will demonstrate the vector data cube composed of geometries both as coordinates along the dimensions and as variables in the cube. This includes an introduction to the concept of summary geometry supported by spatial masking and the more complex case of zonal statistics based on time-evolving geometry.
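Conceptually, such a cube holds geometries along one dimension and other dimensions, such as time, alongside it. Below is a plain-NumPy sketch of the zonal-statistics use case, with illustrative WKT geometries and made-up values; Xvec and stars store real geometry objects as coordinates rather than strings:

```python
import numpy as np

# Dimensions: geometry x time, e.g. the result of zonal statistics
# applied to a multidimensional raster cube.
geoms = np.array([
    "POLYGON ((0 0, 1 0, 1 1, 0 1, 0 0))",
    "POLYGON ((1 0, 2 0, 2 1, 1 1, 1 0))",
], dtype=object)
times = np.array(["2024-01", "2024-02", "2024-03"])

# Variable: mean NDVI per zone and month (values are made up).
mean_ndvi = np.array([
    [0.31, 0.35, 0.40],
    [0.22, 0.25, 0.27],
])

# Spatial subsetting: the full time series for the first geometry.
zone0 = mean_ndvi[0]
# Temporal slicing: every zone for one month.
feb = mean_ndvi[:, times == "2024-02"].ravel()
print(zone0, feb)
```

Vector data cube libraries add exactly what this sketch lacks: labelled dimensions, CRS-aware geometries, and operations such as spatial subsetting and masking directly on the geometry dimension.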

The tutorial will be given simultaneously in Python and R by the authors of the aforementioned software packages. Participants are free to choose their preferred language.

Speakers:


  • Martin Fleischmann - Charles University, CZ
  • Lorena Abad - University of Salzburg, AT

Sunday 22 June 14:00 - 15:20 (Room 0.11/0.12)

Tutorial: D.04.13 TUTORIAL - Accelerating insights: New Google Earth Engine data products and capabilities for sustainability applications

Learn about Google Earth Engine's latest data products and capabilities, designed to streamline earth observation data analysis for sustainability, conservation, and business applications.

• We’ll walk through interactive demos, such as sustainable sourcing with new forest data partnership commodity datasets and methane emissions monitoring and reduction.

• We'll highlight an innovative new Earth Engine dataset that leverages deep learning from multi-sensor, multi-temporal inputs to enhance efficiency and accuracy of classification and regression analyses.

• We'll demonstrate new integrations between Earth Engine and BigQuery to streamline data management and analytics, making it easier for data scientists to leverage earth observation data and insights.

The session will be a combination of short presentations and live demos to provide context and practical guidance. All data, notebooks, and apps will be made available to you to work with at your convenience. Join us to learn how Google's environmental geospatial tools can help you move more quickly out of the data processing and management phase and onto the task of deriving insights.

Speakers:


  • Valerie Pasquarella - Google
  • Nicholas Clinton - Google

Sunday 22 June 14:00 - 15:20 (Room 1.61/1.62)

Tutorial: D.01.11 TUTORIAL - From basic to advanced: build and run your first workflows with DeltaTwin and SesamEO services

This tutorial introduces DeltaTwin and SesamEO, a suite of services designed to contribute to Digital Twins by running workflows and generating valuable results.

DeltaTwin provides a collaborative workspace for accessing datasets, configuring and running workflows, monitoring their execution and sharing results, while SesamEO provides direct access to explore and retrieve data from different providers, including the DestinE Data Lake (DEDL), the Copernicus Data Space Ecosystem (CDSE) and Eurostat.

The first part explores the web user interface of both services. Attendees will discover the main DeltaTwin functionalities, such as how to browse existing DeltaTwin components, run them and save the generated results as artefacts in their personal gallery. Then, the presentation highlights how to interface with the SesamEO service to browse collections, select products and finally use one as input to a DeltaTwin component.
(~15 minutes)

The second part is more developer-oriented and explains how to create DeltaTwin components using our Python command-line interface (CLI).

To build a DeltaTwin component, bring your own code or model and edit your workflow, then publish it to the service and run it. Attendees will learn the configuration process, and several workflow creations will be shown, from a basic single-step process to more complex scenarios.

An example of how to integrate a machine learning model into your workflow will also be presented. These examples will be mainly based on processing Sentinel-2 products retrieved from SesamEO.
(~55 minutes)

Questions & Answers
(~10 minutes)

Speakers:


  • Christophe Demange - Gael Systems

Sunday 22 June 14:00 - 15:20 (Room 1.34)

Hands-On: D.06.06 HANDS-ON TRAINING – SpaceHPC, ESA’s High Performance Computing facility: how to use it for EO processing

This hands-on training will introduce you to SpaceHPC, ESA’s High Performance Computing facility. It will be an opportunity to use cutting-edge technology (CPU & GPU) and to get familiar with interacting with an HPC system. By the end of this session, you’ll know more about HPC, in particular parallel computing and deep learning.

Introduction to SpaceHPC [10-minute presentation]
- General introduction to HPCs and why SpaceHPC
- For whom?
- How to access?
- What computing power is available?
- Explanation of the scheduler and jobs

Connecting to SpaceHPC and running a job [10-minute hands-on]
- List available computing partitions
- Check the current level of utilization of the HPC and the free resources
- Run a first job: something simple, and check its output

CPU use case [40 minutes]
Running a heavy satellite image processing task (pan-sharpening, textures, feature extraction, classification…)

First, all participants will run the same processing:
1) Processing on one single CPU core
Then, each participant will be tasked to run the processing with a specific computing power:
2) Processing on X CPU cores, with X varying depending on the participant
3) Measure the speedup of 2) vs 1)

Finally, we will compile the results of all participants in an interactive process: each participant will report the measured speedup for their X number of CPU cores and we will add it to a shared chart.
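The speedup measurement in steps 1) to 3) can be rehearsed on any multi-core machine with Python's multiprocessing module; the workload below is a toy stand-in for the satellite image processing used in the session, and all names are illustrative:

```python
import math
import time
from multiprocessing import Pool

def work(n):
    # CPU-bound stand-in for processing one image tile
    # (the real session uses pan-sharpening, texture and feature extraction).
    return sum(math.sqrt(i) for i in range(n))

def timed(n_workers, tiles):
    # Process all tiles with a pool of n_workers and return the wall time.
    start = time.perf_counter()
    with Pool(n_workers) as pool:
        pool.map(work, tiles)
    return time.perf_counter() - start

if __name__ == "__main__":
    tiles = [200_000] * 8                  # 8 independent "tiles"
    t1 = timed(1, tiles)                   # step 1: single worker
    t4 = timed(4, tiles)                   # step 2: X = 4 workers
    print(f"speedup with 4 workers: {t1 / t4:.2f}x")   # step 3
```

On an HPC node the pool would be replaced by scheduler-allocated cores, but the speedup-versus-X chart built in the session follows the same measurement pattern.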

GPU use case [30 minutes]
For the GPU processing, we’ll present some deep learning techniques based on PyTorch. The use case the participants will work on is a neural network (CNN or transformer).

We’ll have a deeper look at some topics related to deep learning:
- Making the best use of the capabilities of SpaceHPC by optimizing some parameters (batch size, learning rate, data I/O…)
- Best practices: saving (checkpointing) the state of the training after each epoch…
- Live visualization of the accuracy during training (e.g. TensorBoard)
- Trying several neural network architectures

We’ll also check some aspects more closely linked to the HPC:
- Difference between nodes with 8 GPUs and nodes with 4 GPUs
- Difference between different libraries

Speakers:


  • Nicolas Narcon - ESA
  • Sean Quin - ESA
  • Neva Besker - ESA

Sunday 22 June 14:00 - 15:20 (Room 0.14)

Hands-On: C.06.13 HANDS-ON TRAINING - SAR Calibration Toolbox (SCT): an open-source multi-mission tool for quality analysis of SAR data

The SAR Calibration Toolbox (SCT) has been developed in the framework of the ESA funded EDAP+ and SAR MPC projects, with the goal of making it an open-source tool for multi-mission and general-purpose SAR data quality assessment.
The tool was released to the public in July 2024 and is available at the following GitHub page:
https://github.com/aresys-srl/sct
The git repository includes full on-line documentation guiding the users through the installation and the usage of SCT.
The SAR mission L1 products currently supported include Sentinel-1, ICEYE, SAOCOM, NovaSAR and EOS-04. Support for the heritage ERS and ASAR missions is currently under development, and new missions will be supported in the future.
The SCT tool implements the following analyses:
• Point targets analysis: geometric resolution, IRF parameters assessment, absolute radiometric calibration and geolocation accuracy
• Distributed target analysis: extraction of radiometric profiles for the assessment of relative calibration over Rain Forest and of thermal noise level (NESZ) over low back-scatter areas
• Interferometric data analysis: assessment of the interferometric coherence from an input interferometric product or from two co-registered SLC products
The session will provide an overview of the features and capabilities of the tool. During the hands-on training, the users will be able to install SCT on their devices and to perform sample quality analyses of L1 SAR products to showcase the main functionalities of SCT.

Speaker:


  • Giorgio Parma - aresys

Sunday 22 June 15:30 - 16:50 (Room 0.14)

Hands-On: C.01.23 HANDS-ON TRAINING - OGC API - DGGS and Free & Open-Source Software Discrete Global Grid Abstraction Library (DGGAL)

Hands-on experience assembling a visualization client for OGC API - DGGS (https://docs.ogc.org/DRAFTS/21-038.html) using the new Free & Open-Source Software Discrete Global Grid Abstraction Library (DGGAL: https://dggal.org).

The session will cover:
- A short introduction to OGC APIs
- Retrieving DGGS-quantized raster data in DGGS-JSON and GeoTIFF using OGC API - DGGS
- Retrieving DGGS-quantized vector data in DGGS-JSON-FG and GeoJSON using OGC API - DGGS
- Performing zone queries using OGC API - DGGS
- Resolving global zone identifiers to zone geometry using DGGAL
- Resolving local sub-zone indices within a parent zone to global zone identifiers / geometry using DGGAL
- Identifying zones within view with DGGAL
- Visualizing DGGS-JSON data retrieved from OGC API - DGGS with help from DGGAL

DGGAL will support at minimum the three DGGRS listed in Annex B of OGC API - DGGS:
- GNOSIS Global Grid: a rectangular WGS84 latitude, longitude grid hierarchy corresponding to an OGC 2D Tile Matrix Set (https://docs.ogc.org/is/17-083r4/17-083r4.html#toc58) utilizing variable widths to limit area variance in polar regions
- ISEA9R: An equal-area rhombic grid hierarchy in the Icosahedral Snyder Equal Area (ISEA) projection with a refinement ratio of 9, axis-aligned in a rotated, sheared and scaled Cartesian space
- ISEA3H: An equal-area hexagonal grid hierarchy in the Icosahedral Snyder Equal Area (ISEA) projection with a refinement ratio of 3, where 12 zones at any refinement level appear as "pentagons" due to interruptions in the projection

Support for additional DGGRSs will be added to DGGAL over time.

Participants are expected to bring their own laptop to follow along with the exercises.

One or more demonstration end-points will be provided with sample datasets.

DGGAL is a library written in the eC programming language (https://ec-lang.org/) with bindings available for C, C++ and Python.
The exercises and demonstration will be using the Python bindings.
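To give a feel for the parent-zone and sub-zone relationships the session covers, here is a toy rectangular DGGRS with refinement ratio 4, written in plain Python. It only illustrates the concept of resolving local sub-zone indices to global identifiers; it is not DGGAL's API, and the identifier format is made up:

```python
def zone_id(level, row, col):
    # Global identifier in a toy rectangular DGGRS where every zone
    # splits into 2 x 2 children (refinement ratio 4).
    return f"L{level}-{row}-{col}"

def subzones(level, row, col, depth=1):
    # Resolve the local sub-zone indices within a parent zone to
    # global zone identifiers, depth refinement levels further down.
    n = 2 ** depth
    return [zone_id(level + depth, row * n + r, col * n + c)
            for r in range(n) for c in range(n)]

print(subzones(0, 0, 0))   # the four children of the level-0 zone (0, 0)
```

Real DGGRSs such as ISEA9R and ISEA3H use equal-area projections and non-rectangular zones, but the same parent-to-sub-zone index resolution underlies DGGS-quantized data exchange.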

Speakers:


  • Jérôme Jacovella-St-Louis - Ecere Corporation
  • Dr Samantha Lavender - Pixalytics


Sunday 22 June 15:30 - 16:50 (Room 1.14)

Hands-On: F.01.13 HANDS-ON TRAINING - Crafting Interactive Stories with DEA: From Data to Narrative

This hands-on training will guide participants through the creation of impactful stories using DEA, a Data Visualization and Interactive Storytelling Service. DEA supports users in presenting environmental and climate information in a clear and engaging way.

The session will introduce DEA, its role within Earth Observation and Climate Change initiatives, and the portfolio of ready-to-use datasets it offers, including climate projections, reanalysis datasets like ERA5, and real-time forecasts data from ECMWF and Copernicus services.

Participants will also learn how to enrich their stories by adding external content from research, local studies, or custom analyses. The presenter will demonstrate how to prepare and process external content — such as maps, charts, or videos generated from satellite-based data or socioeconomic statistics — using complementary tools and how to integrate these elements into DEA to enhance the final narrative.

The core of the training will focus on hands-on story creation, where participants will practice step-by-step:
• Selecting relevant datasets and defining the narrative angle.
• Creating maps, graphs, and media-rich visualisations.
• Combining data-driven content with textual annotations.

By the end of the session, attendees will understand how DEA can be used to present scientific results, support decision-making processes, show effects of extreme events and explain environmental challenges at different scales, from global strategies to local adaptation plans. They will leave with practical skills to use the DEA service and turn their own data and findings into compelling, interactive stories.

Speakers:


  • Arturo Montieri
  • Cristina Arcari
  • Monica Rossetti
Add to Google Calendar

Sunday 22 June 15:30 - 16:50 (Room 1.85/1.86)

Tutorial: D.03.13 TUTORIAL - Code Once, Reuse and Scale Seamlessly: Build EO Workflows using openEO in the Copernicus Data Space Ecosystem

In an era of unprecedented availability of Earth Observation (EO) data, the Copernicus Data Space Ecosystem (CDSE) plays a key role in bridging the gap between data accessibility and actionable insights. Despite the availability of freely accessible satellite data, widespread adoption of EO applications remains limited due to challenges in extracting meaningful information. Many EO-based projects struggle with non-repeatable, non-reusable workflows, mainly due to the lack of standardised, scalable solutions.

CDSE tackles these barriers by adopting common standards and patterns, most notably through openEO. This open-source solution is a community-driven standard that simplifies remote sensing data access, processing, and analysis by offering a unified platform. It empowers developers, researchers, and data scientists to use cloud-based resources and distributed computing environments to tackle complex geospatial challenges. Adhering to the FAIR principles (Findable, Accessible, Interoperable, and Reusable), it supports the global sharing and reuse of algorithms, enhancing collaboration and scalability.

Furthermore, by promoting the development of reusable, scalable, and shareable workflows, openEO enhances the efficiency and reproducibility of the EO workflow. Its feature-rich capabilities have also been used and validated in large-scale operational projects such as ESA WorldCereal and the JRC Copernicus Global Land Cover and Tropical Forestry Mapping and Monitoring Service (LCFM), which rely on its robust and reliable infrastructure.

Join us for a detailed tutorial to explore openEO's capabilities for developing scalable and reusable workflows within the Copernicus Data Space Ecosystem. Participants will learn how to design a reusable algorithm that is scalable for varying remote sensing applications. These algorithms can be shared among the EO communities as user-defined processes (UDPs) or openEO services in the platform offered by the ecosystem.

Speakers:


  • Pratichhya Sharma - VITO
  • Victor Verhaert - VITO
Add to Google Calendar

Sunday 22 June 15:30 - 16:50 (Room 1.31/1.32)

Tutorial: D.02.18 TUTORIAL - Mastering EOTDL: A Tutorial on crafting Training Datasets and developing Machine Learning Models

In this tutorial session, participants will dive deep into the world of machine learning in Earth observation. Designed for both beginners and seasoned practitioners, this tutorial will guide you through the comprehensive workflow of using the Earth Observation Training Data Lab (EOTDL) to build, manage, and deploy high-quality training datasets and machine learning models.

Throughout the session, you will begin with an introduction to the fundamentals of EOTDL, exploring its datasets, models, and the different accessibility layers. We will then move into a detailed walkthrough of EOTDL’s capabilities, where you’ll learn how to efficiently ingest raw satellite data and transform it into structured, usable datasets. Emphasis will be placed on practical techniques for data curation, including the utilization of STAC metadata standards, ensuring your datasets are both discoverable and interoperable.

Next, the session will focus on model development, showcasing the process of training and validating machine learning models using curated datasets, including feature engineering. Real-world examples and case studies will be presented to illustrate how EOTDL can be leveraged to solve complex problems in fields such as environmental monitoring, urban planning, and disaster management.

By the end of the tutorial, you will have gained valuable insights into the complete data pipeline—from dataset creation to model deployment—and the skills necessary to apply these techniques in your own projects. Join us to unlock the potential of Earth observation data and drive innovation in your machine learning endeavors.

Speakers:


  • Juan B. Pedro Costa - CTO@Earthpulse, Technical Lead of EOTDL
Add to Google Calendar

Sunday 22 June 15:30 - 16:50 (Room 0.49/0.50)

Tutorial: A.08.16 TUTORIAL - ALTICAP: a global satellite altimetry sea level product for coastal applications

The ALTimetry Innovative Coastal Approach Product (ALTICAP) is a new satellite altimetry sea level product dedicated to coastal and regional applications. It currently contains five years of collocated high-resolution (20 Hz) Jason-3 altimeter along-track coastal variables (sea level anomaly, significant wave height and wind) in a global strip from 0 to 500 km from the coast. This experimental product builds on the most recent algorithms to ensure sea level reliability in the coastal strip and beyond, with the objective of being extended into the past (Jason-2) and up to recent years (Sentinel-6). It has been developed with the goal of providing simple and easy-to-use files with a resolution consistent with the observable physical signals. It is developed by the coastal altimetry team from LEGOS/CTOH, CLS, Noveltis and CNES, and distributed on the AVISO+ website (DOI: 10.24400/527896/a01-2023.020).

During this tutorial session, we will briefly introduce the challenges of coastal satellite altimetry and present ALTICAP, as well as some tools to handle the data, based on Jupyter notebooks in Python. Attendees will learn how to read the two different ALTICAP dataset formats and how to plot the data for various types of figures (maps, time series, Hovmöller diagrams…). The training will also explore practical applications such as the computation of geostrophic currents and the comparison with other types of observations.
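As a taste of the geostrophic-current exercise: the cross-track geostrophic velocity follows from the along-track sea level anomaly (SLA) slope as u = -(g/f) d(SLA)/ds. A minimal NumPy sketch with a synthetic SLA profile (the spacing, latitude and amplitude below are invented for illustration, not real ALTICAP data):

```python
import numpy as np

g = 9.81                    # gravity (m s^-2)
omega = 7.2921e-5           # Earth rotation rate (rad s^-1)
lat = 35.0                  # latitude of the track segment (deg)
f = 2 * omega * np.sin(np.radians(lat))     # Coriolis parameter

ds = 300.0                                  # along-track spacing (m), ~20 Hz
s = np.arange(0, 60_000, ds)                # 60 km of track
sla = 0.1 * np.sin(2 * np.pi * s / 30_000)  # synthetic SLA profile (m)

# Cross-track geostrophic velocity from the along-track SLA slope:
# u = -(g / f) * d(SLA)/ds
u = -(g / f) * np.gradient(sla, ds)
print(f"max |u| = {np.max(np.abs(u)):.2f} m/s")
```

Real ALTICAP files would supply `s` and `sla` from the along-track 20 Hz records instead of the synthetic sine wave used here.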

Speakers:


  • Mathilde Cancet - CNRS/LEGOS
  • Léna Tolu - CNRS/LEGOS
  • Alexandre Homerin - NOVELTIS
  • Fabien Leger - CNRS/LEGOS
Add to Google Calendar

Sunday 22 June 15:30 - 16:50 (Room 1.61/1.62)

Tutorial: C.02.21 TUTORIAL - Visualization and analysis of Imaging Spectroscopy data with the EnMAP-Box

The EnMAP-Box is software designed to facilitate the visualization, processing and analysis of imaging spectroscopy data. Developed specifically to support the German Environmental Mapping and Analysis Program (EnMAP), the EnMAP-Box now offers a unique variety of ways to efficiently display and analyze remote sensing data from hyper- and multispectral missions such as EnMAP, EMIT, PRISMA, CHIME, or the Sentinel-2 and Landsat missions.

Developed as a Python plugin for the QGIS geoinformation system, the EnMAP-Box is part of a well-established, platform-independent, free-and-open-source software ecosystem and can be easily integrated into existing workflows.

In our tutorial, we will guide you through the essential functionalities of the EnMAP-Box, present its latest features and give an outlook on further developments.

Topics:
- Installation
- Introduction to EnMAP-Box GUI
- Raster import and metadata handling
- Visualization of hyper- and multispectral raster data, spatial and spectral linking
- Presentation of specific renderers for an optimized visualization of raster data and raster analysis results, e.g., to visualize class-fractions and probability layers
- How to run EnMAP-Box processing algorithms from QGIS, Python or CLI; how to create and run processing workflows using the QGIS Model Builder
- Spectral libraries: import spectral profiles from field campaigns; label spectral profiles with arbitrary attributes; collect image endmembers; modify profiles in QGIS field calculator
- SpecDeepMap: a deep learning-based semantic segmentation application; overview of functionalities and algorithms; how to finetune a pre-trained ResNet18 backbone on Sentinel-2 TOA imagery, utilizing European Union Cropmap labels; how to use a finetuned model to generate continuous mapping predictions

At the request of the participants, selected topics can be discussed in more detail. Questions and requests can be sent in advance to enmapbox@enmap.org.

Docs: https://enmap-box.readthedocs.io
Code: https://github.com/EnMAP-Box/enmap-box
Publication: Jakimow, Janz, Thiel, Okujeni, Hostert, van der Linden, 2023, EnMAP-Box: Imaging spectroscopy in QGIS, SoftwareX, vol. 23, doi: 10.1016/j.softx.2023.101507.

Speakers:


  • Benjamin Jakimow - Humboldt-Universität zu Berlin, Geography Department
  • Andreas Janz - Humboldt-Universität zu Berlin, Geography Department
  • Leon-Friedrich Thomas - University of Helsinki, Department of Agricultural Sciences
Add to Google Calendar

Sunday 22 June 15:30 - 16:50 (Room 0.11/0.12)

Tutorial: D.02.20 TUTORIAL - EVE: A Comprehensive Suite of LLMs and Data for Earth Observation and Earth Sciences

LLMs have proven to be effective as general-purpose tools to aid in a variety of tasks. However, when targeting specific domains, LLMs trained on general-domain data require domain-specific knowledge to achieve state-of-the-art results, particularly in technical and scientific disciplines like Earth Observation (EO). This can be achieved by developing domain-specific LLMs, either by training from scratch on vast amounts of domain-specific data [1, 2] or by instruction fine-tuning general-domain LLMs [3, 4, 5].
Inspired by this trend, we develop Earth Virtual Expert (EVE), a suite of LLMs, training and benchmarking data, and strategies for Earth Observation and related Earth Sciences. EVE is created by further pre-training an open-source general-domain LLM on billions of tokens from curated, high-quality scientific EO data sources. We then fine-tune instructed models with our own created datasets and authentic preference data. Finally, we integrate the chat models with an external curated database for Retrieval-Augmented Generation (RAG).
EVE, the resulting model, is designed to be a helpful assistant within EO, catering to a wide audience of users, from scientific specialists to the general public interested in any discipline related to EO. The target use cases include support for bibliographic exploration, assisting and informing policy decision-making, and making EO more approachable to the non-specialized public.

Our contributions include:
1. Domain-Specific Models: domain-adapted models, pre-trained on billions of EO-specific tokens and fine-tuned for chat instruction-based interaction in EO and related Earth Sciences.
2. Benchmarking Datasets: datasets for EO instruction adherence, alignment, and hallucination mitigation, enabling robust validation and iteration of model performance.
3. Training Data:
i. A curated corpus containing billions of cleaned and processed tokens specific to EO.
ii. Instruction datasets designed for fine-tuning models on EO downstream tasks.
iii. Authentic preference/alignment data.
4. Retrieval-Augmented Generation (RAG) System: A curated RAG database of EO-relevant documents, integrated with the chat models to facilitate accurate and contextually grounded responses.
5. Hallucination Mitigation Strategy: A fact-checking method to suppress factual errors generated by the RAG system.
6. Open-Source Codebase: The supporting code for data processing, model finetuning, and deployment of the RAG system, to ensure reproducibility and usability.

The adapted and instructed models, corresponding datasets and benchmarks will be released as open source contributions to the Earth Observation and Earth Sciences through public repositories.
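To make the retrieval step of a RAG system (contribution 4) concrete, here is a toy cosine-similarity retrieval sketch in NumPy. It illustrates the general principle only; the embedding dimensions, document count and random vectors are invented, and this is not EVE's actual embedding model or database.

```python
import numpy as np

rng = np.random.default_rng(7)
doc_embeddings = rng.normal(size=(100, 64))          # pretend document vectors
query = doc_embeddings[42] + rng.normal(0, 0.1, 64)  # query "about" doc 42

# Cosine similarity between the query and every document vector.
norms = np.linalg.norm(doc_embeddings, axis=1) * np.linalg.norm(query)
scores = doc_embeddings @ query / norms

# Top-3 most similar documents would be passed to the LLM as context.
top3 = np.argsort(scores)[::-1][:3]
print("retrieved docs:", top3)
```

A production system replaces the random vectors with learned text embeddings and the brute-force dot product with an indexed vector store, but the ranking logic is the same.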

Speakers:


  • Antonio Lopez - Pi School
  • Marcello Politi - Pi School
Add to Google Calendar

Sunday 22 June 15:30 - 16:50 (Room 1.34)

Hands-On: D.03.09 HANDS-ON TRAINING - The CoMet toolkit – Uncertainties made easy

The CoMet Toolkit (www.comet-toolkit.org), which stands for Community Metrology Toolkit, is an open-source software project to develop Python tools for the handling of error-covariance information in the analysis of measurement data. It was developed to handle uncertainty and error-correlation information in EO data and propagate these through EO processing chains, but it is generally applicable to any Python dataset with uncertainties and error-correlation, and can propagate uncertainties through any measurement function that can be written as a Python function.

The CoMet toolkit consists of a set of linked Python packages, all publicly available and installable through pip. The punpy tool implements metrologically robust uncertainty propagation, including the handling of complex error-covariance information. This makes it possible to calculate the total output uncertainty of a measurement function (i.e. a processing chain) from the uncertainties on its inputs, and to study the effects of the various uncertainty contributions. The obsarray tool stores uncertainty and error-correlation information in a self-described dataset (dubbed a "digital effects table") using standardised metadata. These digital effects tables can also be passed to punpy, which uses the information directly, so that users typically never have to interact with the complex error-correlation information themselves. Using the CoMet toolkit, uncertainty and error-correlation information can be written, read, and processed in a way that is user-friendly, machine-readable and traceable.

Within this training session, we will provide some brief theoretical background, introduce you to the various CoMet tools, and give hands-on experience of how to use these tools to propagate uncertainties through an example processing chain. We will use Jupyter notebooks, hosted on Google Colab, to run the training session (for a preview, see https://www.comet-toolkit.org/examples/). We will run through an example of calibrating a satellite sensor, and will show how the tools can be used for this purpose. We will calculate how different uncertainty components (e.g. noise with random error correlation, uncertainty on gain with systematic error correlation, …) contribute to the overall uncertainty budget, and show why it is relevant to take error-correlation information into account. If time allows, we will also help you set up the use of CoMet for your own example. Please bring a (simple) example use-case in Python through which you would like to propagate uncertainties (e.g. one step of your own processing chain).
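To give a feel for what the toolbox automates, here is a bare-bones Monte Carlo propagation of input uncertainties through a toy measurement function, written with NumPy only. It illustrates the principle that punpy implements, not punpy's API; the function and all values below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

def measurement_function(gain, counts, dark):
    """Toy calibration: radiance = gain * (counts - dark)."""
    return gain * (counts - dark)

# Input values and their standard uncertainties (invented example numbers).
gain, u_gain = 0.02, 0.001      # calibration gain
counts, u_counts = 500.0, 5.0   # detector counts (noise-like component)
dark, u_dark = 40.0, 2.0        # dark signal

# Draw input samples, push them through the measurement function,
# and read the output uncertainty off the spread of the results.
n = 100_000
samples = measurement_function(
    rng.normal(gain, u_gain, n),
    rng.normal(counts, u_counts, n),
    rng.normal(dark, u_dark, n),
)
print(f"radiance = {samples.mean():.3f} +/- {samples.std():.3f}")
```

punpy adds what this sketch omits: correlated (systematic) error structures between inputs, error-covariance matrices on the outputs, and direct ingestion of obsarray digital effects tables.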

Speakers:


  • Pieter De Vis - National Physical Laboratory
  • Sam Hunt - National Physical Laboratory
  • Astrid Zimmermann - National Physical Laboratory
  • Maddie Stedman - National Physical Laboratory

Add to Google Calendar

Sunday 22 June 15:30 - 16:50 (Room 1.15/1.16)

Hands-On: F.01.12 HANDS-ON TRAINING - Communicating Climate Change with the ESA Climate Change Initiative’s Essential Climate Variables

The Global Climate Observing System (GCOS) has defined 55 Essential Climate Variables (ECVs) that critically contribute to the characterisation of the Earth's climate. Of these 55 ECVs, around two thirds can be derived using satellite data, providing users with near-global coverage over decadal time scales.

Since its conception in 2010, ESA’s Climate Change Initiative (CCI) has exploited the full satellite record to produce long-term climate data series for 27 ECVs, with some records now spanning over four decades. This wealth of data is invaluable for illustrating the causes and impacts of climate change at global and regional scales.

In this interactive training session, participants will be given a hands-on opportunity to use and explore the CCI data archive. This session will showcase how and where the Earth’s climate is changing and how these data can be used in research and development and to raise public awareness of climate change.

Participants will discover how to access the ECVs using the ESA CCI Open Data Portal and explore various relevant uses of CCI data for climate change applications (e.g., using CCI’s ECVs to illustrate key impacts of climate change, such as rising sea levels or increasing trends in the frequency and intensity of extreme weather events). Training material will be provided to participants, with exercises accessible to different levels of programming proficiency, from complete beginner to more advanced levels. Additionally, the session will introduce the ESA CCI Toolbox, a powerful Python package that simplifies access to and operations with ESA CCI’s ECVs.
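As one example of a beginner-level exercise of the kind covered by the training material, a linear trend can be fitted to an annual ECV series with a few lines of NumPy. The series below is synthetic (a made-up sea level anomaly with a built-in ~3.3 mm/yr rise), not CCI data:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1993, 2024)
# Synthetic annual-mean sea level anomaly (mm): linear rise plus noise.
sla_mm = 3.3 * (years - years[0]) + rng.normal(0, 4, years.size)

# Least-squares linear fit: slope is the trend in mm per year.
slope_mm_per_yr, intercept = np.polyfit(years, sla_mm, 1)
print(f"trend: {slope_mm_per_yr:.2f} mm/yr")
```

With real CCI sea level data loaded via the ESA CCI Toolbox, the same fit applied to the observed record is a standard first exercise.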

Speakers:


  • Amina Maroini - Research Associate, Imperative Space
  • Dr. Lisa Beck - Deutscher Wetterdienst
Add to Google Calendar

Sunday 22 June 15:30 - 16:50 (Hall L3)

Tutorial: D.03.17 TUTORIAL - Cloud-Native Earth Observation Processing with SNAP and Copernicus Data Space Ecosystem CDSE

This tutorial will provide participants with practical skills for deploying ESA’s SNAP in cloud environments, leveraging containerization, Python integration, and the Copernicus Data Space Ecosystem (CDSE). The 90-minute session combines conceptual foundations, live demonstrations, and guided exercises to enable operational EO data analysis directly within cloud infrastructure.

1. Introduction to SNAP and CDSE (15 minutes)
• SNAP Overview: Highlight new features, including enhanced Python support via snappy and SNAPISTA, containerized deployment options, and hyperspectral data support.
• CDSE Architecture: Explore the CDSE’s data catalog, processing tools, and Jupyter environment, emphasizing its role in reducing data transfer costs through in-situ analysis.

2. Containerized SNAP Deployment (15 minutes)
• Container Fundamentals: Contrast Docker containers with SNAP’s snap packaging, addressing isolation challenges (e.g., subprocess confinement) and scalability.
• Cloud Deployment: Walk through launching pre-configured SNAP containers on CDSE, including resource allocation and persistent storage setup.

3. Python-Driven Processing with SNAPISTA and Snappy (25 minutes)
• Snappy and SNAPISTA: Understand the low-level Java-Python bridge (snappy) and SNAPISTA’s high-level API for graph generation, including performance trade-offs.
• Operational Workflows: Build a Python script using SNAPISTA to batch-process Sentinel data on CDSE, incorporating cloud-optimized I/O and error handling.
• Integration with CDSE APIs: Retrieve CDSE catalog metadata, subset spatial/temporal ranges, and pipe results directly into SNAP operators without local downloads.

4. Jupyter-Based Analytics and Collaboration (20 minutes)
• Jupyter Lab on CDSE: Navigate the pre-installed environment, accessing SNAP kernels, GPU resources, and shared datasets.
• Reproducible Workflows: Convert SNAP Graph Processing Tool (GPT) XML workflows into Jupyter notebooks, leveraging snapista for modular code generation.
• Collaboration Features: Demonstrate version control, real-time co-editing, and result sharing via CDSE’s portal.

5. Best Practices and Q&A (15 minutes)
• Q&A: Address participant challenges in adapting legacy SNAP workflows to cloud environments.

Learning Outcomes: Participants will gain proficiency in deploying SNAP on CDSE, designing Python-driven EO pipelines, and executing scalable analyses without data migration. The tutorial bridges ESA’s desktop-oriented SNAP tradition with modern cloud paradigms, empowering users to operationalize workflows in alignment with CDSE’s roadmap.
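Section 4 mentions converting SNAP Graph Processing Tool (GPT) XML workflows into notebooks. For orientation, a minimal GPT graph of the kind being converted looks like the following; the file names and the resampling step are placeholder examples, not tutorial material:

```xml
<graph id="Graph">
  <version>1.0</version>
  <node id="Read">
    <operator>Read</operator>
    <sources/>
    <parameters>
      <file>S2_input.zip</file>
    </parameters>
  </node>
  <node id="Resample">
    <operator>Resample</operator>
    <sources>
      <sourceProduct refid="Read"/>
    </sources>
    <parameters>
      <targetResolution>10</targetResolution>
    </parameters>
  </node>
  <node id="Write">
    <operator>Write</operator>
    <sources>
      <sourceProduct refid="Resample"/>
    </sources>
    <parameters>
      <file>output.dim</file>
      <formatName>BEAM-DIMAP</formatName>
    </parameters>
  </node>
</graph>
```

Such a graph is executed with the `gpt` command-line tool; SNAPISTA builds the same node-and-operator structure programmatically from Python, which is what makes the notebook conversion mechanical.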

Speaker


  • Pontus Lurcock - Brockmann
Add to Google Calendar

Sunday 22 June 15:30 - 16:50 (Room 0.94/0.95)

Tutorial: A.03.09 TUTORIAL - EO AFRICA – Continental Demonstrator LUISA project: Human Appropriation of Net Primary Production Tutorial

Human activities significantly impact land productivity and carbon fluxes, primarily due to the increasing intensity of land use for food, feed, and raw material production in agricultural and forestry systems. African landscapes are undergoing rapid transformation driven by land-use intensification. Consequently, building the resilience of smallholder farmers and pastoralists to global population growth and the subsequent pressures on land is of critical importance.
This is the long-term goal of the Land Use Intensity’s Potential, Vulnerability, and Resilience for Sustainable Agriculture in Africa (LUISA) project, funded by the European Space Agency. The project focuses on the Human Appropriation of Net Primary Productivity (HANPP), an indicator that quantifies the proportion of Net Primary Productivity (NPP) consumed through human land use. HANPP provides key insights into the drivers and consequences of land-use intensification on ecosystem productivity.
LUISA has two primary objectives:
1. Develop a remote sensing-driven HANPP monitoring framework for key land cover types—cropland, forest, rangeland, and urban areas—within case study agroecosystems in Ethiopia, Mozambique, Senegal, and Uganda.
2. Scale up HANPP estimates across the African continent over extended spatial and temporal scales.
To enhance the accuracy of NPP estimates, the project will employ data assimilation techniques that integrate in situ and remote sensing observations to optimize parameters in JULES. HANPP is derived by comparing the NPP of actual vegetation—remaining after harvest and land-use conversion—with the NPP of potential natural vegetation, which represents the productivity of undisturbed ecosystems under current climatic conditions.
The project’s outputs, including results and intermediary products, will be made accessible through a tailored platform. Combined with continuous user engagement, this platform will facilitate the adoption of the HANPP monitoring framework. Ultimately, LUISA aims to support sustainable agricultural development while promoting nature conservation across African landscapes.
Join our tutorial where we will introduce you to the HANPP concept and platform.
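The comparison described above reduces to simple arithmetic: HANPP = NPP_pot - (NPP_act - harvest), where the term in parentheses is the NPP remaining in the ecosystem after human use. A toy numerical sketch (the values are invented for illustration, not LUISA results):

```python
# Toy HANPP calculation following the definition above.
# All values are illustrative (gC/m2/yr), not project outputs.
npp_potential = 900.0   # NPP of potential natural vegetation
npp_actual = 700.0      # NPP of actual vegetation on the used land
harvest = 250.0         # NPP harvested / removed through land use

npp_remaining = npp_actual - harvest   # NPP left in the ecosystem
hanpp = npp_potential - npp_remaining  # human appropriation of NPP
hanpp_pct = 100 * hanpp / npp_potential
print(f"HANPP = {hanpp:.0f} gC/m2/yr ({hanpp_pct:.0f}% of potential NPP)")
```

In LUISA, the two NPP terms come from remote-sensing-constrained model estimates (with JULES supplying potential vegetation productivity) rather than fixed numbers.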

Speakers:


  • Michael T. Marshall - Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente
  • Sarah Matej - Institute of Social Ecology (SEC) University of Natural Resources and Life Sciences, Vienna
  • Wai-Tim Ng - VITO NV
  • Luboš Kučera - Gisat s.r.o.
Add to Google Calendar

Sunday 22 June 15:30 - 16:50 (Room 0.96/0.97)

Hands-On: B.01.06 HANDS-ON TRAINING - Unlocking Earth Observation Analytics: Hands-on Training with the Global Development Assistance (GDA) Analytical Processing Platform (APP)

This hands-on training session will offer participants a step-by-step, practical experience using the GDA APP, designed to simplify the use of EO in development aid activities and to accommodate both technical and non-technical users. The session will guide attendees through the platform’s main user interfaces and core functionalities, demonstrating how to process, analyse, and visualise Earth Observation (EO) data for real-world decision-making.

The training will begin with an introduction to the platform’s main interfaces and available tools, followed by interactive exercises where participants will explore practical use cases utilising the available GDA APP EO capabilities. The workshop aims to:
- Introduce users to the GDA APP, with a specific focus on the Capability Widgets and Explore interface.
- Demonstrate key capabilities of the GDA APP and raise awareness of their potential EO applications.
- Support capacity building by equipping attendees with the knowledge to integrate EO data into their daily workflows and decision-making processes.
- Encourage active engagement by allowing participants to explore EO capabilities firsthand and suggest improvements.
- Gather feedback on platform usability, front-end design, available tools, and ideas for future development.

The session will be highly interactive, promoting hands-on exploration while also collecting valuable feedback from participants on their user experience. This feedback will directly contribute to refining the platform, guiding future enhancements, and ensuring the GDA APP continues to meet the needs of its users. The session will also briefly introduce how new EO value-adding applications can be integrated into the platform.

Participants will leave the session with an in-depth understanding of GDA APP and the tools available to support their work, while also having a direct influence on shaping the platform’s ongoing development.

We encourage all LPS participants to register and create an account on the GDA APP (https://app-gda.esa.int/) to fully explore its features. We especially recommend that training session attendees complete their registration in advance to familiarize themselves with the platform and make the most of the session.

Read more for additional details and updates:
https://app-gda.esa.int/user-guide
https://gda.esa.int/cross-cutting-area/app/

Speakers:


  • Hanna Koloszyc - GeoVille
  • Alessia Cattozzo - MEEO
  • Judith Hernandez - EarthPulse

Supporting team:


  • Simone Mantovani - MEEO
  • Fabio Govoni - MEEO
Add to Google Calendar

Sunday 22 June 15:30 - 16:50 (Foyer L3)

Hands-On: D.02.17 HANDS-ON TRAINING - Advanced Artificial Intelligence for Extreme Event Analysis: Hands-on with the AIDE Toolbox

We introduce the Artificial Intelligence for Disentangling Extremes (AIDE) toolbox, which supports anomaly detection, extreme event analysis, and impact assessment in remote sensing and geoscience applications. AIDE integrates advanced machine learning (ML) models and can yield spatiotemporally explicit monitoring maps with probabilistic estimates. The framework covers supervised and unsupervised algorithms, deterministic and probabilistic methods, convolutional and recurrent neural networks (CNNs and RNNs), and methods based on density estimation.

This session is intended for researchers, data scientists, Earth observation specialists, and professionals in climate science, remote sensing, and AI-driven environmental monitoring. Participants should have a basic understanding of machine learning concepts and spatiotemporal data analysis, though no prior experience with the AIDE toolbox is required. Familiarity with Python programming and common data science libraries (e.g., NumPy, Pandas, PyTorch) will be beneficial but not mandatory, as step-by-step guidance will be provided. For the hands-on training, participants must bring their laptops with Python 3.8 or later installed, preferably within a conda or virtual environment. The training will use Jupyter Notebook or any Python IDE (e.g., VS Code, PyCharm) and the AIDE toolbox, with installation instructions and dependencies provided in advance (see https://github.com/IPL-UV/AIDE). A pre-configured dataset and setup guide will be shared two weeks before the session to ensure a smooth experience. Internet access is recommended for package installation and additional resources.
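As a primer on the kind of method covered in the session, here is a generic density-based anomaly-flagging sketch on a one-dimensional series. It illustrates the principle (flagging low-probability observations), not AIDE's actual API; the series and the injected event are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
series = rng.normal(20.0, 1.5, 365)   # synthetic daily measurements
series[180:185] += 10.0               # injected "extreme event"

# Standardise and flag observations far out in the distribution's tails.
mu, sigma = series.mean(), series.std()
z = (series - mu) / sigma
anomalies = np.flatnonzero(np.abs(z) > 3.0)   # 3-sigma exceedances
print("anomalous days:", anomalies)
```

The AIDE toolbox replaces this global Gaussian assumption with learned density estimators and deep networks, and extends the idea from a single series to spatiotemporal maps with probabilistic scores.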

Speakers:


  • Miguel-Ángel Fernández-Torres - Department of Signal Theory and Communications, Universidad Carlos III de Madrid (UC3M), Madrid, Spain
  • Maria Gonzalez-Calabuig - Image Processing Laboratory (IPL), Universitat de València, Valencia, Spain
Add to Google Calendar

Sunday 22 June 17:00 - 18:20 (Room 0.94/0.95)

Tutorial: A.01.15 TUTORIAL - Atmospheric Composition Training at the Living Planet Symposium

The annual ESA/ECMWF/EUMETSAT Atmospheric Composition Training aims to enhance the knowledge and skills of early career scientists in the field of atmospheric composition monitoring and modelling. Building on the heritage of this training (https://atmostraining.info/), this session at the ESA Living Planet Symposium will provide participants with a taste of these annual trainings, through a series of tutorials and demonstrations.

The session will cover the Earth Observation story, from observation to modelling, potentially covering topics such as:
• Exploring atmospheric composition data from state-of-the-art observing systems such as Sentinel-5P TROPOMI.
• Understanding the difference between observation and model output data.
• Creating forecasts of aerosols, atmospheric pollutants and greenhouse gases with atmospheric composition forecast models provided by the Copernicus Atmosphere Monitoring Service (CAMS).
• Analysing events such as dust transport, wildfire and volcanic emissions, and the impact these may have across different regions.
• Developing practical skills in using Python to interact with and plot data from satellites and models.

Participants will gain hands-on experience and practical skills that can be directly applied to their research, with demonstrations of tool sets such as the Atmospheric Virtual Lab (https://atmospherevirtuallab.org/).
Overall, this tutorial session aims to foster collaboration and knowledge exchange among participants, helping them stay at the forefront of atmospheric composition research and contribute to the broader goals of the ESA Living Planet Symposium.
The course is targeted to undergraduate or post-graduate level students, researchers, professionals or anyone interested in furthering their knowledge of atmospheric composition monitoring and modelling and developing their practical skills in data handling. Some basic background in physics, chemistry, mathematics and computing is assumed, and elementary familiarity with Python programming would be beneficial to make the most of the training.

Speakers:


  • Edward Malina - ESA
  • Daniele Gasbarra - ESA
  • Chris Stewart - ECMWF
  • Dominika Leskow-Czyzewska - EUMETSAT
Add to Google Calendar

Sunday 22 June 17:00 - 18:20 (Room 0.96/0.97)

Hands-On: F.04.33 HANDS-ON TRAINING - Monitoring the high seas – enhancing marine protection using transparency and technology

As global momentum builds for the creation and implementation of Marine Protected Areas (MPAs) in the high seas, attention is turning towards how these remote areas will be monitored in practice. The new BBNJ Agreement provides the legal framework for establishing MPAs beyond national jurisdiction, but ensuring effective compliance and enforcement will remain a challenge unless innovative monitoring and compliance tools and approaches are embraced. Advancements in satellite technology, remote vessel monitoring, and data transparency present game-changing opportunities to support area-based management tools (ABMTs) within and beyond national jurisdiction.

This session by Global Fishing Watch will provide hands-on training in Marine Manager, a powerful platform that integrates satellite data, vessel tracking, and analytical tools to enhance marine conservation, monitoring, and enforcement. It complements conference abstracts by IDDRI and BirdLife exploring the potential of satellite technology and vessel-based monitoring for high seas MPAs.

By the end of the session, participants will have:
- Explored Marine Manager and its capabilities for monitoring remote MPAs
- Understood the underlying automated methods used to create vessel-related insights
- Analysed vessel-based data to assess human pressures in areas of interest
- Worked through real-world case studies to apply data-driven insights
- Discussed the practical applications for policy and management strategies

Outline:
- Introduction: Policy context and key monitoring challenges of remote MPAs
- Demonstration: Live walkthrough of Marine Manager’s key features and datasets.
- Hands-on Training: Participants will use Global Fishing Watch’s Marine Manager platform, learn about the available datasets, analyse vessel data, and apply insights to real-world scenarios.
- Facilitated discussion: Open exchange on applications, challenges, and next steps.

This video gives a preview of Marine Manager and its functionalities: https://www.youtube.com/watch?v=-x67cHX5C-Q

Speakers:


  • Paul Tuda - Global Fishing Watch
  • Daniel Kachelriess - High Seas Alliance
  • Claudino Rodrigo - Global Fishing Watch
Add to Google Calendar

Sunday 22 June 17:00 - 18:20 (Room 1.85/1.86)

Tutorial: D.03.11 TUTORIAL - Satellite Image Time Series Analysis on Earth Observation Data Cubes

This tutorial presents an overview of state-of-the-art methods for big Earth observation data analysis using satellite image time series. Topics include: (1) Access to big EO data cloud services; (2) Production of EO data cubes; (3) Combination of optical, SAR and DEM data sets for multi-sensor based analytics; (4) Generation of derived spectral, temporal and textural indices using EO data cubes; (5) Extraction of training samples for data cubes; (6) Quality control of training datasets using self-organised maps; (7) Methods for reducing imbalances in EO training samples; (8) Deep learning algorithms for classification of image time series organized as data cubes; (9) Post-processing of classification results using spatial Bayesian techniques; (10) Segmentation and region-based classification of image time series; (11) Best practices for evaluation of classification maps.

The tutorial is based on the online book "Satellite Image Time Series Analysis on Earth Observation Data Cubes" (https://e-sensing.github.io/sitsbook), which provides working examples of the above-described methods. The book uses the open-source R package sits. The software accesses data on Amazon Web Services, Brazil Data Cube, Copernicus Data Space Ecosystem, Digital Earth Australia, Digital Earth Africa, Microsoft Planetary Computer, NASA Harmonised Landsat-Sentinel, and Swiss Data Cube. It has reached TRL 9 and is being used operationally for large-scale land classification.

The examples to be presented will be based on Copernicus data sets available in CDSE, including Sentinel-1, Sentinel-2 and Copernicus DEM.

Attendees will get an overview of the whole process of land classification using open EO data, and can complement the information provided in the tutorial by reproducing the examples of the online book at their convenience.
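The sits package itself is R-based; purely as a language-neutral taste of topic (4) above (derived temporal indices from data cubes), the standalone Python sketch below computes per-pixel NDVI time series from a toy cube and reduces them to simple temporal metrics of the kind used as classification features. Array shapes and values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy EO data cube: (time, y, x) red and NIR reflectances for a 4x4 tile
# over 12 time steps (synthetic values, for illustration only).
time, y, x = 12, 4, 4
red = rng.uniform(0.02, 0.2, size=(time, y, x))
nir = rng.uniform(0.2, 0.6, size=(time, y, x))

# NDVI time series per pixel, then simple temporal metrics of the kind
# often derived as classification features.
ndvi = (nir - red) / (nir + red)
features = np.stack([
    ndvi.mean(axis=0),                    # temporal mean
    ndvi.std(axis=0),                     # temporal variability
    ndvi.max(axis=0) - ndvi.min(axis=0),  # seasonal amplitude
])
print(features.shape)
```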

Speakers:


  • Gilberto Camara - National Institute for Space Research (INPE), Brazil
  • Rolf Simoes - Open Geo Hub Foundation, Netherlands
Add to Google Calendar

Sunday 22 June 17:00 - 18:20 (Room 1.61/1.62)

Tutorial: A.08.20 TUTORIAL - Satellite data for the UN Ocean Decade: Addressing the 10 "ocean challenges" with marine data from the Copernicus Programme and EUMETSAT

The United Nations Decade of Ocean Science for Sustainable Development, known more colloquially as the UN Ocean Decade, outlines ten challenges that must be addressed to ensure an ocean that is sustainably and equitably managed. Marine Earth observation data play a key role in addressing these challenges, providing operational data streams that contribute to monitoring a broad range of biological and physical processes. EUMETSAT and our Ocean and Sea Ice Satellite Applications Facility (OSI SAF), along with Mercator Ocean International, produce regular case studies showing how our data, produced either under the Copernicus Programme or via our mandatory missions, can be used to support these monitoring activities.
In this tutorial, we will explore some of these case studies, showing practical examples of how and where marine remote sensing can be used to address specific Ocean Decade challenges. Each example will be accompanied by a python-based Jupyter Notebook, which will allow participants to recreate and expand upon the analyses presented. The notebooks will be deployed on the Copernicus WEkEO DIAS JupyterHub, and made available under an open-source license, allowing them to be reused by participants in any future context. Examples will showcase EUMETSAT Sentinel-3 and Sentinel-6 products from the Copernicus marine data stream, those made available by our Ocean and Sea Ice Application Facility (OSI SAF) as well as downstream products from the Copernicus Marine Service (CMEMS). The tutorial will be supported by experts in the various data streams, who will be able to advise on data selection and product suitability across the broader marine portfolio.
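Notebooks of the kind described above typically load a product into xarray and work with labelled coordinates. The sketch below builds a synthetic stand-in for an SST field and computes a regional mean; the variable and coordinate names are illustrative, not the official Sentinel-3 product layout.

```python
import numpy as np
import xarray as xr

# Synthetic stand-in for a gridded SST field; names and values are
# illustrative, not an actual EUMETSAT product structure.
lat = np.linspace(30.0, 45.0, 16)
lon = np.linspace(-20.0, -5.0, 16)
sst = 285.0 + 5.0 * np.cos(np.deg2rad(lat))[:, None] * np.ones((16, 16))

ds = xr.Dataset(
    {"sea_surface_temperature": (("lat", "lon"), sst)},
    coords={"lat": lat, "lon": lon},
)

# Subset a region and compute its mean SST, as one might in a notebook.
box = ds.sel(lat=slice(35, 40), lon=slice(-15, -10))
mean_sst = float(box["sea_surface_temperature"].mean())
print(round(mean_sst, 2))
```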

Point of contact: ben.loveday@external.eumetsat.int

Session details:
This session is designed for early- and mid-career oceanographers and remote sensing scientists who have an interest in expanding their understanding of the uses of EUMETSAT and Copernicus marine data, as well as service providers and application developers focussing on the marine domain. The practical component of the tutorial will use a series of Python-based Jupyter Notebooks, hosted on the Copernicus WEkEO DIAS. A knowledge of Python and using notebooks would be advantageous, but is not strictly necessary.

Speakers:


  • Ben Loveday - EUMETSAT / Innoflair - EUMETSAT Copernicus Marine Training Service Manager
  • Hayley Evers-King - EUMETSAT - Lead Marine Applications Expert
  • Fabrice Messal - Mercator Ocean International - UX and Capacity Development Manager
  • Gwenaël Le Bras - Meteo France - OSI SAF communication and outreach officer
Add to Google Calendar

Sunday 22 June 17:00 - 18:20 (Room 0.49/0.50)

Tutorial: D.01.10 TUTORIAL - Unlocking the Power of Destination Earth: A Guide to Data Lake Services

In this tutorial, you will learn how the Harmonised Data Access service streamlines data retrieval, ensuring seamless access to datasets from multiple sources including satellite imagery, climate models, and in-situ observations. We will then explore EDGE services, designed to bring computing closer to the data, reducing latency and enabling large-scale analytics. EDGE services consist of three core components:

STACK – A powerful environment featuring Jupyter Notebook and DASK, enabling interactive data analysis and distributed computing.
ISLET – An Infrastructure-as-a-Service (IaaS) solution providing scalable and distributed cloud-based computing resources to support intensive computational workloads.
HOOK – A workflow automation service that orchestrates data processing tasks, making it easier to manage complex workflows.
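The STACK component pairs Jupyter notebooks with Dask; as a minimal, self-contained sketch of that chunked, lazily evaluated computation model (the array contents are arbitrary, and this assumes the dask package is installed):

```python
import dask.array as da
import numpy as np

# A chunked array: Dask splits it into 16 blocks that can be processed
# in parallel, exactly the pattern used interactively from a notebook.
field = da.from_array(
    np.arange(1_000_000, dtype="float64").reshape(1000, 1000),
    chunks=(250, 250),
)

# Nothing runs until .compute(); Dask then evaluates the chunk graph.
zonal_mean = field.mean(axis=0)
result = zonal_mean.compute()
print(result.shape)
```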

By the end of this tutorial, you will be equipped to navigate Data Lake Services, efficiently work with the Harmonised Data Access service and leverage EDGE services for advanced analytics. Whether you're a scientist, developer, or policymaker, this guide will help you unlock the full potential of Destination Earth Data Lake.

Let’s get started and turn data into actionable insights for a more sustainable future!

Speaker:


  • Michael Schick - EUMETSAT
Add to Google Calendar

Sunday 22 June 17:00 - 18:20 (Room 1.14)

Hands-On: A.02.13 HANDS-ON TRAINING - Biodiversity Data Cubes for Earth Science: From SQL Queries to Standardized Geospatial Output

Lina M. Estupinan-Suarez, Henrique Pereira, Lissa Breugelmans, Rocio Beatriz Cortes Lobos, Luise Quoss, Emmanuel Oceguera, Duccio Rocchini, Maarten Trekels, Quentin Groom
This 90-minute hands-on session empowers researchers and biosphere analysts to harness data mobilised by the Global Biodiversity Information Facility (GBIF) for advanced biodiversity analysis. Through active, step-by-step exercises, participants will learn how to create species occurrence cubes using SQL queries, calculate key biodiversity indicators, and convert these outcomes into a standardized geospatial format (EBVCubes) for enhanced ecological monitoring.
Session Outline:
1. Creating Species Occurrence Cubes (30 minutes including Q&A): Participants will start by extracting and organizing GBIF species occurrence data into structured data cubes using an SQL query. This segment emphasizes practical exercises, allowing attendees to work with real data and receive one-on-one guidance.
2. Ecological Modeling and Simulated Data Cubes (30 minutes including Q&A):
This part of the session will demonstrate how Virtual Suitability Data cubes can be generated and used in modeling workflows. Participants will explore a data structure that can be useful for analyzing changes in the suitability of multiple species across time and space.
3. Converting to Standard Geospatial Data (30 minutes including Q&A): In the final segment, the outcomes from the previous steps will be transformed into EBVCubes—a standardized geo-spatial data format tailored for biodiversity applications. This ensures that the results are readily applicable for further analysis and decision-making.
Participants will gain hands-on expertise in biodiversity data processing and a deeper understanding of how integrative data facilities can bridge the gap between Earth observation and biodiversity research. This enriched perspective is critical for developing informed conservation strategies and policies in response to the complex challenges posed by the intertwined crises of biodiversity loss and climate change.
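The occurrence-cube idea from step 1 can be pictured offline as counting records per species × grid cell × year. GBIF's SQL download service does this server-side; the toy records and the 1-degree gridding below are illustrative only.

```python
import pandas as pd

# Toy occurrence records (in reality obtained via a GBIF SQL download).
occ = pd.DataFrame({
    "species": ["Quercus robur", "Quercus robur", "Fagus sylvatica",
                "Fagus sylvatica", "Quercus robur"],
    "lat":  [50.1, 50.4, 50.2, 51.7, 50.9],
    "lon":  [4.3, 4.8, 4.1, 5.2, 4.6],
    "year": [2020, 2020, 2020, 2021, 2021],
})

# Assign each record to a 1-degree grid cell, then count occurrences per
# species x cell x year: a minimal "occurrence cube".
occ["cell"] = (occ["lat"].floordiv(1).astype(int).astype(str)
               + "_" + occ["lon"].floordiv(1).astype(int).astype(str))
cube = (occ.groupby(["species", "cell", "year"])
           .size()
           .rename("n_occurrences")
           .reset_index())
print(len(cube))
```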

Speakers:


  • Quentin Groom - Biodiversity Informatics, Meise Botanic Garden
  • Rocio Beatriz Cortes Lobos - University of Bologna
  • Lina M. Estupinan-Suarez - German Centre for Integrative Biodiversity Research (iDiv) Halle-Jena-Leipzig
Add to Google Calendar

Sunday 22 June 17:00 - 18:20 (Room 1.31/1.32)

Tutorial: D.05.07 TUTORIAL - Using Earth Observations within Climate Applications that are Fit for Your Purpose

The Copernicus Climate Change Service (C3S), operated by ECMWF on behalf of the European Commission, provides climate data and information based on scientific research. It offers around 35 catalogue entries derived from Earth Observation (EO), including multiple Climate Data Records (CDRs) accessible via the Copernicus Climate Data Store (CDS). The service is designed to simplify the discovery and access of data while meeting user requirements—whether for monitoring climate change, supporting policy development, or performing environmental studies. Consequently, datasets in the CDS adhere to best practices established internationally (e.g., by the Global Framework for Climate Services of the World Meteorological Organization).
In addition, C3S has developed an Evaluation and Quality Control (EQC) framework, to review technical and scientific aspects of service components by involving experts who assess each dataset’s documentation, usability, and maturity. The outcome is a set of clear quality statements that help users identify and work with the most suitable datasets for their purposes.
The EQC framework goes beyond traditional static reporting by offering dynamic, interactive tools that cater for varied user needs. Following Dee et al. (2024, BAMS), the system organises information into distinct tiers: one focused on detailed documentation (Quality Assurance, implemented as a compliance checklist), another on practical demonstrations of dataset performance (Quality Assessment, available as Jupyter notebooks), and a summary (Fitness for Purpose) that presents an overview of each dataset’s strengths and limitations.
Within this proposed Tutorial activity, EO products will serve as main examples to demonstrate how to access and engage with EQC information. It includes datasets from diverse domains (atmosphere, land, and ocean) and sectoral applications, such as forestry, urban planning, or climate monitoring. Practical tutorial examples, including downloadable Jupyter notebooks, will be presented, serving as both a means of independent verification and a learning tool for best practices in climate data applications.
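The CDS datasets discussed here are retrieved programmatically; a retrieval request follows the pattern sketched below. The dataset name and request keys are illustrative only, since the exact schema comes from each catalogue entry's download form, and the actual download call (commented out) would use the cdsapi package.

```python
# Building a CDS-style retrieval request. The real download would be:
#   import cdsapi
#   cdsapi.Client().retrieve(dataset, request, "target.zip")
# Dataset name and keys below are illustrative; check the CDS catalogue
# entry's "Download data" form for the exact schema.
dataset = "satellite-sea-surface-temperature"
request = {
    "variable": "all",
    "year": "2020",
    "month": "06",
    "day": "15",
    "format": "zip",
}
print(dataset, sorted(request))
```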

Chair:


  • André Obregon – ECMWF

Speakers:


  • André Obregon – ECMWF –
  • João Martins – ECMWF
  • Joaquin Munoz – ECMWF
  • Chunxue Yang – CNR-ISMAR
  • Ana Oliveira – +ATLANTIC CoLAB
  • Inês Girão – +ATLANTIC CoLAB
Add to Google Calendar

Sunday 22 June 17:00 - 18:20 (Room 1.34)

Hands-On: D.04.11 HANDS-ON TRAINING - JupyterGIS: Collaborative Geospatial Analysis in Jupyter

This tutorial introduces JupyterGIS, a web-based, collaborative GIS platform integrated with Jupyter notebooks. Participants will learn to edit geospatial data, visualize raster and vector layers, apply symbology, and use the Python API for spatial analysis. We will explore real-time collaboration features such as shared document editing, live cursor tracking, and geolocated comments. The session also demonstrates JupyterGIS integration with QGIS.

Learning Objectives:
- Understand the core features of JupyterGIS and how it facilitates collaborative GIS workflows.
- Learn how to load and analyze raster and vector datasets in JupyterGIS.
- Apply symbology and filtering tools to geospatial data.
- Use the Python API for automating spatial analysis.
- Explore real-time collaboration features, including shared editing and live discussions.

Takeaways:
- Hands-on experience with JupyterGIS for geospatial data analysis.
- Practical knowledge of collaborative GIS workflows.
- Understanding of how JupyterGIS integrates with Jupyter notebooks and QGIS.
- Awareness of future developments and opportunities to contribute to the JupyterGIS community.

Agenda & Timeline (90 minutes):
- Introduction to JupyterGIS (15 min)
- Hands-on session: Loading and visualizing geospatial data
- Applying symbology and filtering tools
- Using the Python API for geospatial analysis
- Real-time collaboration features in JupyterGIS
- Discussion and feedback: Use cases and feature requests

Requirements:
- A modern web browser (Google Chrome or Firefox recommended; Safari support is not guaranteed)
- Basic familiarity with GIS concepts (e.g., layers, symbology, spatial data formats)
- Some experience with Jupyter Notebooks and Python is beneficial but not required

Instructors:


  • Anne Fouilloux - Simula Research Laboratory
  • Tyler Erickson - VorGeo, Founder, Radiant Earth, Technical Fellow
Add to Google Calendar

Sunday 22 June 17:00 - 18:20 (Hall L3)

Tutorial: D.04.12 TUTORIAL - Cloud optimized way to explore, access, analyze and visualize Copernicus data sets

This tutorial will present how to leverage various APIs provided by the Copernicus Data Space Ecosystem (CDSE) to process Copernicus data in a cloud computing environment using JupyterLab notebooks. First, it will show how to efficiently filter data collections using the SpatioTemporal Asset Catalog (STAC) API and how to use the STAC API extensions to enable advanced functionality such as filtering, sorting and pagination. Secondly, it will present how to access parts of Earth Observation (EO) products using the STAC assets endpoint and byte-range requests issued to the CDSE S3 interface. In this respect, it will discuss in detail how to do this with the Geospatial Data Abstraction Library (GDAL) and how to properly set up GDAL configuration options to maximise the performance of data access via the GDAL vsis3 virtual file system. Further, it will present how to leverage the STAC API to build a data cube for spatio-temporal analysis. Finally, it will show how to analyse the data cube using an open-source foundation model coupled with freely accessible embeddings generated from Sentinel EO data, and how to visualise and publish results using the Web Map Service (WMS). The ultimate goal of this tutorial is to empower users with the novel EO analytical tools provided by the CDSE platform.
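As a minimal offline sketch of the first two steps, the snippet below composes a STAC /search request body (spatial, temporal and property filtering per the STAC query/sort extensions) and the GDAL configuration options commonly tuned before reading via /vsis3/. The endpoint, collection name and credentials are placeholders, not the exact CDSE values.

```python
import json
import os

# A STAC API /search request body: bbox, time range, cloud-cover filter
# and sorting, following the STAC query/sort extensions. The collection
# name is a placeholder.
search_body = {
    "collections": ["SENTINEL-2"],
    "bbox": [14.0, 50.0, 15.0, 51.0],
    "datetime": "2024-06-01T00:00:00Z/2024-06-30T23:59:59Z",
    "query": {"eo:cloud_cover": {"lt": 20}},
    "sortby": [{"field": "properties.datetime", "direction": "desc"}],
    "limit": 10,
}

# GDAL settings commonly tuned for efficient byte-range reads over S3;
# endpoint and credentials here are placeholders for the CDSE S3 interface.
gdal_config = {
    "AWS_S3_ENDPOINT": "s3.example-endpoint.eu",  # placeholder
    "AWS_ACCESS_KEY_ID": "<access-key>",
    "AWS_SECRET_ACCESS_KEY": "<secret-key>",
    "AWS_VIRTUAL_HOSTING": "FALSE",
    "GDAL_HTTP_MULTIPLEX": "YES",            # reuse multiplexed connections
    "CPL_VSIL_CURL_CHUNK_SIZE": "1048576",   # 1 MiB range requests
}
os.environ.update(gdal_config)

print(json.dumps(search_body)[:40])
```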

Speaker:


  • Jan Musial, CloudFerro

Add to Google Calendar

Sunday 22 June 17:00 - 18:20 (Room 0.14)

Hands-On: D.02.16 HANDS-ON TRAINING - AI Foundation Models for Multi-Temporal and Multi-Modal EO Applications

Participants will gain a solid understanding of the principles of AI Foundation Models (FMs) and engage in hands-on training to develop and apply these models specifically for remote sensing and geoscience. The training will cover key aspects of geospatial data analysis and address challenges unique to Earth Observation (EO), such as processing multi-source and multi-temporal satellite remote sensing datasets. Participants will develop the skills needed to effectively integrate FMs across various stages of geoscience research and practical applications.

The teaching material will be based on the Fostering Advancements in Foundation Models via Unsupervised and Self-supervised Learning for Downstream Tasks in Earth Observation (FAST-EO) project, funded by the European Space Agency (ESA) Phi-Lab. This will provide participants with access to state-of-the-art resources and cutting-edge research, enabling them to engage with the latest advancements in foundation models for EO.

Participants will explore computing solutions for training and deploying FMs, learn to apply fine-tuning techniques to adapt models for EO applications, and build pipelines to deploy models into production environments while evaluating them on new datasets.
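One of the adaptation techniques in this family, linear probing, can be illustrated in a few lines: a small classification head is trained on frozen foundation-model embeddings. The embeddings below are random stand-ins; in practice they would come from a pretrained EO encoder such as those studied in FAST-EO.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for frozen foundation-model embeddings of 200 image patches
# (dimension 32) with binary labels; in practice the embeddings come from
# a pretrained EO encoder whose weights stay fixed.
X = rng.normal(size=(200, 32))
w_true = rng.normal(size=32)
y = (X @ w_true > 0).astype(float)

# Train a logistic-regression head by plain gradient descent.
w = np.zeros(32)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted probabilities
    w -= 0.1 * X.T @ (p - y) / len(y)    # gradient step on log loss

accuracy = float(((X @ w > 0).astype(float) == y).mean())
print(accuracy)
```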

FAST-EO: https://www.fast-eo.eu/

Speakers:


  • Gabriele Cavallaro - Forschungszentrum Jülich and University of Iceland
  • Thorsteinn Elí Gíslason - Forschungszentrum Jülich
  • Thomas Brunschwiler - IBM Research Europe – Zurich
  • Jakub Nalepa - KP Labs
  • Agata Wijata - KP Labs
Add to Google Calendar

Sunday 22 June 17:00 - 18:20 (Foyer L3)

Hands-On: A.10.06 HANDS-ON TRAINING - InSAR Time Series Analysis: Exploring SARvey and InSAR Explorer for Engineering Applications

InSAR is a key tool in engineering, enabling precise and timely evaluations of ground deformation and structural stability. This workshop provides a practical introduction to two open-source tools for InSAR time series analysis and visualization: SARvey and InSAR Explorer.
SARvey is a software package designed to perform single-look InSAR time series analysis, focusing on detecting and monitoring deformation in engineering applications, including dam stability assessment, road and railway monitoring, and urban deformation mapping at the building scale. This workshop covers a comprehensive SARvey workflow, including installation, parameter configuration, and advanced processing techniques, making it an ideal starting point for users new to InSAR as well as for experts seeking enhanced analysis capabilities.
InSAR Explorer complements SARvey as a QGIS plugin that facilitates the seamless integration of InSAR-derived deformation data into a Geographic Information System. The plugin provides intuitive tools for mapping, overlaying auxiliary datasets, and comparing outcomes from different processing workflows. Its user-friendly interface allows users to quickly visualize time series of deformation, generate interactive plots, and perform detailed assessments of the results.
The workshop will utilize notebooks hosted in a Google Colab environment to smoothly guide participants through the complete workflow, from software installation to executing real-world case studies using Sentinel-1 data. Attendees will learn how to modify processing parameters, interpret the resulting deformation time series, and utilize InSAR Explorer in QGIS for data visualization and analysis. Whether you are taking your first steps in InSAR processing or are an experienced practitioner exploring new tools, this workshop offers a comprehensive and interactive learning experience to advance your skills in Earth observation and deformation monitoring.
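A core quantity behind any InSAR deformation time series is the conversion of unwrapped interferometric phase to line-of-sight displacement, d = -λφ/(4π) under the common sign convention. The sketch below applies it for Sentinel-1's C-band wavelength; it is a conceptual illustration, not part of the SARvey API.

```python
import numpy as np

WAVELENGTH_M = 0.0555  # Sentinel-1 C-band wavelength, about 5.55 cm

def phase_to_los_displacement(phase_rad):
    """Convert unwrapped interferometric phase (radians) to line-of-sight
    displacement in metres, using the common convention
    d = -lambda * phi / (4 * pi)."""
    return -WAVELENGTH_M * phase_rad / (4.0 * np.pi)

# One full phase cycle (2*pi) corresponds to half a wavelength of motion.
d = phase_to_los_displacement(np.array([0.0, 2 * np.pi]))
print(d * 1000.0)  # millimetres
```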

Speakers:


  • Andreas Piter - Institute of Photogrammetry and GeoInformation, Leibniz University Hannover
  • Mahmud Haghighi - Institute of Photogrammetry and GeoInformation, Leibniz University Hannover
Add to Google Calendar

Sunday 22 June 17:00 - 18:20 (Room 0.11/0.12)

Tutorial: D.03.15 TUTORIAL - FAIR and Open Science with EarthCODE Integrated Platforms

This hands-on tutorial introduces participants to FAIR (Findable, Accessible, Interoperable, Reusable) and Open Science principles through EarthCODE integrated platforms, using real-world Earth Observation datasets and workflows. We will begin with the fundamentals of FAIR, explore the EarthCODE catalog, and apply a checklist-based FAIRness assessment to datasets hosted on EarthCODE. Participants will evaluate current implementations, identify gaps, and discuss possible improvements. Building on this foundation, we will demonstrate how integrated platforms such as DeepESDL, OpenEO, and Euro Data Cube (Polar TEP, Pangeo & CoCalc) can be used to create reproducible EO workflows. Participants will create and publish open science experiments and products using these tools, applying FAIR principles throughout the process. The tutorial concludes with publishing results to the EarthCODE catalog, showcasing how EarthCODE facilitates FAIR-aligned, cloud-based EO research. By the end of the session, attendees will have practical experience in assessing and improving FAIRness, developing open workflows, and using EarthCODE platforms to enable reproducible, FAIR and Open Science. Please register your interest for this tutorial by filling in this form: https://forms.office.com/e/yKPJpKV0KX before the session.
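A checklist-based FAIRness assessment can be pictured as a simple scorer over yes/no criteria grouped under the four principles; the criteria below are a generic illustration, not the actual EarthCODE checklist.

```python
# A toy FAIRness checklist: each criterion is a yes/no check grouped under
# one of the four FAIR principles. Criteria are illustrative only, not the
# EarthCODE checklist itself.
checklist = {
    "Findable": {"has_persistent_id": True, "in_searchable_catalog": True},
    "Accessible": {"open_access_protocol": True, "metadata_without_login": False},
    "Interoperable": {"standard_format": True, "standard_vocabulary": False},
    "Reusable": {"clear_license": True, "provenance_recorded": True},
}

# Score each principle as the fraction of criteria met, then average.
scores = {
    principle: sum(checks.values()) / len(checks)
    for principle, checks in checklist.items()
}
overall = sum(scores.values()) / len(scores)
print(scores, round(overall, 3))
```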

Speakers:


  • Samardzhiev Deyan - Lampata
  • Anne Fouilloux - Simula Labs
  • Dobrowolska Ewelina Agnieszka - Serco
  • Stephan Meissl - EOX IT Services GmbH
  • Gunnar Brandt - Brockmann Consult
  • Bram Janssen - Vito
Add to Google Calendar

Sunday 22 June 17:00 - 18:20 (Room 1.15/1.16)

Hands-On: D.04.10 HANDS-ON TRAINING - Working with Sentinel Hub API-s in Copernicus Data Space Ecosystem Jupyter Lab

Copernicus Data Space Ecosystem (CDSE) is the official data hub and cloud processing platform for Sentinel data. CDSE integrates instant data availability with API-s (Application Programming Interfaces), free virtual machine capacity (within a quota) and an open codebase. The CDSE Jupyter Lab connects all three of these, providing an open space to learn, experiment and upscale Sentinel data processing. The Sentinel Hub API-s enable advanced raster calculations and even raster-vector integration to generate zonal statistics, all within the API request, running on the server side. Therefore, CDSE makes it significantly easier to get started and learn to code Earth Observation data analysis. This training will show how to access, analyze, visualize and download satellite imagery in the CDSE Jupyter Lab using the Sentinel Hub API family. We will start with an introduction suitable for newcomers to coding. We will explore the Catalog, Process and Statistical API-s, and learn how to create scalable end-to-end processing for practical use cases. We will use openly available tutorial notebooks that demonstrate how you can perform time series analysis and calculate long-term statistics without downloading a single satellite image. After the course, participants will be able to create their own data analysis pipelines, making use of the vast repository of open algorithms available and the capacity of CDSE. Participants are expected to bring their own laptop, but only a web browser is needed; no other software installation is necessary.
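The server-side raster calculations mentioned above are expressed as "evalscripts": small JavaScript functions embedded in the API request. The sketch below assembles a Process-API-style request body around a minimal NDVI evalscript; the bbox and dates are arbitrary examples, and the exact request schema should be verified against the CDSE Sentinel Hub documentation.

```python
# A minimal NDVI evalscript: JavaScript executed server-side per pixel.
evalscript = """
//VERSION=3
function setup() {
  return { input: ["B04", "B08"], output: { bands: 1, sampleType: "FLOAT32" } };
}
function evaluatePixel(sample) {
  return [(sample.B08 - sample.B04) / (sample.B08 + sample.B04)];
}
"""

# Process-API-style request body (bbox and dates are arbitrary examples;
# check the exact schema against the CDSE Sentinel Hub documentation).
request_body = {
    "input": {
        "bounds": {"bbox": [13.35, 52.45, 13.45, 52.55]},
        "data": [{
            "type": "sentinel-2-l2a",
            "dataFilter": {"timeRange": {
                "from": "2024-06-01T00:00:00Z",
                "to": "2024-06-30T23:59:59Z",
            }},
        }],
    },
    "output": {"width": 512, "height": 512},
    "evalscript": evalscript,
}
print(sorted(request_body))
```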

Instructors:


  • András Zlinszky - Community Evangelist, Sinergise Solutions
  • William Ray - Remote Sensing Engineer, Sinergise Solutions
Add to Google Calendar

Tuesday 24 June

1140 events

Tuesday 24 June 08:30 - 10:00 (Room 1.31/1.32)

Session: F.02.12 Achieving EO uptake in Latin America and Caribbean through partnerships

ESA is partnering with the Directorate General for International Partnerships (DG-INTPA) under the Global Gateway initiative to address disaster risks through the use of Earth Observation (EO) in the LAC region. The overall objective is to transfer EO skills and expertise to Latin America and the Caribbean (LAC) through the joint development of a regional CopernicusLAC Centre. The initiative initially aims to enhance the resilience of the Latin America and Caribbean region by making use of Copernicus data and services, with a primary focus on disaster risk and recovery (DRR) activities.

The CopernicusLAC Centre is currently co-developing a range of EO services for DRR with 14 mandated organizations in the region (addressing floods, drought, wildfires, landslides, subsidence and exposure), alongside knowledge transfer at continental level through trainings, hackathons and private sector engagement. These activities will help shape the LAC ecosystem that will be developed around the Centre. The objective is to demonstrate how the CopernicusLAC Centre can fulfill the needs of regional and international organisations with a mandate in DRR (UNDRR, CEPREDENAC, CDEMA) as well as those of national DRR entities in LAC. Therefore, this session will showcase how CopernicusLAC is:
• Supporting a regional ecosystem of EO stakeholders, from government agencies to researchers and civil society, through targeted engagement and training;
• Translating local challenges into operational services, including pilot applications in areas such as disaster risk management, environmental monitoring, and urban resilience;
• Building a sustainable bridge between European EO capabilities and LAC priorities, underpinned by co-designed platforms and strategic policy alignment.

Speakers:


Intro


  • Alex Chunet - ESA, Earth Observation Applications Engineer

CopernicusLAC Panama Center


  • Claudia Herrera - CopernicusLAC Panama Centre, Liaison Officer

CopernicusLAC Stakeholder engagement and Knowledge activities


  • Nicolás Ayala Arboleda - Novaspace, Consultant
  • Jesús Carrillo Vázquez - Novaspace, Consultant

CopernicusLAC Service Development activities:


  • Alberto Lorenzo - Indra, Project Manager
  • Caterina Peris - Indra, Senior Engineer
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall L3)

Session: A.08.01 Advances in Swath Altimetry - PART 1

The NASA and CNES Surface Water and Ocean Topography (SWOT) mission, launched in December 2022, is the first in-flight experience of a swath altimeter in orbit. The SWOT mission has revealed the capability of swath altimeters to measure ocean and inland water topography in an unprecedented manner. The onboard Ka-band interferometer (KaRIn) observes wide-swath sea surface height (SSH) with sub-centimetre error. It is already unveiling the small mesoscale ocean circulation that is missing from current satellite altimetry. SWOT has already carried out a calibration and validation (Cal/Val) campaign including ground truth and airborne measurements.
ESA’s Sentinel-3 Next Generation Topography (S3NGT) mission is being designed as a pair of two large spacecraft carrying nadir-looking synthetic aperture radar (SAR) altimeters and across-track interferometers, enabling a total swath of 120 km, in addition to a three-beam radiometer for wet tropospheric correction across the swath and a highly performant POD and AOCS suite.
With a tentative launch date of 2032, the S3NGT mission will provide enhanced continuity to the altimetry component of the current Sentinel-3 constellation, with open ocean, coastal zones, hydrology, sea ice and land ice, all as primary objectives of the mission.
This session is dedicated to the presentation of advances in swath altimetry - including airborne campaigns- and the application of swath altimetry to the primary objectives of the mission, i.e. open ocean and coastal processes observation, hydrology, sea ice and land ice. We also invite submissions for investigations that extend beyond these primary objectives, such as the analysis of ocean wave spectra, internal waves, geostrophic currents, and air-sea interaction phenomena within swath altimeter data.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall L3)

Presentation: Overview of SWOT ocean surface topography performance

Authors: Emeline Cadier, Pierre Prandi, Dr Francesco Nencioli, Benjamin Flamant, Etienne Jussiau, Matthias Raynal, Albert Chen, Alexander Fore, Curtis Chen
Affiliations: CLS, CNES, Jet Propulsion Laboratory, California Institute of Technology
The launch of the Surface Water and Ocean Topography (SWOT) mission in December 2022 represented a major breakthrough in satellite altimetry. Its Ka-band Radar Interferometer (KaRIn) provides for the first time 2-dimensional images of ocean surface topography, at kilometre resolution and over a 120 km wide swath. These novelties represent a major paradigm shift also for Cal/Val activities compared to conventional 1-dimensional nadir altimeters. As part of the mission performance activities, the quality and performance of the level 2 ocean surface topography have been assessed and monitored throughout the first two years of the mission. The analysis is performed using 2 km resolution products collected during both the Cal/Val (1-day repeat orbit) and the science (21-day repeat orbit) phases. Observations from the SWOT nadir altimeter and other current altimeters, such as Sentinel-3, during the same period are also used as terms of comparison for our analysis. All mission performance metrics highlight the excellent performance of KaRIn measurements. Here we will provide a synthesis of this assessment. First, a brief summary of the data availability and validity throughout the mission lifetime will be given, listing the main events impacting KaRIn data. The end-to-end performance metrics on the main topographic mission variables (e.g. sea level, significant wave height, sigma0) will then be presented. These include residual estimates at crossovers as well as analysis of the wavenumber spectrum. The assessment of the cross-calibration algorithm at level 2 will also be presented. In the process of continuously improving the data quality, the processing of KaRIn ocean topography measurements has undergone several updates over the last year. In October 2024, the product generation executable (PGE) was upgraded from the PIC version to PIC2. It will be further upgraded to PID in January 2025.
This version will be used for the next full mission reprocessing, planned for the first half of 2025. The main evolutions introduced with these upgrades, such as state of the art geophysical correction models and the novel KaRIn-dedicated wave algorithm, will be presented along with their impact on the data performances. Finally, the remaining known limitations of interest to science users will be discussed.
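The wavenumber-spectrum analysis mentioned above amounts to a power spectral density of along-track sea surface height versus wavenumber. A bare-bones version on a synthetic profile sampled every 2 km (the KaRIn ocean product posting) might look like this; the signal itself is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic SSH profile sampled every 2 km, containing a 100 km wavelength
# signal plus small noise (values are illustrative only).
dx_km = 2.0
n = 512
x = np.arange(n) * dx_km
ssh = 0.05 * np.sin(2 * np.pi * x / 100.0) + 0.002 * rng.normal(size=n)

# One-sided power spectrum versus wavenumber (cycles per km).
ssh = ssh - ssh.mean()
spec = np.abs(np.fft.rfft(ssh)) ** 2 * dx_km / n
k = np.fft.rfftfreq(n, d=dx_km)

# The 100 km wavelength signal should dominate the spectrum.
k_peak = k[1:][np.argmax(spec[1:])]
print(1.0 / k_peak)  # dominant wavelength in km
```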
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall L3)

Presentation: Performances of the Swath Altimeter SAOOH on board the Sentinel 3 Next Generation Topography Mission

Authors: Alexandre HOUPERT, Franck Demeestere, Dr Laurent Phalippou, Marc Deschaux-Beaume, Laurent Rys, Laurent Rey, Pierre Dubois, Laiba Amarouche, Pierre Thibaut, Pierrik Vuilleumier, Alejandro Egido
Affiliations: Thales Alenia Space, CLS, ESA/ESTEC
The Sentinel-3 Next Generation Topography mission addresses the need for a timely extension of the current Sentinel-3 capability in terms of stability and continuity, while improving performance and increasing the quantity and quality of geophysical products. Sentinel-3 Next Generation belongs to the Copernicus “enhanced continuity” missions of the European Union. The baseline concept relies on Thales Alenia Space state-of-the-art technology with an altimetry payload composed of Poseidon-5 (POS5), a SAR nadir altimeter for continuity, and SAOOH, the Swath Altimeter for Operational Oceanography and Hydrology, for enhanced sampling, coverage, revisit, and enhanced topography products. In order to achieve the specified five-day revisit time, two satellites will operate simultaneously on a dawn-dusk sun-synchronous orbit, with the same ground tracks as Sentinel-3 First Generation. SAOOH is currently being developed, under ESA contract, by Thales Alenia Space and benefits from the heritage of the CryoSat (SIRAL), Sentinel-3 (SRAL), Sentinel-6 (POS4), SWOT (KaRIn) and CRISTAL (IRIS) missions. SAOOH improves the spatial/temporal sampling of the ocean and inland waters with respect to nadir altimeters. The products are available over a typical swath of 120 km centred on nadir. Swath altimeters also make it possible to observe continental water extent and to measure surface elevation and river slopes. New observation capabilities for significant wave height (SWH), wave spectrum, sea ice and land ice are anticipated, as shown by preliminary SWOT data. SAOOH is a Ka-band multibeam swath altimeter using one transmit antenna and two receive antennas with a 3 m interferometric baseline. The SAOOH antenna design results in a high signal-to-noise ratio (SNR) to comply with the random error requirement while keeping a relatively short interferometric baseline. The thermally regulated low-noise amplifier front-ends (LNA), integrated very close to the antenna feeds, further maximise the SNR.
It is then possible to accommodate SAOOH without a deployment mechanism, leading to a simple mechanical design and excellent antenna stability. This paper addresses the SAOOH design with a focus on achievable performances at radar level and the related end-to-end geophysical product accuracies (L2 level). The measurement principle, including the external calibration, is addressed. Random errors and systematic errors are also presented and discussed with respect to the mission requirements.

Tuesday 24 June 08:30 - 10:00 (Hall L3)

Presentation: SWOT validation and analyses during the fast-sampling phase in the western Mediterranean Sea with high-resolution observations

Authors: Laura Gómez Navarro, Elisabet Verger-Miralles, Daniel R. Tarry, Barbara Barcelo-Llull, Nikolaos Zarokanellos, Lara Diaz-Barroso, Guiomar Lopez, Irene Lizaran, Emma Reyes, Baptiste Mourre, Ananda Pascual
Affiliations: IMEDEA (UIB-CSIC), Applied Physics Laboratory and UW, SOCIB
The FaSt-SWOT field campaigns sampled the western SWOT pass of the western Mediterranean cross-over region during the fast-sampling phase. Two campaigns took place on 25-28 April and 7-10 May 2023, with the aim of collecting multi-platform in situ observations of meso- and submesoscale ocean structures in the area covered by the SWOT satellite during its initial fast-sampling phase. The data collected during the campaigns came both from ship-based instruments (CTD, Moving Vessel Profiler, thermosalinograph, ADCP) and from autonomous platforms (surface drifters and gliders). Complementary information was also retrieved from satellite observations (SST, ocean colour and nadir altimetry products). The sampling focused on a small (~20 km in diameter) anticyclonic eddy detected under the SWOT swath thanks to satellite imagery and drifter trajectories. Several cross-sections by the ship-based instruments, namely the Moving Vessel Profiler, and by underwater gliders provided insights into the structure of the temperature and salinity fields and the associated signals in chlorophyll and dissolved oxygen. This allowed the in situ measurements to be compared with the observation of this small-scale eddy by SWOT. Two gliders were programmed to perform back-and-forth sections over a three-week period with a one-day delay between them. This gives us the opportunity to evaluate the temporal variability of the ocean fields at the same frequency as SWOT’s fast-sampling phase repeat cycle. In total, 45 surface drifters were deployed during the two phases to evaluate in situ surface currents and their associated convergence and divergence in the vicinity of the eddy. The continuity of the drifter dataset after the campaigns allows us to further evaluate the SWOT data and the surface dynamics by comparing surface drifters (CARTHE and HEREON) with the 15 m-drogued drifters (SVP-B).
Furthermore, this dataset also allows us to analyse the velocities derived from SWOT and evaluate the limits of the geostrophic assumption in our region of study. Lastly, to evaluate the improvements brought by SWOT, we compare the fields with the DUACS dataset. Our observations provide new insights into SWOT's ability to detect fine-scale structures, including previously unobserved features and improved characterization of structures identified in earlier altimetric products. While conventional altimetry could already partly detect the sea level signature of the eddy observed during the campaigns, initial SWOT measurements indicate an improved detection capability by this new satellite. In addition, we can take advantage of regional, high-resolution numerical simulations which reproduce a small anticyclonic eddy with characteristics similar to those of the observed eddy. These simulations are used to provide a more general understanding of the situation and help us to evaluate SWOT observations before/after the field campaigns. This study presents an overview of the FaSt-SWOT dataset, offering a multi-platform perspective for validating and comparing SWOT observations with in situ and remote sensing data. The data obtained during the FaSt-SWOT project, in addition to supporting SWOT’s Cal/Val activities, shed light on the characterization and understanding of fine-scale dynamics not observed before in this region characterized by a small Rossby radius of deformation. We also highlight the impacts of the SWOT swath corrections, which can affect the proper characterization of small mesoscale structures, and how wide-swath altimetry is also opening the door to new challenges.
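The geostrophic velocities discussed above follow from the geostrophic balance, u = -(g/f) ∂η/∂y, v = (g/f) ∂η/∂x. A minimal numpy sketch on a synthetic SSH anomaly (all values illustrative, not campaign data):

```python
import numpy as np

g = 9.81          # gravity, m/s^2
lat = 39.0        # approximate latitude of the study region, deg
f = 2 * 7.2921e-5 * np.sin(np.radians(lat))  # Coriolis parameter, 1/s

# Synthetic SSH field (m): a Gaussian anticyclonic eddy ~20 km across
dx = dy = 2e3                       # grid spacing, m
x = np.arange(-30e3, 30e3, dx)
y = np.arange(-30e3, 30e3, dy)
X, Y = np.meshgrid(x, y)
eta = 0.05 * np.exp(-(X**2 + Y**2) / (2 * (10e3) ** 2))  # +5 cm SSH anomaly

# Geostrophic balance: u = -(g/f) d(eta)/dy, v = (g/f) d(eta)/dx
deta_dy, deta_dx = np.gradient(eta, dy, dx)   # axis 0 is y, axis 1 is x
u = -(g / f) * deta_dy
v = (g / f) * deta_dx

speed = np.hypot(u, v)
print(f"max geostrophic speed: {speed.max():.3f} m/s")
```

For a 5 cm anomaly over a ~10 km radius at this latitude, the balance yields surface speeds of a few tens of cm/s, consistent with a small mesoscale eddy.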

Tuesday 24 June 08:30 - 10:00 (Hall L3)

Presentation: The Sentinel-3 Next Generation Topography Copernicus Altimetry Mission: Enhancing Continuity, Performance and Observational Capabilities

Authors: Dr. Alejandro Egido
Affiliations: European Space Agency
The European Space Agency (ESA), under EU Council and Parliament directives, is mandated to define the Copernicus Space Component (CSC) architecture based on user requirements coordinated by the Commission. In collaboration with the EC, EUMETSAT, and Member States, ESA has identified key elements of the CSC Long-Term Scenario (LTS) (ESA, 2020). A pivotal element of the CSC-LTS is the Next Generation Topography Constellation, comprising the Sentinel-3 Next Generation Topography (S3NG-T) mission. The S3NG-T mission is designed to ensure enhanced continuity of the Copernicus Sentinel-3 nadir-altimeter measurements in the 2030s-2050s timeframe. Recognizing the current constellation's limitations in temporal and spatial coverage, S3NG-T aims to significantly upgrade global-scale altimeter sampling. Additionally, hydrology has been elevated to a primary mission objective, introducing a new set of stringent requirements to the mission. To achieve the sampling and revisit time requirements, the S3NG-T mission is being designed as a constellation of two large spacecraft, embarking a nadir-looking synthetic aperture radar altimeter (POS-5) for baseline continuity and an across-track interferometric swath altimeter (SAOOH). The two spacecraft will fly in a sun-synchronous orbit (6 pm local time of ascending node, LTAN) at a mean orbital height of 815 km, with an orbital phase difference of 140 deg, achieving an interleaved ground track between both satellites and matching the Sentinel-3A/3B ground tracks. Featuring two continuous swaths of 50 km on each side of the track, the S3NG-T mission is expected to provide almost complete coverage of the ocean every 5 days. There are two main operational modes for the SAOOH instrument: a low-resolution (LR) mode, meant for open ocean and land ice; and a high-resolution (HR) mode, meant for coastal zones, sea ice, inland water and land ice margins.
For across-track swath interferometers, errors in baseline length and attitude knowledge translate into systematic SSH errors. As an example, a roll error translates directly into an uncertainty in the angle of arrival and causes a linear SSH error across the track. A roll knowledge error of 1 μrad corresponds to ~6 cm of SSH error at the outer edge of the swath, and to ~3.8 cm rms SSH error within the swath. Ensuring an absolute knowledge error of the baseline vector roll better than 1 μrad is challenging and cannot rely on the AOCS and instrument performance alone; hence, in-orbit cross-calibration should be applied to meet the performance requirements. This presentation covers the essential elements of the mission, design considerations, and overall performance of the S3NG-T mission. ESA, 2020, “The next phase of Copernicus”, Updated Copernicus Space Component (CSC) Long Term Scenario, ESA/PB-EO(2020)41, Paris, 9 September 2020.
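The quoted error figures can be reproduced from the simple geometry of a roll-induced tilt, assuming the swath spans roughly 10-60 km in cross-track distance (an assumed geometry for illustration):

```python
import numpy as np

roll_err = 1e-6                      # roll knowledge error, rad
# Assumed swath geometry: a 50 km swath starting ~10 km off nadir
x = np.linspace(10e3, 60e3, 1000)    # cross-track distance, m

# A roll error tilts the measured surface linearly across track:
# delta_h(x) = x * delta_roll
dh = x * roll_err

print(f"SSH error at outer edge: {dh.max() * 100:.1f} cm")               # ~6 cm
print(f"rms SSH error within swath: {np.sqrt(np.mean(dh**2)) * 100:.1f} cm")  # ~3.8 cm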

Tuesday 24 June 08:30 - 10:00 (Hall L3)

Presentation: SWOT mission overview and status

Authors: François Boy
Affiliations:

Tuesday 24 June 08:30 - 10:00 (Hall L3)

Presentation: Performance Assessment of the Copernicus Sentinel-3NG Topography mission

Authors: Pierre THIBAUT, Pierre DUBOIS, Laiba Amarouche, Franck Demeestere, Laurent Phalippou, Alejandro Egido, Pierrik Vuilleumier
Affiliations: Collecte Localisation Satellites, Thalès Alenia Space, ESA
The Sentinel-3 Next Generation Topography (S3NG-T) mission addresses the need for a timely extension of the current Sentinel-3 capability in terms of stability and continuity, while increasing the quantity and quality of products and services. It will provide fundamental measurement data in an operational context to enable the Copernicus Services to achieve their aims and objectives. In the frame of the S3NG-T Phase A/B1 study, CLS was in charge of assessing the performance of the mission over different targets on Earth (ocean, continental water, polar regions). This assessment was made possible by the development of several complementary simulation tools taking into account the instrumental design and the different processing steps applied to the measurements (on-board and on-ground), and accounting for the geophysical properties of the targets (different sea state conditions, inland water and sea ice geometries, and backscattering contrasts). In particular, large-scale ocean simulations including a cross-over calibration method have allowed us to assess the impact of various platform attitude characteristics on the final performance. At the end of Phase A, ESA decided to embark swath altimetry technology on board the S3NG-T mission, which is designed as a constellation of two satellites, each of them hosting a swath altimeter and a nadir altimeter. This choice was confirmed at the Mission Gate Review held in spring 2024, bolstered by the excellent results of the CNES/NASA SWOT altimetry mission. This presentation will address the different elements contributing to the final Sea Surface Height performance (water surface height in hydrology) of SAOOH, including altimeter random error on range, geophysical correction errors, and interferometer pointing errors. All the results obtained in this study confirm that the S3NG-T mission performance requirements will be fully met.

Tuesday 24 June 08:30 - 10:00 (Hall E2)

Session: A.06.01 Geospace dynamics: modelling, coupling and Space Weather - PART 1

This session aims to capture novel scientific research outcomes in the Geospace dynamics field, encompassing atmosphere, ionosphere, thermosphere, and magnetosphere - modelling and coupling. A significant contribution is expected from Space Weather science with the usage of, but not limited to, data of ESA Earth Observation missions, such as Swarm, in particular FAST data, and SMOS. The objective of the session is to collect recent findings that improve the knowledge and understanding of the dynamics and coupling mechanisms of the middle and upper atmosphere and their link with the outer regions that are mainly driven by the Sun and the solar cycle, as well as a focus on data validation and on Space Weather events. We also solicit results from simulations, ground-based observatories and other heliophysics missions, in particular those demonstrating synergetic combinations of these elements.

Tuesday 24 June 08:30 - 10:00 (Hall E2)

Presentation: The geomagnetic and ionospheric effects of the May 2024 Mother’s Day superstorm over the Mediterranean sector

Authors: Luca Spogli, Tommaso Alberti, Paolo Bagiacchi, Lili Cafarella, Claudio Cesaroni, Gianfranco Cianchini, Igino Coco, Domenico Di Mauro, Rebecca Ghidoni, Fabio Giannattasio, Alessandro Ippolito, Carlo Marcocci, Michael Pezzopane, Emanuele Pica, Alessio Pignalberi, Loredana Perrone, Vincenzo Romano, Dario Sabbagh, Carlo Scotto, Sabina Spadoni, Roberta Tozzi, Massimo Viola
Affiliations: Istituto Nazionale Di Geofisica e Vulcanologia, Alma Mater Studiorum - Università degli studi di Bologna
On 8 May 2024, the solar active region AR13664 started releasing a series of intense solar flares. The X-class flares released between 9 and 11 May 2024 gave rise to a chain of fast Coronal Mass Ejections (CMEs) that proved to be geoeffective. The Storm Sudden Commencement (SSC) of the resulting geomagnetic storm was registered on 10 May 2024, and it is, to date, the strongest event since November 2003. The May 2024 storm, hereafter named the Mother's Day storm, peaked with a Dst of -412 nT and stands out as a “standard candle” storm affecting modern-era technologies prone to Space Weather threats. Moreover, the recovery phase exhibited almost no substorm signatures, making the Mother's Day storm a “perfect storm” example. In this paper we concentrate on the Space Weather effects over the Mediterranean sector, with a focus on Italy. The Istituto Nazionale di Geofisica e Vulcanologia manages a dense network of GNSS receivers (including scintillation receivers), ionosondes and magnetometers in the Mediterranean area, which enabled a detailed characterization of the modifications induced by the storm. Concerning the geomagnetic field, observatories located in Italy recorded an SSC with a rise time of only 3 minutes and a maximum variation of around 600 nT. The most notable ionospheric effect following the arrival of the disturbance was a significant decrease in plasma density on 11 May, resulting in a pronounced negative ionospheric storm registered in both foF2 and Total Electron Content. Another negative effect was recorded on 13 May, while no signatures of positive storm phases were reported. These negative ionospheric phases are ascribed to neutral composition changes and, specifically, to a decrease of the [O]/[N2] ratio.
The IRI UP IONORING data-assimilation procedure, recently developed to nowcast the critical F2-layer frequency (foF2) over Italy, proved to be quite reliable during this extreme event, showing only an overestimation during the main phase of the storm, when the electron density decreased and the height of the F region increased. Relevant outcomes of the work relate to the Rate of TEC change Index (ROTI), which showed unusually high, spatially distributed values on the nights of 10 and 11 May. The ROTI enhancements on 10 May might be linked to Stable Auroral Red (SAR) arcs and an equatorward displacement of the ionospheric trough. The ROTI enhancements on 11 May, instead, might be triggered by the joint action of low-latitude plasma pushed poleward by the pre-reversal enhancement (PRE) in the post-sunset hours and wave-like perturbations propagating from the north. Furthermore, the storm drew immediate public attention to Space Weather effects, including mid-latitude visible phenomena such as SAR arcs. This paper outlines the report of the Space Weather Monitoring Group (SWMG) of the INGV Environment Department and its effort to disseminate information about this exceptional event.
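ROTI, used above, is the standard deviation of the rate of TEC change (ROT) over a short window. A minimal sketch on a synthetic TEC series, assuming 30 s GNSS sampling and 5-minute ROTI windows (common choices, not necessarily those of this study):

```python
import numpy as np

def roti(tec, dt=30.0, window=10):
    """ROTI: std of the rate of TEC change (ROT) over a sliding window.

    tec    : TEC time series in TECU
    dt     : sampling interval in seconds (30 s assumed here)
    window : number of ROT samples per ROTI value (10 -> 5 minutes)
    """
    rot = np.diff(tec) / (dt / 60.0)          # ROT in TECU/min
    n = len(rot) // window
    rot = rot[: n * window].reshape(n, window)
    return rot.std(axis=1)

# Synthetic example: quiet TEC plus a bursty, irregular segment
rng = np.random.default_rng(0)
t = np.arange(0, 3600, 30.0)
tec = 20 + 0.001 * t + rng.normal(0, 0.02, t.size)   # quiet background
tec[60:90] += rng.normal(0, 0.5, 30)                 # storm-time irregularities

r = roti(tec)
print(r.round(2))   # elevated ROTI values flag the disturbed interval
```

Spatially distributed ROTI maps, as used in the paper, apply the same statistic per receiver-satellite link before gridding.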

Tuesday 24 June 08:30 - 10:00 (Hall E2)

Presentation: The SMOS L-Band Solar Radio Burst Database

Authors: Federica Guarnaccia, Roberta Forte, Raffaele Crapolicchio
Affiliations: Serco Italia SpA, ESA-ESRIN
The SMOS (Soil Moisture and Ocean Salinity) mission was launched in 2009 and has been collecting full polarimetric brightness temperature images at 1.413 GHz for over 15 years. Most acquisitions include information about the Sun signal: the latter has thus been removed and used to evaluate the solar flux at high time resolution, with one measurement cycle every 5 seconds. This resolution is compatible with the detection of solar radio bursts (SRBs), enabling the development of a dedicated detection algorithm. The full SMOS solar radio burst database has been compared to the Radio Solar Telescope Network (RSTN) L-Band channel, revealing similar yet complementary performance. The detection rate of the SMOS mission is mainly hindered by two factors: 1) Radio Frequency Interference (RFI) in the L-Band, which deteriorates the quality of the collected signal, preventing the detection of smaller events; 2) the Sun signal transitioning from the antenna front lobe to the back lobe, greatly limiting the signal-to-noise ratio. The extension of the detection algorithm to the antenna’s back lobe is discussed; depending on the event’s intensity and the Sun elevation angle, solar radio burst detection is still possible, but with a reduced success rate. Radio burst events missed by the RSTN L-Band channel and detected by SMOS are analyzed and correlated with the full RSTN database, including both higher and lower frequency channels. The SMOS database has also been compared to the Geostationary Operational Environmental Satellite (GOES) X-ray flare list. Thanks to the Microwave Imaging Radiometer with Aperture Synthesis (MIRAS) payload, the SMOS radio burst database includes a unique additional layer of information: the Degree of Circular Polarization (DoCP). Prominent polarized radio burst events in the SMOS database have been identified: the DoCP may indicate the source radiation mechanisms and propagation processes that originate the events.
The polarized radio burst subset has been cross-correlated with reported contemporaneous Global Navigation Satellite System (GNSS) signal degradation: only right-hand circularly polarized events are expected to have a significant impact on GNSS. Overall, despite challenges such as RFI and variable signal-to-noise ratios across its orbit, the SMOS satellite provides a valuable asset for solar radio burst detection in the L-Band, both as a stand-alone system and in synergy with state-of-the-art databases.
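As a rough illustration of threshold-based burst detection (not the operational SMOS algorithm), a running-median baseline with a robust-sigma threshold can flag burst-like excursions in a solar flux time series:

```python
import numpy as np

def detect_bursts(flux, k=5.0, baseline_win=121):
    """Flag samples exceeding a running-median baseline by k robust sigmas.

    flux : solar flux time series (one sample per 5 s snapshot)
    Illustrative threshold detector, not the operational SMOS algorithm.
    """
    pad = baseline_win // 2
    padded = np.pad(flux, pad, mode="edge")
    baseline = np.array([np.median(padded[i:i + baseline_win])
                         for i in range(len(flux))])
    resid = flux - baseline
    mad = np.median(np.abs(resid - np.median(resid)))
    sigma = 1.4826 * mad + 1e-12      # robust estimate of the noise std
    return resid > k * sigma

rng = np.random.default_rng(1)
flux = 80 + rng.normal(0, 2, 2000)    # quiet Sun, arbitrary flux units
flux[700:720] += 60                   # injected burst
mask = detect_bursts(flux)
print(mask.sum(), "samples flagged")
```

The median baseline makes the detector insensitive to the burst itself, while the MAD-based sigma keeps the threshold stable when RFI or bursts contaminate the residuals.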

Tuesday 24 June 08:30 - 10:00 (Hall E2)

Presentation: Introducing QUID-REGIS: Contribution to the understanding of unexpected variability in the ionosphere during solar-quiet periods by atmospheric dynamics from below

Authors: Lisa Küchelbacher, Jaroslav Chum, Patrick Hannawald, Petra Koucka Knizova, Jan Kubancak, Simon Makovjak, Carsten Schmidt, Vladimir Truhlik, Jaroslav Urbar, Sabine Wüst, Michael Bittner
Affiliations: German Aerospace Center, DLR-DFD, Institute of Atmospheric Physics, CAS-IAP, Slovak Academy of Sciences, Institute of Experimental Physics, IEP-SAS
The day-to-day variability of the quiet-time ionosphere is surprisingly high, even during periods of negligible solar forcing. Swarm measurements have allowed the characterization of upper atmospheric and ionospheric conditions and dynamics for more than 10 years now. The analysis of Swarm data also showed that the ionosphere is sometimes disturbed even during solar-quiet periods: the electron density and electric field, for instance, can show significant variability that currently remains unexplained. Whenever there is unexpected variability in Swarm data, lower atmospheric dynamics might serve as a source region of disturbances causing these variabilities through vertical coupling processes. With QUID-REGIS we aim for a better quantification of the role of upper mesosphere-lower thermosphere (UMLT) dynamics in the occurrence of solar-quiet ionospheric disturbances, along with a better representation of baseline ionospheric conditions. The use of Swarm data measured in the topside ionosphere is supported by an extensive set of ground-based measurements of both the upper mesosphere/lower thermosphere and the ionospheric D-, E- and F-regions. These measurements comprise airglow observations representative of the neutral atmosphere in the UMLT (80-100 km), magnetic field (and other) observations representative of the ionosphere (85-300 km), as well as airglow observations from 200-300 km altitude. We thereby contribute to characterizing the atmospheric state during these quiet periods. Thus, QUID-REGIS contributes to the understanding of disturbances in the upper atmosphere and clarifies whether these are, at least in part, a result of neutral atmospheric dynamics from the lower atmosphere at mid-latitudes. The obtained findings are used to modify inputs to, and assess possible improvements of, the International Reference Ionosphere model. We give an overview of the main project goals, challenges and first results.
This comprises three main steps: first, we identify solar-quiet periods by screening various solar activity parameters. Second, we look for unexpectedly high variability in Swarm measurements. Third, from these we select case studies for a detailed analysis of the dynamic state of the atmosphere below.
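The first step can be sketched as a simple screening of daily activity indices; the index values and thresholds below are illustrative assumptions, not the project's actual criteria:

```python
import numpy as np

# Illustrative daily activity indices (assumed values)
f107 = np.array([68, 70, 72, 150, 75, 69, 71, 140, 73, 70])      # solar flux, sfu
kp_max = np.array([1.3, 2.0, 1.7, 5.3, 2.3, 1.0, 1.7, 4.7, 2.0, 1.3])  # daily max Kp

# Step 1: solar-quiet days = low solar flux AND low geomagnetic activity
quiet = (f107 < 90) & (kp_max <= 3.0)
print(np.nonzero(quiet)[0])   # candidate days passed on to step 2
```

Days 3 and 7 (high F10.7 and Kp) are rejected; the remaining quiet days would then be scanned for unexpectedly high Swarm variability in step 2.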

Tuesday 24 June 08:30 - 10:00 (Hall E2)

Presentation: Augmenting thermosphere mass density and crosswind observations derived from accelerometer and GNSS tracking data with uncertainty information

Authors: Natalia Hladczuk, Sabin Anton, Dr.ir. Jose van den IJssel, Dr.ir. Christian Siemes, Prof.dr.ir. Pieter Visser
Affiliations: Delft University of Technology
Accurate knowledge of thermosphere mass density and crosswind is essential for understanding the coupling of Earth's thermosphere and ionosphere and for constructing empirical models used in space operations. Accelerometer measurements, together with precise GNSS data, allow in-situ neutral thermosphere mass density and crosswind to be derived. TU Delft maintains a database of precise thermosphere density and crosswind observations, which we continuously strive to improve by upgrading processing components. Currently, the datasets from the CHAMP, GRACE, GOCE, Swarm, and GRACE-FO satellites are included. These datasets are, however, provided without comprehensive uncertainty specifications. Quantifying the observational uncertainty is complex, considering the number and diversity of error sources. Recently, a method was developed to propagate various error sources (such as measurement noise and errors in the satellite specification, thermosphere models, and radiation flux data) and quantify their impact on thermosphere density derived from accelerometer and GNSS tracking data (Siemes et al., 2024). While the method was successfully applied to a few sample datasets from the GRACE-B satellite, the current implementation treats errors in atmospheric conditions such as temperature, density of constituents, and wind in a simplified way. In this presentation, we propose an extended and more advanced approach to model these often correlated variables. Moreover, we will extend the current method to quantify the uncertainty not only of density observations but also of crosswind observations. Finally, we will demonstrate the method's output by analyzing various use cases, e.g. for missions such as GOCE and GRACE-FO, for which new density and crosswind data were recently processed and released. The developed method will allow existing datasets to be supplemented with uncertainty information.
This will benefit both data users and data assimilation applications. Moreover, the method can be used to predict the capacity of future missions to observe thermosphere density and crosswind.
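The core relation behind accelerometer-derived densities is the drag equation; a minimal sketch with illustrative (not mission-specific) values shows why errors in the satellite specification propagate directly into the density:

```python
# Drag equation: a_drag = (1/2) * rho * (Cd * A / m) * v_rel^2
#   => rho = 2 * m * a_drag / (Cd * A * v_rel**2)
# Illustrative LEO values (assumed, not actual mission parameters)
m = 500.0        # satellite mass, kg
Cd = 2.5         # drag coefficient (itself a major uncertainty source)
A = 1.0          # cross-sectional area in flight direction, m^2
v_rel = 7.6e3    # velocity relative to the co-rotating atmosphere, m/s
a_drag = 1e-7    # along-track drag acceleration from the accelerometer, m/s^2

rho = 2 * m * a_drag / (Cd * A * v_rel**2)
print(f"{rho:.2e} kg/m^3")   # order 1e-13: a typical upper-thermosphere density
```

Since rho scales inversely with Cd and A, any uncertainty in these terms maps one-to-one into the derived density, which is exactly what the propagation method described above quantifies.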

Tuesday 24 June 08:30 - 10:00 (Hall E2)

Presentation: Ionospheric Joule heating and neutral density variations at low Earth orbits during geomagnetic storms

Authors: Heikki Vanhamäki, Marcus Pedersen, Anita Aikio, Lei Cai, Milla Myllymaa
Affiliations: University Of Oulu
Auroral Joule heating is one of the main energy sinks in the solar wind - magnetosphere - ionosphere system. During geomagnetic storms, intense Joule heating causes thermal expansion of the upper atmosphere, thus increasing the thermospheric density and satellite drag at low Earth orbits (LEO). This chain of events often begins when geoeffective solar wind transients, such as high-speed streams/stream interaction regions (HSS/SIR) or interplanetary coronal mass ejections (ICME), impact Earth’s space environment. The “Joule heating effects on ionosphere-thermosphere coupling and neutral density (JOIN)” research project is part of ESA’s “4D Ionosphere” initiative. In the JOIN project we determine the statistical distribution of auroral Joule heating in the northern hemisphere during geomagnetic storms using SuperMAG, SuperDARN and AMPERE data. This is correlated with the large-scale atmospheric density variations at LEO observed by the Swarm, GRACE and GRACE-FO satellites. The geomagnetic storms are further divided into categories based on the solar wind driver: HSS/SIR, and magnetic clouds or sheath regions inside ICMEs. Based on a superposed epoch analysis of 231 geomagnetic storms between 2014 and 2024, it is found that the Joule heating in the ionospheric E-region and the neutral density enhancements at the altitude of the Swarm and GRACE satellites show different characteristics depending on the geomagnetic storm driver. The Joule heating increases faster at the beginning of the storm main phase when the storm is initiated by a HSS/SIR or the sheath region of an ICME, while a more gradual and longer-lasting increase is found in storms driven by magnetic clouds within ICMEs. This is in line with previous results on the total field-aligned and ionospheric currents during storms (Pedersen et al., 2021, 2022).
The thermospheric density increases gradually during the storm main phase, and the enhancements are typically largest and longest-lasting for storms driven by magnetic clouds, due to the prolonged interval of increased Joule heating.
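For scale, the height-integrated Joule heating rate per unit area is commonly estimated as Q = Σ_P |E|²; the values below are illustrative storm-time numbers, not results from this study:

```python
# Height-integrated Joule heating rate per unit area: Q = Sigma_P * |E|^2
# Illustrative storm-time values (assumed)
sigma_p = 10.0      # Pedersen conductance, S (siemens)
E = 50e-3           # ionospheric electric field magnitude, V/m (50 mV/m)

Q = sigma_p * E**2  # W/m^2
print(f"{Q * 1e3:.1f} mW/m^2")
```

Because Q grows with the square of the electric field, storm-time field enhancements dominate the energy input, which is why the Joule heating response tracks the solar wind driver so closely.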

Tuesday 24 June 08:30 - 10:00 (Hall E2)

Presentation: Vertical Total Electron Content maps from SMOS radiometric data: Analysis of geomagnetic storms

Authors: Verónica González-Gambau, Dr. Nuria Duffo, Dr. Lorenzo Trenchi, Dr. Ignasi Corbella, Roger Oliva, Manuel Martín-Neira, Raffaele Crapolicchio
Affiliations: Barcelona Expert Center, Institute Of Marine Sciences, Csic, Universitat Politècnica de Catalunya, ESA-European Space Research Institute, Zenithal Blue Technologies, ESA-European Space Research and Technology Centre
As microwave radiation from Earth propagates through the ionosphere, the electromagnetic field components are rotated by an angle called the Faraday Rotation Angle (FRA). At the SMOS operating frequency (1.4135 GHz), the FRA is not negligible and must be compensated for to obtain accurate geophysical retrievals. The FRA can be estimated using a classical formulation [Le Vine et al., 2002] that makes use of total electron content and geomagnetic field data provided by external databases and models. Alternatively, it can be retrieved from full polarimetric radiometric data [Yueh et al., 2000]. The possibility of retrieving the FRA from SMOS radiometric data opens up the opportunity to also estimate the VTEC (Vertical Total Electron Content) of the ionosphere by using an inversion procedure from the measured FRA in the SMOS field of view. However, estimating the Faraday rotation from SMOS radiometric data for each pixel is not straightforward because of the presence of spatial errors in SMOS images. A new methodology was proposed to derive VTEC maps from SMOS radiometric measurements, which uses optimized spatio-temporal filtering techniques to be robust against the thermal noise and image reconstruction artifacts present in SMOS images [Rubino et al., 2020; 2022]. These derived VTEC maps can then be re-used in the SMOS level 2 processor for the correction of the FRA in the mission. We generated three years of SMOS-derived VTEC maps and, using these maps instead of the VTEC data from GPS measurements, analyzed the impact on the stability of brightness temperatures over the oceans. Results of this analysis showed that the usage of these new SMOS-derived VTEC maps allowed a significant enhancement in the quality of the brightness temperatures, which will lead to an improvement in salinity retrievals.
More recently, further improvements have been introduced in the methodology: (i) applying filtering/Radio-Frequency Interference mitigation techniques and (ii) estimating the uncertainty of the derived VTEC products, to improve the quality of the derived VTEC maps over strongly contaminated regions. This methodology will be implemented in the SMOS L1 processor to operationally generate the new SMOS-derived VTEC product. Solar activity can influence the ionization levels in the upper atmosphere, leading to variations in ionospheric properties. Additionally, space weather events, such as solar flares and geomagnetic storms, can further contribute to ionospheric disturbances [Zhai, 2023]. In this context, VTEC maps derived from satellite observations can be very useful for studying the ionospheric response to geomagnetic storms, complementing the measurements from ground stations. Besides the SMOS VTEC maps, the Swarm magnetic field mission, launched in 2013, is also providing VTEC data. Currently, we are comparing VTEC data from the SMOS and Swarm satellites, taking into account that they are obtained using different technologies (GNSS signals in the case of Swarm, a microwave radiometer in SMOS) and that they explore different layers of the ionosphere (Swarm: ~500 km upward, SMOS: ~750 km downward). First results show that under quiet geomagnetic conditions, VTEC data from both missions are consistent, with larger SMOS VTEC values (as expected, since it measures the lower and denser ionospheric layers), and both clearly capture the intensification of density around the magnetic equator. We have also used Swarm and SMOS VTEC data for monitoring geomagnetic storms (for example, the storms of November 2021 and May 2024). Preliminary results show that both capture a clear intensification of VTEC, with peaks spreading poleward during the main and early recovery phases of the storms.
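The classical FRA formulation referred to above, and its inversion to VTEC, can be sketched as follows (single-layer approximation; the field value and VTEC are illustrative):

```python
import numpy as np

# Classical single-layer formulation [Le Vine et al., 2002]:
#   FRA [rad] = (2.36e4 / f^2) * B_par * VTEC
# with f in Hz, B_par (geomagnetic field component along the ray) in tesla,
# and VTEC in electrons/m^2. 1 TECU = 1e16 electrons/m^2.
F_SMOS = 1.4135e9            # SMOS operating frequency, Hz
K = 2.36e4                   # SI constant of the Faraday rotation integral

def fra_from_vtec(vtec_tecu, b_par):
    vtec = vtec_tecu * 1e16                   # TECU -> electrons/m^2
    return K * b_par * vtec / F_SMOS**2       # radians

def vtec_from_fra(fra_rad, b_par):
    return fra_rad * F_SMOS**2 / (K * b_par) / 1e16   # TECU

b_par = 4e-5                                  # ~0.4 gauss along the ray, T
fra = fra_from_vtec(20.0, b_par)              # 20 TECU
print(f"FRA = {np.degrees(fra):.2f} deg")     # a few degrees at L-band
print(f"round-trip VTEC = {vtec_from_fra(fra, b_par):.1f} TECU")
```

The inversion is exact only once the FRA per pixel is known; in practice, the spatio-temporal filtering described above is what makes the per-pixel FRA usable despite the reconstruction artifacts.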

Tuesday 24 June 08:30 - 10:00 (Hall N1/N2)

Session: D.05.05 CDSE User Review Meeting - Annual User Review: Co-Creating the Copernicus Data Space Ecosystem

This session centers on the important role of users in actively shaping the Copernicus Data Space Ecosystem. By co-creating tools, services, and data solutions, users are directly influencing how the ecosystem grows to meet real-world needs. Additionally, through their feedback, users are shaping the way the consortium develops the tools and datasets offered in the ecosystem. We will showcase feedback from our user community and share specific examples of how these insights have led to impactful improvements and innovations. We will present the results of a comprehensive Yearly User Review Survey that captures diverse perspectives from across the ecosystem: user experiences, valuable user insights, emerging needs, and priorities. This will be followed by a Live Interactive User Session moderated by the Copernicus Data Space Ecosystem team, inviting participants to join the discussion, ask questions, and share their perspectives in a dynamic, collaborative setting as a vital part of shaping the CDSE direction.

Presentations and speakers:


Opening of the CDSE User Review Meeting 2025 by ESA


  • ESA/EC

Latest advancements in the Copernicus Data Space Ecosystem (CDSE)


  • Jurry de la Mar – T-Systems
  • Jan Musial – CloudFerro
  • Grega Milcinski – Sinergise

Results of the CDSE User Satisfaction Survey 2025


  • Dennis Clarijs – VITO Remote Sensing

The State of Earth Observation Platforms: Towards Data Fusion, AI-Readiness and Vertical Specialization


  • Aravind Ravichandran – TerraWatch Space

Tuesday 24 June 08:30 - 10:00 (Room 1.34)

Session: C.06.06 Global Digital Elevation Models and geometric reference data

One of the most important factors for the interoperability of any geospatial data is accurate co-registration. This session will be dedicated to recent developments and challenges around the availability, quality, and consistency of reference data required for accurate co-registration of remotely sensed imagery at global scale. It will provide a forum for the results of studies performed under CEOS-WGCV, EDAP, and other initiatives aiming at the quality assurance and harmonisation of DEMs, GCPs and reference orthoimagery used by different providers worldwide.


Tuesday 24 June 08:30 - 10:00 (Room 1.34)

Presentation: Generative Modelling of Terrain with Sentinel-2 and COP-DEM

Authors: Paul Borne--Pons, Mikolaj Czerkawski, Rosalie Martin, Romain Rouffet
Affiliations: ESA, Φ-lab, Adobe Research
Terrain modelling is a common and challenging task in the creative industry, particularly relevant in domains such as video games and VFX (visual effects). It is a complex and time-consuming task, particularly when it involves large-scale landscapes. Large scenes are becoming more and more common with the current boom in popularity of open-world games. The current state of the art in terrain modelling relies mainly on procedural and simulation methods, which tend not to scale well beyond a certain point (becoming too computationally expensive or lacking realism) and, most importantly, often fail to capture the variety of landscapes the world offers. Recent advances in generative machine learning, especially denoising diffusion models, have paved the way for tools that can learn and model visual representations directly from data. In this work, we leverage these advances and the availability of rich Earth terrain data in the Copernicus programme. Specifically, the representation of terrain is defined here as a 2.5D combination of the optical visible bands and a supporting channel containing elevation information. By abstracting the complexity of the underlying physical processes that interact to shape terrain, the model can generate patterns and mutual dependencies between terrain features, hence achieving perceptual realism. In this work, a generative diffusion model is trained on a global collection of Sentinel-2 Level-2A data and the Copernicus DEM. To provide a source of AI-ready training data, an expansion dataset with cropped and reprojected global COP-DEM 30 m data has been built and released openly and for free on the HuggingFace platform (1,837,843 images in total), formatted as an expansion dataset of Major TOM (an open AI-ready dataset project conceived in ESA Φ-lab). Subsequently, a set of text captions was obtained for each of the Sentinel-2+DEM pairs.
During training, the model learnt to recreate each DEM and Sentinel-2 image pair from its corresponding text description, and can therefore generate new Sentinel-2 and COP-DEM pairs from text captions. Preliminary results demonstrate high quality in the generated data, which generally matches the user's text prompt well. Mechanisms for quantitative evaluation are currently being investigated. As a result, creative professionals, such as game designers, are now able to use the model to quickly prototype terrains or to feed its output into further post-processing. At the current stage the model is primarily designed for creative applications; its potential benefit in scientific applications will be explored in future work.
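The 2.5D representation described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code: the patch size, the use of random placeholder inputs, and the per-patch min-max normalization of elevation are all assumptions.

```python
import numpy as np

# Hypothetical tile size; real Major TOM patches may differ.
H, W = 256, 256

# Placeholder inputs: Sentinel-2 visible bands (reflectance in [0, 1])
# and a Copernicus DEM elevation patch in metres.
rgb = np.random.rand(H, W, 3).astype(np.float32)
dem = np.random.uniform(0.0, 3000.0, size=(H, W)).astype(np.float32)

def to_terrain_sample(rgb, dem):
    """Stack optical bands and elevation into one 2.5D training sample.

    Elevation is rescaled per patch to [0, 1] so the generative model
    sees inputs on a comparable numeric range across tiles.
    """
    lo, hi = dem.min(), dem.max()
    dem_norm = (dem - lo) / max(hi - lo, 1e-6)
    # Result: H x W x 4 array (R, G, B, normalized elevation).
    return np.concatenate([rgb, dem_norm[..., None]], axis=-1)

sample = to_terrain_sample(rgb, dem)
print(sample.shape)  # (256, 256, 4)
```

A model conditioned on text captions would then be trained to denoise such four-channel samples jointly, so that image and elevation stay mutually consistent.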
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 1.34)

Presentation: TerraSAR-X and TanDEM-X Mission Overview and System Status Update

Authors: Allan Bojarski, Dr. Markus Bachmann, Johannes Böer, Christopher Wecklich, Dr Patrick Klenk, Dr Kersten Schmidt
Affiliations: German Aerospace Center (DLR) - Microwaves and Radar Institute
TerraSAR-X and its almost identical twin satellite TanDEM-X continue to acquire high-resolution radar images and digital elevation models with unprecedented accuracy, far beyond their expected lifetime. Since launch, the synthetic aperture radar image quality and resolution have remained constant, owing to a very stable instrument but also to an elaborate ground segment. As a result, the bistatic mission, which was designed for 5.5 years to generate a single global Digital Elevation Model, could be extended. In the following and still ongoing TanDEM-X 4D phase, constant updates of the global dataset are acquired, showing in particular three-dimensional elevation and terrain changes over time. This is performed especially in areas where significant changes are expected, for example in mountainous regions or on glaciers. In this way, the mission has generated a unique dataset, adding change layers to the existing global Digital Elevation Model and showcasing natural and man-made topographic transformations over the last decade. Furthermore, various timelines could be dedicated specifically to scientific acquisitions, focusing on rapidly changing areas such as forest or permafrost regions and enabling experimental modes such as concurrent imaging. Since both satellites are still in good condition, the ongoing missions will be continued, providing continuous updates and extending this unique dataset as long as possible. To give an outlook on the remaining lifetime expectancy, this contribution will give a detailed overview of the current status of the satellites' systems, the remaining onboard consumables, and recent as well as upcoming operational challenges. We will present geometric calibration results using on-ground targets, as well as antenna pattern measurements and baseline calibration datatakes, to demonstrate the excellent radiometric stability of the system.
Furthermore, since the battery is the fastest-depleting resource, the discussion of consumables will focus in particular on the progression of battery ageing, the assessment of battery capacity, and the implications for mission planning. For completeness, the remaining fuel and the corresponding lifetime limitation will also be addressed. Additionally, recent external and internal events will be discussed that temporarily affected the position determination and the synchronization of the satellites; in this context, solutions to overcome the associated acquisition constraints will be given. In summary, the objectives of this contribution are to validate the good condition of both satellites after 14 and 17 years in orbit, and to demonstrate the strategies and adaptations used to extend their lifetime while maintaining image quality.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 1.34)

Presentation: The TanDEM-X 4D Phase – Input for a potential future update of the Copernicus DEM

Authors: Markus Bachmann, Maximilian Schandri, Thomas Kraus, Allan Bojarski, Johannes Böer, Manfred Zink
Affiliations: German Aerospace Center (DLR)
TanDEM-X Mission
The TanDEM-X mission is the first bistatic SAR mission with two satellites [1]. It was realized by placing a second satellite (TDX) in close formation with TerraSAR-X (TSX) [2]. The primary mission goal was to deliver a global Digital Elevation Model (DEM) at a 12 m posting with a relative vertical accuracy of 2 m/4 m for terrain slopes less/steeper than 20%. The first global coverage was mainly acquired between 2010 and 2015 with outstanding height performance [3].

Copernicus DEM – relation to the TanDEM-X DEM
The Copernicus DEM is a publicly available global digital elevation model with 30 m horizontal resolution. It was generated on the basis of the TanDEM-X Global DEM: the bistatic TanDEM-X data acquired between 2010 and 2015 was processed by 2016 into a global, consistent DEM dataset, the TanDEM-X Global DEM, which is available to scientific users via the TanDEM-X Science Service [4]. This DEM dataset was then edited by Airbus Defense and Space GmbH and provided to commercial customers in the form of the WorldDEM: water bodies and cities were flattened in a largely manual editing process, and data gaps in the DEM were filled using external reference DEMs [5]. Finally, this data was provided to ESA and published as the Copernicus DEM.

TanDEM-X DEM 2020 (2017–2020)
From 2015 until 2017, several additional scientific coverages of forest regions and the cryosphere, as well as high-resolution DEMs for demonstration purposes, were generated. This was followed by a second global coverage between 2017 and 2020. Using the experience from the preceding global coverages, the acquisitions were performed with an optimized acquisition strategy, respecting acquisition-time constraints for areas with large seasonal variation, such as the Arctic region or temperate and boreal forests [6]. As a result, the consistency of the data could be further improved and the need for reacquisitions due to seasonal effects was minimized. The acquisitions were processed and are publicly available in the form of DEM Change Maps generated by DLR/IMF, providing the height change with respect to the TanDEM-X Global DEM [7]. In addition, a dataset similar to the global TanDEM-X DEM is currently being generated by DLR/DFD as the "TanDEM-X DEM 2020" [8].

TanDEM-X 4D Phase (2021–2028)
Currently, the TanDEM-X mission is in its TanDEM-X 4D phase. As the fourth dimension, the time aspect is incorporated through regular monitoring of dedicated areas. For this purpose, regions of high scientific interest are acquired repeatedly in a biennially alternating sequence: in each first year, the global forests and Arctic areas are acquired; during each second year, other areas with large height changes are the focus of the satellites. The remaining areas, about one third of the Earth, are interspersed over the years in order to obtain a third global coverage by 2028 and to allow a homogeneous utilization of the satellites. One main constraint in this phase is the advanced age of the two satellites. Consequently, the use of satellite resources such as propellant and battery was optimized in order to prolong the mission as long as possible. The final paper and the presentation will focus on these challenges and give an outlook on the future observation plan.

References
[1] G. Krieger et al. (2007) TanDEM-X: A Satellite Formation for High Resolution SAR Interferometry. TGRS, Vol. 45, no. 11.
[2] M. Zink et al. (2021) TanDEM-X: 10 Years of Formation Flying Bistatic SAR Interferometry. JSTARS, DOI: 10.1109/JSTARS.2021.3062286
[3] P. Rizzoli et al. (2017) Generation and performance assessment of the global TanDEM-X digital elevation model. JPRS, DOI: 10.1016/j.isprsjprs.2017.08.008
[4] TanDEM-X Science Service, https://tandemx-science.dlr.de/, last accessed 2024-12-02
[5] E. Fahrland, "Copernicus DEM Product Handbook," 25 June 2020
[6] M. Bachmann et al. (2021) The TanDEM-X Mission Phases – Ten Years of Bistatic Acquisition and Formation Planning. JSTARS, DOI: 10.1109/JSTARS.2021.3065446
[7] M. Lachaise et al. (2024) The TanDEM-X 30m DEM Change Maps: applications and further developments. EUSAR, Garching, Germany, 2024.
[8] B. Wessel et al. (2022) The new TanDEM-X DEM 2020: generation and specifications. EUSAR, Leipzig, Germany, 2022.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 1.34)

Presentation: The impact of differences between Global Digital Elevation Models in geolocation with ESA EOCFI

Authors: Ludo Visser, Montserrat Pinol Sole, Michele Zundo
Affiliations: Akkodis for ESA/ESTEC, ESA/ESTEC
Global Digital Elevation Models (GDEMs) play an important role in geolocation during the processing of sensor data in Earth observation missions. The models provide elevation information and, sometimes, surface masks for a given latitude and longitude. The ESA Earth Observation CFI software libraries (EOCFI), which are used in the data processing pipelines of many Earth observation missions, provide a standardized interface for querying various DEMs, including the ACE-2, GETASSE and Copernicus global elevation models, at multiple resolutions. Together with the routines for spacecraft state propagation (on-orbit position, velocity and attitude) and instrument pointing, the EOCFI libraries provide a complete toolbox for geometry and geolocation in data processors. However, their versatility in handling the various available elevation models naturally raises the question of whether all models are equally suitable and, if not, which model should then be used. To help answer these questions, we explore several aspects of the available elevation models in relation to the geolocation routines in EOCFI, such as the reference surface (geoid versus ellipsoid) and the resolution. The handling of water surfaces is also discussed: the presence of a surface (land/sea) mask, the presence of bathymetric data, and the interpretation of "sea level" water pixels near to and far from the coast. The discussion points are illustrated by examples, which will show the impact of elevation model characteristics on geolocation and how "similar" elevation models (e.g., models with the same spatial resolution) can lead to very different geolocation results. The impact of spacecraft attitude and instrument pointing is also considered. The purpose of the discussion is not only to create awareness of the fundamental differences between the available DEMs, but more importantly to help develop an understanding of how these differences affect geolocation in EOCFI.
This will hopefully aid in choosing a suitable elevation model for a particular processor, considering mission design and observation objectives.
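The geoid-versus-ellipsoid distinction discussed above amounts to a simple per-location offset. The following sketch is not the EOCFI API (which is a C library); it only illustrates, with purely illustrative numbers, the conversion a processor must apply before intersecting a pointing ray with the terrain over the ellipsoid.

```python
# Minimal sketch: converting a geoid-referenced DEM height to an
# ellipsoidal height. N is the geoid undulation at the query point
# (e.g. interpolated from a geoid model such as EGM2008), positive
# where the geoid lies above the ellipsoid.
def ellipsoidal_height(h_geoid_m, geoid_undulation_m):
    """h_ellipsoid = h_geoid + N."""
    return h_geoid_m + geoid_undulation_m

# Example (illustrative values): a DEM reports 120.0 m above the geoid
# and the local undulation is +47.3 m, so the ellipsoidal height is
# 167.3 m. Skipping this step would shift the terrain intersection,
# and hence the geolocation, by tens of metres of height.
h = ellipsoidal_height(120.0, 47.3)
print(h)  # 167.3
```

Mixing DEMs with different vertical references without such a correction is one way "similar" elevation models can produce very different geolocation results.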
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 1.34)

Presentation: Sentinel-2 GRI: Guidelines for Optimal Use and Outcomes of the Study for a New Version

Authors: Emmanuel Hillairet, Sebastien Clerc, Antoine Burie, Simon Rommelaere, Seetha Palla, Silvia Enache, Rosalinda Morrone, Valentina Boccia
Affiliations: Cs Group, ACRI-ST, STARION Group, ESA/ESRIN
Thanks to their regular worldwide acquisitions and accurate geolocation (<5 m CE90), the Sentinel-2 satellites provide valuable references for other missions. MicroCarb and TRISHNA, institutional missions involving the French space agency (CNES), use Sentinel-2 acquisitions as geometric and radiometric references. New Space missions (for instance the Copernicus Contributing Missions, CCM) are also using, or intend to use, Sentinel-2 as a reference. On behalf of ESA, the OPT-MPC (Optical Mission Performance Cluster) team publicly provides the Copernicus Sentinel-2 Global Reference Image (GRI), via https://sentinels.copernicus.eu/web/sentinel/global-reference-image and soon via the Copernicus Data Space Ecosystem (CDSE). These mono-spectral (red band) databases of L1C multi-layer tiles and ground control points (GCPs) correspond to acquisitions from the early stages of the mission (2015-2017). Even though the current GRI is of great interest due to its worldwide extent and accurate geolocation, some limitations have appeared (unequal GCP density, some temporally unstable GCPs, non-negligible cloud cover, landscape changes since its generation), and a new GRI is under study, aiming to take advantage of the full available archive of Sentinel-2 acquisitions and of stricter rules for extracting the GCPs. The proposed presentation will address the following topics: - First, an introduction to the current GRI: how it was built, its geolocation performance, and the advantages and limitations of its use. - Then, proposed guidelines for optimal use: depending on the context (sensor bands, resolution and swath), whether to use the GCPs, the tiles, or even a better-suited product from the archive (more recent, adapted spectral band).
- Finally, a presentation of the outcomes of the study dedicated to the generation of a new, up-to-date GRI: statistics from screening the full archive, ways to tackle remaining cloudy areas, and methods to extract and validate the GCPs from the selected tiles. This presentation will also be an opportunity to initiate discussions with users, or potential users, of Sentinel-2 acquisitions as references for their missions, and to exchange experience and feedback, with the objective of continuously improving the Global Reference Image.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 1.34)

Presentation: Towards a multi-source and multi-scale DTM/DSM

Authors: Dr. Serge Riazanoff, Mr. Kévin Gross, Mr. Axel Corseaux
Affiliations: VisioTerra
During the last decade, numerous medium-resolution (MR) global Digital Elevation Models (DEMs) and very-high-resolution (VHR) DEMs of cities, regions and countries have been released publicly under very permissive licenses. In the framework of the Earthnet Data Assessment Project (EDAP) and the DEM Intercomparison eXercise (DEMIX), local VHR DEMs (1 to 2.5 m resolution) have been used as reference data to assess the quality of well-known global DEMs (1 arcsecond, approximately 30 m resolution at the equator). These assessments underlined the various criteria of interest for DEM users, from very generic metrics (horizontal and vertical accuracies as RMSE or linear/circular error) to thematic ones (accurate depiction of coasts, crests, buildings, vegetation…). Nowadays, these user requirements cannot be fulfilled by a static DEM, as they depend on the scale, features and possibly the time of interest. The profusion of public DEMs offers an opportunity to create a multi-source and multi-scale DEM fitting a wide variety of users' needs. While promising, this idea raises several technical challenges, such as the collection of input elevation data, the geometric transformations to international datums, the resampling methods, the generation of Digital Terrain Models (DTMs) from Digital Surface Models (DSMs), and cross-border merging algorithms. Through DEMIX and EDAP, VisioTerra has performed assessments of the geometry and transformations of local VHR DEMs, as well as of the impact of resampling methods on DEM products. Additionally, VisioTerra has developed the DEMIX Operations Platform, which allows users to retrieve a selection of DEMs with custom export parameters such as the Ground Sampling Distance (GSD), Coordinate Reference System (CRS), Vertical Reference System (VRS), resampling method (interpolation or aggregation) and pixel type (point or area).
The experience gained from these studies and tools, along with ongoing and future work on DEM merging and surface-to-bare-earth conversion algorithms, could pave the way to the creation of a multi-scale and multi-source DSM/DTM.
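The interpolation-versus-aggregation distinction among the export options above can be illustrated with a minimal NumPy sketch. This is not the DEMIX Operations Platform code; the array sizes are illustrative, and nearest-neighbour stands in for the interpolation family.

```python
import numpy as np

# Coarsening a DEM by an integer factor k, two ways: aggregation
# averages all source pixels falling in each output cell, while a
# nearest-neighbour interpolation picks one sample per cell.
def aggregate_mean(dem, k):
    h, w = dem.shape
    # Trim to a multiple of k, then block-average k x k windows.
    return dem[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def nearest(dem, k):
    # Take every k-th sample in each direction.
    return dem[::k, ::k]

dem = np.arange(16, dtype=float).reshape(4, 4)
print(aggregate_mean(dem, 2))  # [[ 2.5  4.5] [10.5 12.5]]
print(nearest(dem, 2))         # [[ 0.  2.] [ 8. 10.]]
```

On real terrain the choice matters: aggregation smooths crests and valleys, while sampled interpolation preserves local values but can alias fine relief, which is why the platform exposes both.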
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall M1/M2)

Session: D.03.04 Innovative technologies, tools and strategies for scientific visualisation and outreach

In recent years there have been advances in communicating science-based information and facts to citizens through open-access and open-source web-based solutions and mobile applications. In Earth observation, these solutions use innovative ways of presenting EO-based indicators and data, often coupled with storytelling elements to increase accessibility and outreach. Additionally, such innovations, coupled with data access and computation on cloud-based EO platforms, are very effective tools for scientific data dissemination as well as for education in EO and Earth science. In this session we welcome contributions on innovative web-based solutions, dashboards, advanced visualisation tools, and other new technologies and use cases for scientific communication, dissemination and education. In particular, we seek to explore how such solutions help increase the impact of science, create and grow communities, and stimulate the adoption of EO. We also look towards the future, exploring trends and opportunities to connect with non-EO communities and to adopt new technologies (e.g. immersive tools, AR, VR, gaming engines). The session will be an opportunity to exchange experiences and lessons learned, and to explore opportunities for further collaboration.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall M1/M2)

Presentation: Biodiversity and Climate Change: Coral Reef Visualization Using Immersive Digital Twins (VR)

Authors: Rainer Ressl, Thomas Heege, Knut Hartmann, Eva Haas, Genghis Borbolla, Veronica Aguilar, José Davila, Raúl Jimenez
Affiliations: EOMAP GmbH, National Commission for the Knowledge and Use of Biodiversity - CONABIO
Immersive virtual reality (VR) has proven to be an effective tool for enhancing understanding by creating a sense of spatial presence, where users feel as if they are "there" in a virtual environment. This capability makes VR particularly valuable for visualizing geospatial phenomena. However, much of the existing research on immersion and presence takes place under laboratory conditions, while studies focusing on virtual representations of real-world environments remain limited. Integrating VR with data from real locations with true geometries opens new opportunities for scientific research, public engagement, and environmental conservation. VR visualization offers not only realistic and immersive experiences but also enables effective monitoring of fragile ecosystems by combining data from Earth Observation (EO) satellites and biodiversity surveys. Such integration provides a platform for presenting complex scientific findings in accessible and visually engaging formats, making them understandable to broader audiences. This fosters environmental awareness and supports decision-making for conservation efforts. Our prototype demonstrates the potential of immersive VR by visualizing the biodiversity of a tropical coral reef off the Mexican Caribbean coast. Using precise 3D models derived from geospatial data, we recreate the geometry and habitats of the reef in unparalleled detail. This combination of EO-derived baseline information, such as bathymetry and benthic habitat maps, with in-situ biodiversity data allows users to explore coral reefs in a highly realistic way. The result is an authentic VR environment that provides a deeper understanding of these ecosystems. Future enhancements will incorporate Sentinel-3 sea surface temperature data, enabling users to explore the impacts of climate change scenarios, such as coral bleaching events, on species and habitats.
By visualizing these dynamics, users gain a clearer understanding of how climate stressors affect coral reefs and their associated biodiversity. The immersive VR experience goes beyond static visualization by allowing users to interact with ecosystems under different environmental conditions or future scenarios. This interaction educates users about the significance of these ecosystems, builds empathy, and inspires a sense of stewardship. It also promotes sustainable tourism by emphasizing the importance of conserving natural resources. By highlighting the ecological value of these ecosystems, our immersive VR prototype contributes to both public awareness and environmental protection policies. While the tropical coral reef serves as an initial showcase, the methodology can be applied to other fragile ecosystems, such as mangroves and other coastal habitats. This flexibility ensures broad applicability for future developments in ecosystem monitoring and conservation.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall M1/M2)

Presentation: Xcube UI: The Next-Generation Interactive Visualization & Communication Platform

Authors: Norman Fomferra, Yogesh Kumar Baljeet Singh, Gunnar Brandt
Affiliations: Brockmann Consult
Effective communication and dissemination of Earth Observation (EO) data in the digital era requires appealing, intuitive, and interactive visualisation tools. Numerous toolkits, applications, and services exist to manage and visualise the vast volumes of EO data and other sources, ranging from simple plotting libraries used during the development phase to sophisticated online viewers, dashboards, and cloud-based visualisation services. However, creating and disseminating impactful, tailored visualizations of raw EO data, paired with interactive analysis and processing features, often requires multiple specialized tools and substantial expertise in diverse techniques. As a consequence, typical research users often need support from external experts or third-party services, which makes the entire process cumbersome, costly, and time-consuming. Many research findings therefore remain insufficiently visualised, impairing their impact. To address this issue, we extended the xcube ecosystem with xcube UI, an open-source user-interface framework that enables Python-literate researchers to easily create customized online viewers, dashboards, and even toolboxes. The xcube ecosystem facilitates access to and processing of EO data, e.g. within Jupyter notebooks, and the new visualisation framework makes developing and displaying tailored visualisation apps as straightforward as using common plotting libraries. Researchers can leverage xcube data stores for convenient data access, even from remote cloud sources, resulting in xarray datasets that can be further processed and eventually handed over to the xcube server, the back-end for any xcube visualisation app. This setup enables interactive visualisations with just two lines of code. In JupyterLab, the app can either be started and operated in full-screen mode in a separate browser tab or embedded inline in a notebook.
The xcube UI framework provides a complete EO data viewing and analysis application out of the box, offering an interactive map alongside configurable charts and featuring numerous tools such as layer and colourmap management, animations, feature data integration, on-the-fly computation of user-defined variables, and split screen. Additionally, users can tailor the app by adding and configuring extra panel components, which can combine any charts from the powerful Vega-Altair library with Material UI components and the data of their choice, enabling the creation of high-quality, customizable diagrams. Other charting, visualization, or UI component libraries can easily be supported. This flexibility allows users to prototype individualized visualisation apps on their own by creating individual configurations, eliminating the need for external assistance. These tailored visualisation apps can subsequently be deployed publicly without further adaptation, either on the user's own infrastructure or through platforms that offer hosting services, such as DeepESDL. In our presentation, we will showcase an end-to-end workflow for publishing data cubes with EO data, alongside several examples that highlight the adaptability of the xcube framework and the advantages of its powerful yet user-friendly approach to developing interactive, tailored web applications for data visualization and dashboards. The xcube UI framework's open licensing, seamless integration into Python's data science ecosystem, and compatibility with Jupyter or standalone web deployment greatly facilitate the dissemination of EO data and research results to a wider audience. By enabling researchers to independently create and deploy interactive visualisations and toolboxes, the framework supports the democratization of research and aligns with the European Space Agency's ambition to foster open science and boost collaborative innovation.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall M1/M2)

Presentation: An Interactive Scientific Visualization Toolkit for Earth Observation Datasets

Authors: Lazaro Alonso, Jeran Poehls, Nuno Carvalhais, Markus Reichstein
Affiliations: Max-Planck Institute for Biogeochemistry
Visualizing and analyzing data is critical for identifying patterns, understanding impacts, and making informed predictions. Without intuitive tools, extracting meaningful insights becomes challenging, diminishing the value of collected information. FireSight[1], an open-source prototype developed within the Seasfire[2] project, addresses these challenges by offering a data-driven visual approach to fire analysis and prediction. Other tools, such as LexCube[3], which focuses on visualizing 2D textures in 3D space, or the CarbonPlan initiative[4], which specializes in 2D maps from Zarr stores, provide additional methods for interacting with spatial data. While these tools excel in specific areas, FireSight's comprehensive visualization enhances multidimensional analysis, enabling users to derive deeper insights from complex datasets. The toolkit leverages advanced web technologies to deliver interactive and visually compelling 3D volumetric renders. Its design allows users to easily customize the interface by integrating modern user interface (UI) components. The platform provides an intuitive browser experience powered by React, the OpenGL Shading Language, and ThreeJS. Through a web-based interface, users can interactively select variables from different data stores, dynamically explore data in 2D and 3D where applicable, and calculate relationships between variables. A key objective is to enhance the visualization of observational data and modeling outputs, supporting the interpretation and communication of results. The visualization toolkit offers several key features: (1) users can dynamically explore data, selecting any variable and viewing it in both 2D and 3D when a time dimension is available. (2) Relationships between variables can be calculated, enhancing analytical capabilities for deeper data insights.
(3) The tool supports the visualization of various Earth observation datasets, which can serve as inputs for modeling frameworks, ensuring flexibility in data exploration. (4) Finally, the FireSight code base is released as open source on GitHub, with detailed instructions for installation and operation. Currently, plotting is restricted to the entire spatial extent of a dataset, requiring a local dataset for streaming information. However, the chunking method of the Zarr data format offers potential for cloud-based EO platforms to enable pixel-level exploration. This capability would facilitate the visualization of complex modeling outputs without excessive data transfer. Aligned with Open Science principles, FireSight development incorporates community-driven libraries such as React, ThreeJS, and Zarr.js, while actively contributing to repositories like react-tweakpane[5]. The platform emphasizes modularity to ensure adaptability for future EO applications and interdisciplinary outreach. This presentation will explore the platform's design philosophy, technical implementation, and future expansion plans, including the integration of pyramid data schemes for high-resolution datasets. These advancements pave the way for next-generation scientific data exploration. By fostering open innovation, FireSight aims to bridge the gap between Earth Observation researchers, educators, and non-specialist communities, amplifying the impact of scientific endeavors and encouraging cross-disciplinary collaboration. [1] https://github.com/EarthyScience/FireSight [2] https://seasfire.hua.gr/ [3] https://www.lexcube.org/ [4] https://carbonplan.org/blog/maps-library-release [5] https://github.com/MelonCode/react-tweakpane/pull/3
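The chunk-based access pattern behind the Zarr remark above can be sketched with plain index arithmetic; the raster and chunk sizes below are hypothetical.

```python
# Sketch of why chunked storage (as in Zarr) enables pixel-level
# exploration from the cloud: a client only fetches the chunks whose
# index ranges cover the requested pixel, not the whole array.
def chunk_for_pixel(y, x, chunk_shape=(256, 256)):
    """Return the (row, col) index of the chunk containing pixel (y, x)."""
    cy, cx = chunk_shape
    return (y // cy, x // cx)

# A 10240 x 10240 raster in 256 x 256 chunks has 40 x 40 = 1600 chunks,
# yet reading one pixel's value touches exactly one chunk per time step.
print(chunk_for_pixel(5000, 300))  # (19, 1)
```

With an HTTP range request per chunk key, a browser client can thus stream a single pixel's time series without downloading the full dataset.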
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall M1/M2)

Presentation: Lexcube: A Multi-Platform Ecosystem for Interactive Data Cube Visualization and Exploration

Authors: Maximilian Söchting, Prof. Dr. Gerik Scheuermann, David Montero, Miguel D. Mahecha
Affiliations: Remote Sensing Centre for Earth System Research (RSC4Earth), Leipzig University, Institute for Earth System Research and Remote Sensing, Leipzig University, Image and Signal Processing Group, Leipzig University, ScaDS.AI (Center for Scalable Data Analytics and Artificial Intelligence), German Centre for Integrative Biodiversity Research (iDiv)
In Earth observation (EO) and modeling, the exploration and understanding of large-scale, multi-dimensional data remains a significant challenge. Data visualization and interactive exploration tools are crucial for enabling data interpretation, model development and scientific workflows, but are faced with technological challenges such as increasing data set sizes and resolutions, being complex to use or not being well-integrated into existing scientific workflows. We present Lexcube, an ecosystem of tools designed to bridge the gap between complex Earth observation data sets and intuitive visual exploration through interactive 3D data cube visualization approaches. The Lexcube ecosystem consists of five main components: (1) a public web-based platform (lexcube.org), (2) an open-source Python-based Jupyter notebook plugin, (3) a customized data cube visualization interface suitable for museum exhibits, (4) a physical interactive touch cube and (5) data cube paper craft templates as visualization outputs. The web platform (1) provides immediate access to preset datasets, enabling users to explore and understand high-resolution Earth observation data through the interactive 3D data cube visualization, even on smartphones and tablets. The Jupyter plugin (2) extends these capabilities to any gridded dataset compatible with Xarray or NumPy, allowing researchers to seamlessly employ interactive visualizations in their existing workflows in Jupyter notebooks. Utilizing a customized data cube visualization interface with explainer texts and a simplified user experience (3), Lexcube has been successfully exhibited at various German institutions and demonstrated its capability for science communication, allowing visitors to interactively explore local or global EO data. 
Combining the tactility of physical interaction with the existing visualization capabilities, we conceptualized and built an interactive touch cube (4) that brings a fully interactive data cube into the real world, transforming the digital visualization into a tangible experience. Similarly, any Lexcube deployment allows exporting the current data cube visualization as a paper craft template (5), enabling users to create physical representations of their data cubes and subselections and providing an engaging method for science communication and education. The ecosystem shares a common Lexcube application core, which enables consistent functionality and user experience across all platforms and facilitates the deployment of new features and improvements across the ecosystem. Current developments focus on implementing 3D volume visualization capabilities, specifically designed for exploring extreme events within datasets. This enhancement will allow users to "look inside" their data cubes, providing new insights into the spatial and temporal characteristics of extreme events across all Lexcube deployments. In this contribution, we demonstrate how the Lexcube ecosystem advances data visualization in Earth observation, scientific workflows and outreach by providing multiple, complementary approaches to data exploration. We showcase how the integration of digital and physical interactions enhances data understanding and communication across different user groups, from researchers to the general public.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall M1/M2)

Presentation: Innovative discovery and analysis tools for multisensor exploitation

Authors: Lucile Gaultier, Fabrice Collard, Dr. Craig Donlon, Ziad El Khoury Hanna, Sylvain Herlédan, Guillaume Le Seach
Affiliations: Oceandatalab, ESA
Nowadays, a wide variety of observations from different sensors at different processing levels, as well as in-situ observations and models, provide us with estimates of ocean geophysical variables at scales ranging from the submesoscale (a few hundred meters) to the large scale (hundreds of kilometers). For instance, the Sentinel 1-2-3-6 program encompasses sensors such as SAR, ocean colour, brightness temperature and altimetry instruments, each with a long individual revisit time but a rapid revisit from a constellation perspective. Exploiting the synergy of these various sources is essential to our understanding of ocean dynamics, while remaining aware of their respective potential and limitations. Despite the wealth of data, discovering, collocating, and analyzing a heterogeneous dataset can be challenging and act as a barrier for potential users wishing to leverage Earth Observation (EO) data. Accessing low-level data and preparing them for analysis requires a diverse set of skills. Addressing this challenge, the Ocean Virtual Laboratory Next Generation (OVL-NG) project has developed two tools, which will be introduced. Firstly, online data visualization websites, such as https://ovl.oceandatalab.com, have been made publicly accessible. The OVL portal empowers users to explore various satellite, in-situ, and model data with just a few clicks. Users can navigate through time and space, easily compare hundreds of products (some in Near Real-Time), and utilize drawing and annotation features. The OVL web portal also facilitates sharing interesting cases with fellow scientists and communicating about captivating oceanic structures using the embedded SEAShot tool. Secondly, a complementary tool named SEAScope offers additional features for analyzing pre-processed data and user-generated data. SEAScope is a free and open-source standalone application compatible with Windows, Linux, and macOS. It allows users to collocate data in time and space, rendering them on a 3D globe.
Users can adjust rendering settings on the fly, extract data over a specific area or transect, and interface with external applications like Jupyter notebooks. This functionality enables users to extract data on a shared grid, analyze them, and import the results back into SEAScope for visualization alongside the input data. Join us at the ESA booth for live demonstrations of the OVL tools—explore and interact with Earth Observation data firsthand!
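The temporal side of such collocation can be sketched conceptually. The toy example below is an illustration only, not the SEAScope API, and the timestamps and values are made up: it matches each satellite observation to the nearest in-situ measurement within a tolerance window.

```python
from datetime import datetime, timedelta

def collocate(sat_obs, insitu_obs, tolerance=timedelta(hours=3)):
    """Match each (time, value) satellite observation to the nearest-in-time
    in-situ observation, keeping only pairs within the tolerance window.
    Real collocation tools additionally match in space on a shared grid."""
    pairs = []
    for t_sat, v_sat in sat_obs:
        best = min(insitu_obs, key=lambda o: abs(o[0] - t_sat), default=None)
        if best is not None and abs(best[0] - t_sat) <= tolerance:
            pairs.append((t_sat, v_sat, best[1]))
    return pairs

# Hypothetical SST values: one satellite pixel vs. two buoy readings.
sat = [(datetime(2025, 6, 24, 10, 0), 18.2)]
buoy = [(datetime(2025, 6, 24, 9, 15), 18.0),
        (datetime(2025, 6, 23, 10, 0), 17.1)]
print(collocate(sat, buoy))  # -> one matched satellite/buoy pair
```

The matched pairs can then feed scatter plots or bias statistics of the kind typically produced when analyzing extracted data in a notebook.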
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall M1/M2)

Presentation: Earth Observation Science Storytelling with Dashboards

Authors: Dr Anca Anghelea, Manil Maskey, Naoko Sugita, Shinichi Sobue, Daniel Santillan
Affiliations: Esa, NASA, JAXA, EOX IT Services
The role of Earth Observation information in supporting societal decision making and action has expanded significantly in recent years. EO data is no longer an instrument reserved for scientists and experts, but has penetrated several aspects of society and the economy. Having easy and convenient access to open global EO data empowers citizens and companies to look deeper into aspects related to their immediate environments, and to make informed decisions about their lives and businesses. Yet, in order to achieve real use of EO-based information, we need to provide it in a form consumable by the end user - potentially higher-level information, abstracted from the technicalities of dealing with the EO data itself - and through channels that are commonly accessed, such as mobile and web applications. Open data policies (e.g. the Copernicus Programme) are making it possible to develop sophisticated tools that connect to these vast open data archives, analyse and process the data to transform it into meaningful information, package and represent this information through visualisations that are intuitive, and allow users to configure what and how they want to consume this information. The ESA-NASA-JAXA EO Dashboard is one such solution that builds on informational and technology solutions developed by the three collaborating agencies to bring to the general public an open and intuitive tool for exploring global changes associated with human activity, based on EO data. The EO Dashboard is accessible at https://eodashboard.org and has been developed and expanded over the past 4 years of the collaboration. The project started as an initiative to identify and communicate changes due to the COVID-19 pandemic that were observable with EO data. In 2020, the impact of the information shown on the EO Dashboard through powerful visualisations (such as the air pollution decrease in major urban areas) was global.
It demonstrated the impact of coupling EO data with web technologies, through the use of interoperable standards and protocols, delivered via suitable visualisation and exploration tools. The EO Dashboard has since evolved beyond COVID-19 to cover several other domains including: atmosphere, oceans, cryosphere, biomass, agriculture, economy, and the most recent one - extreme events. The tool has also expanded its visualisation and interactive capabilities, now providing not only an exploration interface but also scientific storytelling based on EO mission data, and a self-service tool for scientists, journalists and citizens alike to create and publish their own findings using EO data and interactive maps, coupled with a variety of multimedia elements, and linked to applications developed on EO cloud platforms. In this presentation we will demonstrate the EO Dashboard and its features and capabilities to support scientific communication and storytelling with EO data, and will showcase some of the most compelling user-contributed as well as official tri-agency stories published on the platform. We will also present plans for future features (such as the integration of AI-based capabilities). Some example features can be explored at: - Explore Data interface: https://www.eodashboard.org/explore?indicator=GRDI1&x=962284.36551&y=5994420.53067&z=5.23679 - Thematic Storytelling - Oceans: https://www.eodashboard.org/oceans - User contributed storytelling - Extreme Events: https://www.eodashboard.org/story?id=hunga-tonga-aerosols
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.49/0.50)

Session: A.08.07 Ocean Health including marine and coastal biodiversity - PART 1

Ocean Health, defined as the condition that allows the Ocean to continuously provide services for humans in a sustainable way while preserving its intrinsic well-being and its biodiversity, is under considerable threat. Decades of pollution, overexploitation of resources and damaging use of the coastal environment have severely degraded the condition of both coastal and offshore marine ecosystems, compromising the Ocean's capacity to provide its services. This degradation is being further exacerbated by Climate Change, whose effects on the Oceans are numerous. The many sensors on board currently operating satellites (altimeters, radiometers, scatterometers, synthetic aperture radars, spectrometers) have high relevance for Ocean Health and biodiversity studies, providing continuous, global and repetitive measurements of many key parameters of the physical (temperature, salinity, sea level, currents, wind, waves) and biogeochemical (Ocean Colour related variables) marine environment, including high-resolution mapping of key marine habitats (coral reefs, kelp forests, seagrass,…). In this context, this session welcomes contributions demonstrating how satellite data can be used to better monitor Ocean Health, including the retrieval of Essential Biodiversity Variables and the estimation of the many different stressors, including marine litter, impacting Ocean Health and marine and coastal biodiversity. Single-sensor capabilities are amplified even further when used in synergy with other space and in-situ measurements, or together with numerical modelling of the physical, biogeochemical and ecological ocean state, so the session encourages multi-sensor and multi-disciplinary studies. The session is also open to contributions demonstrating how EO-derived products can be used to support management actions to restore and preserve Ocean Health and marine and coastal biodiversity.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.49/0.50)

Presentation: An exceptional phytoplankton bloom in the southeast Madagascar Sea driven by African dust deposition

Authors: John Gittings, Dr Giorgio Dall’Olmo, Dr Weiyi Tang, Joan Llort, Dr Fatma Jebri, Dr Eleni Livanou, Dr Francesco Nencioli, Dr Sofia Darmaraki, Mr Iason Theodorou, Dr Robert J. W. Brewin, Prof Meric Srokosz, Prof Nicolas Cassar, Prof Dionysios E. Raitsos
Affiliations: National and Kapodistrian University Of Athens, Sezione di Oceanografia, Istituto Nazionale di Oceanografia e Geofisica Sperimentale – OGS; Borgo Grotta Gigante, Trieste, 34010, Italy, Department of Geosciences, Princeton University; Guyot Hall, Princeton, NJ 08544, United States of America, Barcelona Supercomputing Center; Plaça d'Eusebi Güell, 1-3, Les Corts, 08034 Barcelona, Spain, National Oceanography Centre; Southampton, SO14 3ZH, United Kingdom, Collecte Localisation Satellites; 31520 Ramonville-Saint-Agne, France, Centre for Geography and Environmental Science, Department of Earth and Environmental Science, Faculty of Environment, Science and Economy; University of Exeter, Cornwall, United Kingdom, Division of Earth and Climate Sciences, Nicholas School of the Environment, Duke University; Durham, NC, United States of America
Rising surface temperatures are projected to cause more frequent and intense droughts in the world’s drylands. This can lead to land degradation, mobilization of soil particles, and an increase in dust aerosol emissions from arid and semi-arid regions. Dust aerosols are a key source of bio-essential nutrients, can be transported in the atmosphere over large distances, and ultimately deposited onto the ocean’s surface, alleviating nutrient limitation and increasing oceanic primary productivity. Currently, the linkages between desertification, dust emissions and ocean fertilization remain poorly understood. Here, we show that dust emitted from Southern Africa was transported and deposited into the nutrient-limited surface waters southeast of Madagascar, which stimulated the strongest phytoplankton bloom of the last two decades during a period of the year when blooms are not expected. The conditions required for triggering blooms of this magnitude are anomalous, but current trends in air temperatures, aridity, and dust emissions in Southern Africa suggest that such events could become more probable in the future. Together with the recent findings on ocean fertilization by drought-induced megafires in Australia, our results point towards a potential link between global warming, drought, aerosol emissions, and ocean blooms.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.49/0.50)

Presentation: Retrieving Phytoplankton Functional Groups and Size Classes in Optically Complex Waters Using Sentinel-2 and Sentinel-3 imagery

Authors: Tiit Kutser, Karolin Teeveer, Laura Argus, Birgot Paavel, Kaire Toming, Martin Ligi, Külli Kutser, Tuuli Soomets, Ele Vahtmäe
Affiliations: Estonian Marine Institute, University of Tartu
Knowing the phytoplankton functional groups (PFTs) and/or size classes (PSCs) is important for multiple purposes, e.g. for understanding the biological carbon pump, studying the ecology of waterbodies, and predicting changes that may occur due to climate change and other stressors. The models used to study the above-mentioned problems require input data with large spatial coverage (often global) and high temporal frequency. The latter is especially critical in inland and near-coastal waters, where processes happen at hourly scales and the spatial heterogeneity of PFTs and PSCs may be high. It is obvious that in situ sampling from ships/boats cannot provide data with sufficient spatial and temporal coverage. Remote sensing is the only tool that could provide sufficient spatial and temporal coverage of PFT and PSC data, and it has been used in the open-ocean context, where the main optically active water constituent is phytoplankton and the other constituents (coloured dissolved organic matter - CDOM and suspended particulate matter - SPM) are phytoplankton degradation products. In inland and coastal waters, most of the CDOM and SPM originates from nearby land or is resuspended from the bottom. Thus, the concentrations of the optically active constituents vary independently of each other and over huge ranges, making interpretation of remote sensing imagery (including retrieval of PFTs and PSCs) extremely difficult. PFT retrieval from remote sensing reflectance is usually based on detecting pigments that are specific to a certain phytoplankton group, using the absorption features each pigment causes in water reflectance spectra. PSC retrieval is based on the fact that particles of different sizes scatter light differently and change the absolute value of water reflectance. This allows us to make assumptions about the size of particles in the water under investigation.
There is also an alternative empirical method for retrieving PFTs from phytoplankton biomass (expressed as the concentration of chlorophyll-a, Chl-a) using relationships between Chl-a and the relative biomass of some PFTs obtained from laboratory measurements. This method is used by the Copernicus Marine Service (CMEMS), where the relative biomasses of some PFTs are calculated from the Chl-a product retrieved from Sentinel-3 and Sentinel-2 data. We have studied the species composition of phytoplankton in more than 200 sampling stations in the Baltic Sea. The species composition was determined based on microscopy data. We measured water reflectance (TriOS RAMSES) and inherent optical water properties (WetLabs ac-s, eco-bb3, eco-vsf3, CTD) in each of the sampling stations and took water samples for determining Chl-a and other chlorophylls, CDOM, SPM and its organic and inorganic components SPIM and SPOM, as well as some carbon fraction (DOC, POC, TOC, DIC, PIC) data in recent years. Analysis of the published PFT-specific pigments shows that most of them absorb light at the same wavelengths. For example, CMEMS divides phytoplankton into seven PFTs. Four characteristic pigments (zeaxanthin, alloxanthin, β,ε-carotene and β,β-carotene) have nearly identical specific absorption spectra, while the fucoxanthin absorption spectrum is very similar to those four. It is practically impossible to separate these pigments from each other even using hyperspectral sensors and pure pigments. Natural assemblages are mixtures of many phytoplankton species, blurring the effect of these pigments on water reflectance. In the Baltic Sea (and in many lakes) the water-leaving signal is almost missing in the blue part of the spectrum, where the above-mentioned pigments absorb, due to high absorption by CDOM. Moreover, at the spectral resolution of the Sentinel-3 OLCI or Sentinel-2 MSI sensors, the optical effect of these pigments becomes even less distinguishable.
Thus, in optically complex waters like the Baltic Sea it is impossible to separate PFTs from each other using the group-specific pigment absorption features. We showed more than two decades ago that blooms of cyanobacteria are separable from blooms of other phytoplankton thanks to the phycocyanin absorption feature at 620 nm. On the other hand, we also showed that the biomass has to be very high (Chl-a concentration of at least 8-9 mg/m3), even in the clearest parts of the Baltic Sea, in order to make the phycocyanin absorption feature detectable by hyperspectral satellites. Our current results show that the phycocyanin absorption feature is often gone in the later stages of the bloom, preventing the separation of cyanobacterial blooms from blooms of other phytoplankton. Moreover, there are other phytoplankton groups (e.g. diatoms) that contain pigments like chlorophyll-c1 and c2 that also absorb light at 620 nm, meaning that the 620 nm absorption feature in water reflectance is not specific to cyanobacteria. We divided the PSCs into three groups – picoplankton, nanoplankton and microplankton – based on the microscopy results. At present we have not been able to find relationships between backscattering measured in situ and the dominant PSC. This is not surprising, as in most of our samples the inorganic component of SPM is larger or much larger than the organic component. Moreover, in coastal waters not all of the SPOM is phytoplankton; there are many organic particles originating from land or from the sea bottom. Consequently, the backscattering of light in water is largely unrelated to phytoplankton size and abundance, as the phytoplankton contribution is often negligible compared to mineral and other organic particles. We are in the process of validating the CMEMS PFT product for the Baltic Sea, but it is unlikely that it can provide reasonable results.
First of all, the laboratory relationships between the relative biomass of different PFTs and Chl-a are not very strong (r2 < 0.3), and the correlation between the CMEMS Chl-a product and in situ data is 0.24 according to their own validation document. Such weak relationships cannot provide a reliable product. As a result, we may say that obtaining reliable PFT and PSC products for optically complex waters like the Baltic Sea is rather unlikely, although we continue to explore the capabilities of different machine learning methods that may allow us to find more sophisticated relationships between the water reflectance obtained by Sentinel-3 OLCI and Sentinel-2 MSI and the PFTs and PSCs.
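The 620 nm phycocyanin feature discussed above is commonly quantified with a spectral line-height (baseline-subtraction) index. The sketch below is illustrative only; the shoulder wavelengths (600 and 640 nm) and the reflectance values are assumptions for demonstration, not the authors' method or data.

```python
import numpy as np

def line_height_620(wl, refl, left=600.0, right=640.0, centre=620.0):
    """Depth of the 620 nm absorption feature relative to a linear baseline
    drawn between two shoulder wavelengths. A positive value indicates a dip
    in reflectance at 620 nm (a candidate phycocyanin absorption feature)."""
    r = {w: np.interp(w, wl, refl) for w in (left, right, centre)}
    frac = (centre - left) / (right - left)
    baseline = r[left] + frac * (r[right] - r[left])
    return baseline - r[centre]

# Synthetic reflectance spectrum (arbitrary units) with a dip at 620 nm.
wl = np.array([600.0, 610.0, 620.0, 630.0, 640.0])
refl = np.array([0.020, 0.018, 0.014, 0.019, 0.021])
print(line_height_620(wl, refl))  # positive -> absorption feature present
```

As the abstract notes, such an index is ambiguous in practice: chlorophyll-c pigments also absorb near 620 nm, and the feature can vanish in late bloom stages.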
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.49/0.50)

Presentation: Trends of phytoplankton community structure over the ocean colour satellite era, an inter-comparison perspective

Authors: Stéphane Doléac, Luther Ollier, Laurent Bopp, Roy El Hourany, Marina Lévy
Affiliations: Sorbonne Université, LOCEAN-IPSL, Ecole des Ponts, Université Littoral Côte d'Opale, LOG, Ecole Normale Supérieure, LMD-IPSL
Phytoplankton community structure plays a critical role in the natural carbon cycle and in the sustainability of marine ecosystems, making it central to the overall ocean health. This structure varies across time and space, influenced by environmental factors such as temperature and nutrient availability. While climate change is already affecting these factors, its long-term impact on phytoplankton community structure remains poorly understood. Since 1997, numerous algorithms have been developed to estimate from ocean colour remote sensing both total phytoplankton abundance and its structure. These tools have enabled 25 years of continuous, global-scale observations, providing invaluable insights into the impacts of climate change on phytoplankton. However, leveraging these datasets for long-term trend analysis remains challenging. Temporal inconsistencies and discontinuities in satellite time series, caused by sensor transitions or decommissioning, introduce biases that hinder the accurate detection of trends. In this study, we evaluate four distinct algorithms for retrieving, not only total chlorophyll, representative of the entire phytoplankton community, but also its partitioning into key phytoplankton groups. Temporal consistency is ensured by excluding regions with inconsistent observations since 1998 or by filling data gaps. Trends are then analyzed in an inter-comparison framework to identify robust patterns across products. Our results show that the assumptions underlying each algorithm significantly influence the detected trends, leading to substantial inter-product differences. Notably, we find that the assumption of a strong dependency of community structure on total phytoplankton abundance, at the basis of some algorithms, does not hold in others. Overall, little inter-product agreement is found, which questions our current ability to monitor these changes. 
These findings highlight the need for a better understanding of the drivers controlling the long-term evolution of phytoplankton community structure, in order to develop more robust algorithms. Developing such products would enhance our ability to reliably monitor and predict the impacts of climate change on phytoplankton and ocean health.
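A minimal sketch of the per-pixel trend estimate that such inter-comparisons build on is an ordinary least-squares slope over a chlorophyll time series. The example below uses synthetic data with an imposed trend; real analyses must additionally handle seasonality, autocorrelation, data gaps, and sensor-transition discontinuities.

```python
import numpy as np

def linear_trend(t_years, values):
    """OLS slope (units per year) of a time series; values may contain NaNs."""
    ok = ~np.isnan(values)
    slope, _intercept = np.polyfit(t_years[ok], values[ok], 1)
    return slope

# Synthetic 25-year monthly chlorophyll series: small trend + seasonal cycle.
t = np.arange(300) / 12.0                             # years since start
chl = 0.2 + 0.002 * t + 0.05 * np.sin(2 * np.pi * t)  # mg m-3, made up
print(linear_trend(t, chl))  # close to the imposed 0.002 per year
```

Running the same estimator on each product in an inter-comparison, pixel by pixel, is what reveals whether the detected trends agree across algorithms.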
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.49/0.50)

Presentation: Coastal Phytoplankton Super Blooms At High Resolution: What Can We Learn From Space?

Authors: Anastasia Tarasenko, Pierre Gernez, Victor Pochic, Tristan Harmel
Affiliations: CNES, Nantes University, Magellium
Recent advances in Earth Observation have improved our ability to analyse phytoplankton blooms in remarkable detail. Phytoplankton, a vital component of marine ecosystems, can also pose serious threats to coastal zones when it develops as a harmful algal bloom (HAB), creating hypoxia zones or releasing toxins. HABs can become very concentrated locally, to such an extent that the optical properties of the water are determined almost solely by one phytoplankton species – a phenomenon we will refer to as a “super bloom”. Although conspicuous, these super blooms may cover a relatively small area and last just a few days. Combined with the hydrodynamic complexity of coastal waters, this makes their detection extremely challenging for “classic” medium-resolution satellite missions, as well as for in situ monitoring. Building upon a recent “optical bloom type” approach (Gernez et al., 2023), we used Sentinel-2’s high spatial resolution and advanced spectral capabilities to construct time series of super blooms, characterize their optical properties, and determine their environmental drivers. The French Atlantic coastal zone was used as the main study area: it is a region influenced by eutrophication where super blooms frequently occur, and for which phytoplankton taxonomic composition has been documented over the past decades thanks to in situ phytoplankton observation networks. Our research highlights the possibility of automatically detecting super blooms and identifying their dominant optical type using their reflectance signature. By adapting processing techniques specifically for turbid waters, we enhance the robustness of bloom detection and spectral characterization in complex environments influenced by river plumes. The advantages of hyperspectral over multispectral data are also analysed, based on the precursor satellite missions PRISMA and EnMAP, in preparation for further exploitation of operational hyperspectral missions (ESA CHIME, NASA SBG).
We discuss the possibility of using additional parameters (surface temperature, currents, etc.) from other satellite missions to complement Sentinel-2 observations and better characterize the environmental forcings driving the spatio-temporal variability of super blooms. Of particular interest is the inclusion of the future high-resolution thermal TRISHNA mission, to be launched in 2025, which will provide quasi-daily acquisitions over the study area. This multi-sensor approach is expected to provide a deeper understanding of the relationships between phytoplankton blooms and environmental factors such as river inputs, water temperature, and ocean surface dynamics. This capability is crucial for implementing timely monitoring and response strategies aimed at mitigating HAB impacts. References: Gernez, P., Zoffoli, M.L., Lacour, T., Fariñas, T.H., Navarro, G., Caballero, I. and Harmel, T., 2023. The many shades of red tides: Sentinel-2 optical types of highly-concentrated harmful algal blooms. Remote Sensing of Environment, 287, p.113486.
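As one hedged illustration of reflectance-based bloom screening (not the optical-bloom-type classification of Gernez et al., 2023), a red-edge index such as the Normalized Difference Chlorophyll Index (NDCI) contrasts reflectance near 705 nm and 665 nm, where dense, chlorophyll-rich blooms raise the red-edge band relative to the red band. The reflectance values below are synthetic.

```python
def ndci(r_665, r_705):
    """Normalized Difference Chlorophyll Index: (R705 - R665) / (R705 + R665).
    High positive values are typical of dense, chlorophyll-rich waters."""
    return (r_705 - r_665) / (r_705 + r_665)

# Synthetic remote-sensing reflectance values (sr^-1) for two pixels.
print(ndci(0.010, 0.006))  # clear-water pixel: negative index
print(ndci(0.008, 0.020))  # dense-bloom pixel: strongly positive index
```

On Sentinel-2 MSI these would correspond roughly to bands B4 (665 nm) and B5 (705 nm); thresholds for flagging a bloom would need regional tuning, especially in turbid, plume-influenced waters.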
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.49/0.50)

Presentation: Life after launch: A snapshot of the first year of NASA’s PACE mission and its novel role in global and regional monitoring of ocean health

Authors: Jeremy Werdell, Brian Cairns, Antonio Mannino
Affiliations: NASA Goddard Space Flight Center, NASA Goddard Institute for Space Studies
The NASA Plankton, Aerosol, Cloud, ocean Ecosystem (PACE) mission launched from Kennedy Space Center in the early morning of February 8, 2024. Just 63 days later, data from NASA’s newest Earth-observing satellite became available to the public. These data will extend and improve upon NASA’s 20+ years of global satellite observation of our living oceans, atmospheric aerosols, and clouds, and initiate an advanced set of climate-relevant data records. Ultimately, PACE is the first mission to provide daily, global measurements that will enable prediction of the “boom-bust” cycle of fisheries, the appearance of harmful algae, and other factors that affect commercial and recreational industries. PACE also observes clouds and tiny airborne particles known as aerosols that influence air quality and absorb and reflect sunlight, thus warming and cooling the atmosphere. PACE’s primary instrument is a global spectrometer that spans the ultraviolet to near-infrared region in 2.5 nm steps and also includes seven discrete shortwave infrared bands from 940 to 2260 nm. This leap in technology will enable improved understanding of aquatic ecosystems and biogeochemistry on local, regional, and global scales, as well as provide new information on phytoplankton community composition and improved detection of algal blooms. The PACE payload is complemented by two small multi-angle polarimeters with spectral ranges spanning the visible to near-infrared region, both of which will significantly improve aerosol and hydrosol characterizations and provide opportunities for novel ocean color atmospheric correction.
In the months since launch and initial data release, the PACE Project released an advanced set of data products related to ocean health and phytoplankton community composition, conducted field campaigns to collect high volumes of related in situ information for both performance assessment and algorithm development activities, worked closely with the applied science and Earth Action communities to increase awareness and accessibility of its data, and explored synergies with other missions to broadly enhance the use of EO data for both research and applications. Here, we present a snapshot of these activities and their impacts and outcomes, encompassing the first year of the PACE mission.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.94/0.95)

Session: B.03.06 Climate, Environment, and Human Health - PART 1

It is well-known that many communicable and non-communicable diseases have a seasonal component. For example, flu and the common cold tend to increase in autumn and winter, whilst vector-borne diseases like Dengue and West Nile Virus tend to peak in late summer when the vectors are at their most abundant. Under monsoon regimes, many diseases peak during the rainy season. Hay fever, spring-time allergies and other respiratory disorders also have a seasonality related to the abundance of pollens and other allergens in the air. Environmental conditions in water, air and land play a role in regulating the variability in the presence and abundance of pathogenic organisms or material in the environment, as well as the agents of disease transmission like mosquitoes or birds. For example, air temperature and relative humidity are linked to flu outbreaks. Water quality in coastal and inland water bodies impacts outbreaks of many water-borne diseases, such as cholera and other diarrheal diseases, associated with pathogenic bacteria that occur in water. This seasonality has inter-annual variabilities superimposed on it that are difficult to predict. Furthermore, in the event of natural disasters such as floods or droughts, there are often dramatic increases in environmentally-linked diseases, related to the breakdown of infrastructure and sanitation conditions.

Climate change has exacerbated issues related to human health, with shifting patterns in environmental conditions, changes in the frequency and magnitude of extreme events, such as marine heat waves and flooding, and impacts on water quality. Such changes have also led to geographic shifts of vector-borne diseases, as vectors move into areas that become more suitable for them as they warm, or retreat from those that become too hot in the summer. The length of the seasons during which diseases may occur can also change as winters become shorter. There are growing reports of the incidence of tropical diseases at higher latitudes as environmental conditions become favourable for the survival and growth of pathogenic organisms.

Climate science has long recognised the need for monitoring Essential Climate Variables (ECVs) in a consistent and sustained manner at the global scale and with high spatial and temporal resolution. Earth observation via satellites has an important role to play in creating long-term time series of satellite-based ECVs over land, ocean, atmosphere and the cryosphere, as demonstrated, for example, through the Climate Change Initiative of the European Space Agency. However, the applications of satellite data for investigating shifting patterns in environmentally-related diseases remain under-exploited. This session is open to contributions on all aspects of investigation into the links between climate and human health, including but not limited to, trends in changing patterns of disease outbreaks associated with climate change; use of artificial intelligence and big data to understand disease outbreaks and spreading; integration of satellite data with epidemiological data to understand disease patterns and outbreaks; and models for predicting and mapping health risks.

This session will also address critical research gaps in the use of Earth Observation (EO) data to study health impacts, recognizing the importance of integrating diverse data sources, ensuring equitable representation of various populations, expanding geographic scope, improving air pollution monitoring, and understanding gaps in healthcare delivery. By addressing these gaps, we aim to enhance the utility of EO data in promoting health equity and improving health outcomes globally.

The United Nations (UN) defines Climate Change as the long-term shift in average temperatures and weather patterns caused by natural and anthropogenic processes. Since the 1800s, human emissions and activities have been the main causes of climate change, mainly due to the release of carbon dioxide and other greenhouse gases into the atmosphere. The United Nations Framework Convention on Climate Change (UNFCCC) is leading international efforts to combat climate change and limit global warming to well below 2 degrees Celsius above pre-industrial levels (1850–1900), as set out in the Paris Agreement. To achieve this objective and to make decisions on climate change mitigation and adaptation, the UNFCCC requires systematic observations of the climate system.

The Intergovernmental Panel on Climate Change (IPCC) was established by the United Nations Environment Programme (UNEP) and the World Meteorological Organization (WMO) in 1988 to provide an objective source of scientific information about climate change. The Synthesis Report, the final document of the sixth Assessment Report (AR6) by the IPCC, released in early 2023, stated that human activities have unequivocally caused global warming, with global surface temperature reaching 1.1°C above pre-industrial levels in 2011–2020. Additionally, AR6 described Earth Observation (EO) satellite measurement techniques as relevant Earth system observation sources for climate assessments, since they now provide long time series of climate records. Monitoring climate from space is a powerful role for EO satellites, since they collect global, time-series information on important climate components. Essential Climate Variables (ECVs) are key parameters that describe the state of the Earth’s climate. The measurement of ECVs provides empirical evidence of the evolution of the climate; therefore, they can be used to guide mitigation and adaptation measures, to assess risks and to enable attribution of climate events to underlying causes.

An example of an immediate and direct impact of climate change is human exposure to high outdoor temperatures, which is associated with morbidity and an increased risk of premature death. The World Health Organization (WHO) reports that between 2030 and 2050, climate change is expected to cause approximately 250,000 additional deaths per year from malnutrition, malaria, diarrhoea and heat stress alone. WHO data also show that almost all of the global population (99%) breathe air that exceeds WHO guideline limits. Air quality is closely linked to the Earth’s climate and ecosystems globally; therefore, if no adaptation occurs, climate change and air pollution combined will exacerbate the health burden at an even greater pace in the coming decades.
Therefore, this LPS25 session will include presentations that demonstrate how EO satellite insights can support current climate actions and guide the design of climate adaptation and mitigation policies to protect and ensure the health of people, animals, and ecosystems on Earth (e.g., WHO’s One Health approach).
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.94/0.95)

Presentation: Supporting Urban Heat Adaptation with Earth Observation


Authors: Daro Krummrich, Adrian Fessel, Malte Rethwisch
Affiliations: OHB Digital Connect
Climate change has ubiquitous effects on the environment and on human life. While the increased frequency of extreme weather events and droughts has immediate and drastic consequences, the direct effect of rising ambient temperatures on humans is more subtle and affects demographic groups unequally. The direct influence of rising temperatures is most significant in cities and in environments that are heavily shaped by humans, partly because of a lack of awareness of climate change and partly because planning and redesign processes did not consider those changes. A typical phenomenon in urban environments is the urban heat island, a microclimate affecting the surface and the atmosphere above the urban space. Urban heat islands are indicated by average temperatures and thermal behavior that significantly exceed those of the surrounding rural areas, and can be attributed in part to the ubiquitous presence of artificial surface types that suppress natural soil function, to the loss of the regulatory functions of water bodies and vegetation, and to an altered radiation budget. Further, the atmospheric modifications brought about by urban heat islands affect air quality and may even influence local weather patterns, such as rainfall. Mitigation of urban heat islands can in principle be achieved by altering urban planning to integrate more green spaces and water surfaces and to avoid certain man-made surface types. However, despite the intensity with which heat islands affect human life, redesign of existing urban environments is rarely a practical solution. Nevertheless, the need to act has been recognized by administrators, leading to novel regulations that foresee, for instance, the implementation of heat action plans containing immediate measures during heat waves and guidelines for more sustainable future planning.
In this presentation, we highlight the status and results of two complementary initiatives devised to support urban heat adaptation. First, we present the “Urban Heat Trend Monitor”, a GTIF capability striving to ease the integration of satellite Earth observation data into adaptation strategies. Recognizing that spaceborne Earth observation cannot deliver thermal infrared data at spatial resolutions appropriate for urban spaces, we introduce the thermal infrared sensor RAVEN as the second focus. RAVEN is a custom SWaP-C-sensitive multiband sensor for airborne Land Surface Temperature retrieval in urban environments, which can help fill the gaps where spaceborne sensors struggle. In line with digitalization efforts across virtually all sectors, the efficiency and efficacy of adaptation measures can be supported by the provision of accessible and actionable information from spaceborne Earth observation, also in conjunction with information from local sources such as demographic data or airborne acquisitions. This is one objective of ESA’s Green Transition Information Factories (GTIF), which drives the cloud integration, standardization, and commercialization of a diverse set of capabilities targeted at green transition venues, including the domain of sustainable cities. Focusing on efforts within the ongoing “GTIF Kickstarters: Baltic” project, we present the development status of the “Urban Heat Trend Monitor”, a capability which exploits data from ESA’s Copernicus Sentinel-2 and Sentinel-3 satellites to provide users with easy-to-interpret maps of urban climate information that can be integrated into administrative processes and facilitate sustainable urban planning. Super-resolution imaging is used to enhance the resolution of the satellite imagery, allowing the analysis of heat islands and temperature fluctuations at the level of individual neighborhoods.
Complementing streamlined access to raster data, the focus of the heat trend monitor is to enable users to extract, analyze and compare time series data for purposes such as the comparison of regions, the identification of problematic trends, or the analysis of land cover changes. As an alternative to trend extraction in user-defined regions of interest or administrative boundaries, we propose a spatial partitioning method based on a superpixel approach to identify meaningful regions with thermally homogeneous behavior. We approach time series analysis and trend identification using Generalized Additive Models, a data-driven approach balancing predictive power and explainability. GTIF capabilities are developed in close cooperation with stakeholders to meet their needs (in the case of the Urban Heat Trend Monitor, stakeholders from the Baltic region) and build on a technology stack aimed at interoperability and reusability. To this end, we adhere to standards including openEO and STAC and to cloud-optimized storage formats like Zarr. Our second focus, RAVEN (“Remote Airborne Temperature and Emissivity Sensor”), was devised as an efficient solution to enable Land Surface Temperature retrieval at a scale appropriate for urban environments (0.5–4 m resolution at typical operating altitudes). RAVEN employs a multi-band sensing and retrieval scheme typically reserved for spaceborne sensors and airborne demonstrator instruments, yet implemented with relatively low-cost COTS hardware, enabling future use with unmanned airborne platforms. We report on the conceptualization and implementation of the sensor, including geometric and radiometric calibration efforts, as well as on results from a 2024 airborne campaign conducted in Valencia within the Horizon 2020 project CityCLIM, and elaborate on their relevance for urban adaptation.
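The GAM-based trend extraction mentioned above can be illustrated with a much simpler stand-in: an ordinary least-squares fit of a polynomial trend plus annual harmonics. This is not the Urban Heat Trend Monitor's implementation; the synthetic series, function name and parameters below are illustrative only.

```python
import numpy as np

def fit_trend_seasonal(t_months, series, trend_degree=2, n_harmonics=2):
    """Least-squares fit of a polynomial trend plus annual harmonics to a
    monthly series: a deliberately simplified stand-in for a GAM."""
    t = np.asarray(t_months, dtype=float)
    cols = [t ** d for d in range(trend_degree + 1)]    # smooth trend terms
    for k in range(1, n_harmonics + 1):                 # seasonal terms
        cols.append(np.sin(2 * np.pi * k * t / 12.0))
        cols.append(np.cos(2 * np.pi * k * t / 12.0))
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, np.asarray(series, float), rcond=None)
    return X @ beta

# Synthetic 10-year monthly land-surface-temperature series:
# warming trend + seasonal cycle + noise
rng = np.random.default_rng(0)
t = np.arange(120)
y = 15 + 0.02 * t + 5 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, t.size)
fitted = fit_trend_seasonal(t, y)
print("fit RMSE:", round(float(np.sqrt(np.mean((fitted - y) ** 2))), 2))
```

A real GAM replaces the fixed polynomial with penalized smooth splines, but the decomposition into a slow trend plus a seasonal component is the same idea.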
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.94/0.95)

Presentation: BRIDGING THE GAP IN AIR POLLUTION HEALTH RISK ASSESSMENT: INTEGRATING EARTH OBSERVATION, MOBILITY DATA AND SPATIAL ANALYSIS FOR AIR POLLUTION HEALTH RISK ANALYSES

Authors: Lorenza Gilardi, Thilo Erbertseder, Dr. Frank Baier, Prof. Dr. Heiko Paeth, Prof. Dr. Tobias Ullmann, Prof. Dr. Hannes Taubenböck
Affiliations: German Aerospace Center, University of Würzburg
When performing a health risk assessment related to air pollution by means of Earth Observation (EO) data, two of the main challenges are: (1) accurately assessing population exposure and (2) the inconsistent spatial and temporal coverage of air pollution data, especially in remote areas. Health risk is determined by the interaction of three components: hazard, exposure, and vulnerability. When investigating the health risk from outdoor air pollution, exposure at the individual level is often challenging to quantify. Therefore, epidemiological studies are typically conducted using static residential data, neglecting the dynamics of human mobility and their impact on exposure. In recent years, epidemiological studies exploiting an ecological approach have become increasingly popular due to the growing availability of public geospatial data. This approach brings several advantages, such as easy scalability and the possibility of addressing cumulative exposures to multiple stressors. To support this approach, a case study was conducted to evaluate the long-term population exposure to PM2.5, NO2, and O3 in two European regions, Lombardy (Italy) and Germany, covering the period from 2013 to 2022 using the Copernicus Atmosphere Monitoring Service (CAMS) European Air Quality Reanalysis data. These datasets provide consistent and reliable estimates of pollutant concentrations, enabling a detailed evaluation of exposure for both a static (residential) and a dynamic (commuting habits included) population. The analysis integrated commuting data from national statistical institutes as well as the remote-sensing-derived global settlement mask, the World Settlement Footprint. The results highlight significant disparities between exposure estimates for a static and a dynamic population, emphasizing the importance of accounting for mobility in health risk assessments.
Furthermore, the study demonstrates widespread exceedances of the World Health Organization’s updated air quality guidelines, particularly for PM2.5, and underscores the spatial variability in exposure levels. To further investigate these variations, the study proposes the use of spatial analysis techniques, particularly Local Indicators of Spatial Association (LISA), to study the temporal evolution of air pollution hotspots and cold spots. By applying this method, the research aims to identify spatial clusters of pollutants such as particulate matter (PM2.5), nitrogen dioxide (NO2), and ozone (O3), as well as to produce multi-hazard maps of areas where hotspots of multiple pollutants converge. These analyses provide critical insights into regions with heightened health risks and inform strategies for mitigating exposure. As an exploratory approach, the LISA spatial analysis is extended to additional areas worldwide, exploring NO2 column data from satellite missions such as the Ozone Monitoring Instrument (OMI) and the Sentinel-5P TROPOspheric Monitoring Instrument (TROPOMI). These provide consistent, worldwide air quality data, offering opportunities to overcome the limitations of traditional ground-based monitoring networks.
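The most common LISA statistic is the local Moran's I. The sketch below is a minimal grid-based version (rook-adjacency weights, edge padding as a crude boundary treatment, synthetic pollutant field), not the study's actual implementation:

```python
import numpy as np

def local_morans_i(grid):
    """Local Moran's I on a 2-D field with rook (4-neighbour),
    row-standardised weights. Positive values flag spatial clusters
    (hot or cold spots); negative values flag spatial outliers."""
    z = grid - grid.mean()
    # Sum of the four neighbours; edge padding is a simple boundary treatment
    p = np.pad(z, 1, mode="edge")
    neigh_sum = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
    lag = neigh_sum / 4.0               # row-standardised spatial lag
    return z * lag / z.var()

# Illustrative NO2 field with a hot spot in one corner
field = np.ones((6, 6))
field[:2, :2] = 5.0                     # elevated concentrations
I = local_morans_i(field)
print(I[0, 0] > 0)                      # prints True: positive local I inside the hot spot
```

Production analyses would typically use dedicated spatial-statistics libraries with proper weight matrices and permutation-based significance tests.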
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.94/0.95)

Presentation: Humanitarian costs of climate change: mapping the impact of climate-exacerbated monsoon floods on disruption to education and health using earth observation and data fusion

Authors: Dr Usman Nazir, Talha Quddoos, Dr Momin Uppal, Dr Sara Khalid, Dr Rochelle Schneider
Affiliations: Lahore University of Management Sciences, Centre for Statistics in Medicine, University of Oxford, ESA, London School of Hygiene & Tropical Medicine
Background: In the immediate and short-term aftermath of climate-induced natural disasters, relief and rehabilitation efforts are a key priority. Rapid, reliable, and comprehensive information is required to access and help affected communities. We used earth observation and data fusion to measure the impact of the 2022 Pakistan floods on road access, health facilities and schools, and to independently validate previous estimates of impact on population displacement.

Methods: Satellite-detected flooding across Pakistan (Punjab, Sindh, Baluchistan, and KPK provinces) was independently estimated by data fusion of imagery acquired from Sentinel-1, Sentinel-2 and Landsat-9 satellites at spatial resolutions of 10, 25, and 40 meters, and validated against official UNOSAT estimates. In the absence of comprehensive official records, we geo-located schools and health facilities from Google Maps using POI template matching techniques combined with available Punjab Health Initiative Management Company (PHIMC) data. Geo-located population and road network data were provided by the global WorldPop and OSM datasets. The number (%) of flood-affected road networks, schools, health facilities, and displaced local population in the two worst-affected provinces (Sindh and Punjab) was estimated by mapping the geo-located data onto the satellite-derived flooding data.

Results: 230 (28.5%) and 122 (27%) basic health facilities were flooded and made inaccessible due to flooding of 1427 km and 596 km of road networks in the two worst-affected provinces of Sindh and Punjab, respectively. 839 (23.5%) schools were flooded in Sindh. Our model independently validated UNOSAT estimates, confirming that 10% (20 million) of the total population of Pakistan, 28% (13 million) of the population of Sindh, and 4% of the population of Punjab were directly impacted by flooding between August and September 2022.

Interpretation: Earth observation can provide timely information for critical disaster management and rescue efforts. Disruption to schools, basic health facilities and road access, as shown in this work, may be measured in near real time with a view to aiding immediate relief and longer-term resilience efforts, particularly in resource-limited settings.

Funding: NIHR Oxford Biomedical Research Centre Programme. Additionally, we would like to acknowledge funding support provided by the Higher Education Commission of Pakistan through grant GCF-521.

Contributors: The study was conceived and designed by SK, UN, and MU. Data curation and analysis were performed by UN and MTQ, and interpreted by all co-authors. The abstract was written by UN and SK and revised by all co-authors. SK is responsible for the overall study.

Declaration of Interests: SK is supported by the Innovative Medicines Initiative, Bill & Melinda Gates Foundation, Health Data Research UK, British Heart Foundation, and Medical Research Council and Natural Environment Research Council outside of this work.
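The overlay step in the methods (mapping geo-located facilities onto satellite-derived flooding) is, at its core, indexing point coordinates into a raster mask. The sketch below is a hypothetical minimal version, not the study's pipeline; the function name and toy raster geometry are invented for illustration.

```python
import numpy as np

def flooded_fraction(points_xy, flood_mask, transform):
    """Fraction of point facilities falling in flooded pixels.

    points_xy  : (N, 2) array of (x, y) map coordinates
    flood_mask : 2-D boolean array (True = flooded)
    transform  : (x0, y0, pixel_size) of the raster's upper-left corner
    Points outside the raster count as not flooded.
    """
    x0, y0, px = transform
    cols = ((points_xy[:, 0] - x0) / px).astype(int)
    rows = ((y0 - points_xy[:, 1]) / px).astype(int)
    inside = (rows >= 0) & (rows < flood_mask.shape[0]) & \
             (cols >= 0) & (cols < flood_mask.shape[1])
    hits = flood_mask[rows[inside], cols[inside]]
    return hits.sum() / len(points_xy)

# Toy example: 4 facilities, 10 m pixels, flooding in the top-left block
mask = np.zeros((100, 100), dtype=bool)
mask[:50, :50] = True
facilities = np.array([[100.0, 900.0],   # inside the flooded block
                       [100.0, 100.0],   # dry
                       [900.0, 900.0],   # dry
                       [250.0, 750.0]])  # inside the flooded block
print(flooded_fraction(facilities, mask, (0.0, 1000.0, 10.0)))  # prints 0.5
```

In practice a geospatial library would handle the affine transform and coordinate reference systems, but the counting logic is the same.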
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.94/0.95)

Presentation: AeDES2.0 - An enhanced climate-and-health service for monitoring and forecasting environmental suitability of Aedes borne disease transmission

Authors: Javier Corvillo, Dr. Ángel Muñoz, Dr. Verónica Torralba, Dr. Alba Llabrés-Brustenga, Dr Ana Rivière Cinnamond
Affiliations: Barcelona Supercomputing Center, Pan American Health Organization
Aedes-borne diseases, such as dengue, Zika and chikungunya, pose a grave threat to millions of people worldwide each year. Given potential compound effects with other important diseases, such as COVID-19, it has become imperative for health authorities to maintain detailed surveillance of key environmental variables that can trigger epidemic episodes. While disease transmission is generally conditioned by multiple socioeconomic factors, the environmental suitability for vectors and viruses to proliferate is a necessary, although not sufficient, condition that needs to be closely monitored and forecasted. As such, a comprehensive service that allows stakeholders to analyze and visualize environmental suitability in affected hotspots is crucial for communities to better prepare for present and future outbreaks. The newest version of the Aedes-borne Diseases Environmental Suitability (AeDES2) climate-and-health service is a next-generation, fully operational monitoring system that reproduces and improves on the previous version (Muñoz et al., 2020), broadening both the temporal and spatial scope while simultaneously enhancing both observational and forecasting quality. With AeDES2, users can consult the historical evolution of environmental suitability values at any grid point of interest, as well as the expected future evolution up to three seasons in advance. Aside from the environmental suitability values, health authorities can additionally analyze the estimated incidence, or the percentage of the population at risk, a key indicator for governing bodies to trigger the implementation of control measures to reduce the spread of the disease in an affected population. AeDES2 incorporates four state-of-the-art environmental suitability models, considering both epidemiological factors for transmission probability and climate variables such as temperature.
On the monitoring side, AeDES2 provides a continuously updated monthly historical sequence of environmental suitability values, generating an ensemble from multiple observational references and hence providing uncertainty estimates in the monitoring system, an improvement over the previous version. On the prediction side, still under development, AeDES2 builds on its predecessor’s pattern-based multi-model calibration approach, incorporating new machine learning calibration methods such as neural networks, with the aim of reliably reproducing key non-linear patterns that are used as predictors in the cross-validated forecast system.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.94/0.95)

Presentation: Inland Cholera Seasonality, North India: Role of Climate and Environmental Factors

Authors: Dr Neelam Taneja, Dr Arti Mishra, Dr Nisha Singh
Affiliations: PGIMER
One-third of India’s population lives under the threat of cholera. The region around Chandigarh is a hotspot for cholera and has experienced a resurgence since 2002. In the freshwater environment of north India, cholera appears seasonally in the form of clusters as well as sporadically, accounting for a significant piece of the puzzle of cholera epidemiology. Cholera cases occur during the hot and humid months, peaking with the monsoons. This region does not exhibit the bi-annual cycle (pre- and post-monsoon) of coastal cholera, owing to distinct climatic factors, and experiences a single peak only during the monsoon months. The ecology of Vibrio cholerae in freshwater aquatic environs is poorly understood. We conducted an environmental and ecological surveillance in our region to understand the seasonality of cholera. The influence of environmental parameters, including abiotic factors (temperature, salinity, pH, rainfall) and biotic factors (phytoplankton and zooplankton), on the prevalence and isolation of V. cholerae was measured. The northern part of India has a dense network of major rivers and several freshwater lakes. This region receives heavy rainfall during the monsoon months (July–October) and exhibits high temperatures (>30 °C) during the summer and rainy season (April–October). The winter months (November–March) exhibit temperatures below 20 °C and little rainfall. Clinical cholera cases coincided with elevated rainfall, chlorophyll concentration, and air temperature, whereas isolation of V. cholerae non-O1 non-O139 from water was dependent on temperature (p < 0.05) but independent of rainfall and pH (p > 0.05). However, isolation from plankton samples correlated with increased temperature and pH (p < 0.05). A lag period of almost a month was observed between rising temperature and increased isolation of V. cholerae from the environment, which in turn was followed by the appearance of cholera cases in the community a month later.
All the abiotic and biotic factors in this region vary with season, except salinity, which was almost constant throughout the year. The isolation of V. cholerae non-O1 non-O139 varied across seasons, with peaks during summer (69%) and the monsoon (46.5%), and was minimal in winter (15.5%, p < 0.05). It was during this peak that V. cholerae O1 could be isolated from sewage and drinking water samples. With the onset of rainfall, the chances of a breach in sewage and contamination of drinking water supplies increase. On multivariate regression analysis, rainfall was found to be an independent predictor of cholera outbreaks, whereas elevated temperature had a significant effect only when combined with rainfall. Chlorophyll also exhibited a significant correlation with the occurrence of cholera outbreaks. The pH of water increased significantly with plankton blooms, although the actual changes were small. Nevertheless, there was a significant correlation between the plankton bloom, which governs pH changes, and an increase in the isolation percentage of V. cholerae from plankton. We conclude that environmental parameters play a significant role in the emergence and spread of cholera and in the abundance of V. cholerae in the environment. However, more detailed analyses of other climatic factors and genomic analysis are needed to understand the links between environmental Vibrio cholerae and clinical cholera.
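The roughly one-month lag reported between rising temperature and increased V. cholerae isolation can be quantified with a simple lagged-correlation scan. The snippet below is an illustrative sketch on synthetic monthly series, not the study's analysis; all names are invented.

```python
import numpy as np

def best_lag(driver, response, max_lag=6):
    """Lag (in months) at which `response` correlates most strongly
    with `driver`, where `response` is shifted later in time."""
    best, best_r = 0, -np.inf
    for lag in range(max_lag + 1):
        a = driver[: len(driver) - lag] if lag else driver
        b = response[lag:]
        r = np.corrcoef(a, b)[0, 1]        # Pearson correlation at this lag
        if r > best_r:
            best, best_r = lag, r
    return best

# Synthetic monthly series: isolation rate follows temperature by one month
rng = np.random.default_rng(2)
temp = 25 + 8 * np.sin(2 * np.pi * np.arange(48) / 12)
isolation = np.roll(temp, 1) + rng.normal(0, 0.5, 48)
print(best_lag(temp, isolation))  # prints 1
```

Real analyses would also test the significance of each lagged correlation and control for co-varying drivers such as rainfall.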
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.94/0.95)

Presentation: EO4Health Resilience: Leveraging Earth Observation for Public Health Preparedness

Authors: Filipe Brandao, Joao Vitorino, Annamaria Conte, Carla Ippoliti, Luca Candeloro, Shubha Sathyendranath, Dhriti Sengupta, Gunnar Brandt, Tejas Morbagal Harish, Marcello Maranesi, Rafaelle Scarano, William Wint, Simone Calderara, Marco Marchetti, Stefano Ferretti
Affiliations: GMV, Istituto Zooprofilattico Sperimentale dell'Abruzzo e del Molise (IZSAM), Plymouth Marine Laboratory, Brockmann Consult, GMATICS, Environmental Research Group Oxford, UNIMORE, Albertitalia Foundation, European Space Agency
The EO4Health Resilience project, funded by the European Space Agency (ESA), aims to evaluate the suitability of Earth Observation (EO) imagery in supporting public health decision-making, scenario analysis, and impact and risk assessments. By addressing scientific gaps and challenges while aligning with the needs of key stakeholders, the project seeks to conceptualize a practical, long-term initiative that integrates EO technology into health resilience strategies. Building on extensive knowledge from previous ESA activities, EO4Health Resilience focuses on developing and implementing value-added services that use EO data and Artificial Intelligence to identify patterns for accurately predicting the spatio-temporal dynamics of vector-borne and water-borne diseases. The project addresses two thematic domains: Vector-Borne Diseases (VBD) and Water-Borne Diseases (WBD). The VBD services center on implementing a model originally developed for Italy, which assesses the probability of West Nile Virus circulation under suitable conditions. In EO4Health Resilience, this model has been expanded to cover a broader Area of Interest, encompassing significant parts of North Africa and Europe. Its outputs are validated using ground truth data from official sources. Moreover, the consortium, in collaboration with ESA, is engaging stakeholders such as the UN Food and Agriculture Organization (FAO) to synergize with existing initiatives like the Rift Valley Fever Early Warning Decision Support Tool (RVF DST). In the WBD domain, services focus on environmental risks associated with cholera, Escherichia coli, flooding in Vembanad Lake, and non-cholera Vibrio infections in the Baltic Sea region. These services enhance existing models with geospatial data, offering valuable insights often underutilized in public health risk assessments. The project also explores novel applications of very high-resolution imagery for both VBD and WBD themes.
Preliminary results demonstrate the potential for spatially detailed insights, strengthening the links between environmental conditions and disease emergence. These advancements have been made possible through ESA’s support, including access to commercial satellite imagery that would otherwise be unattainable. A group of relevant advisors is following all activities to ensure the scientific and technical advancements are aligned with best practices. This group provides expertise across the domains of disease knowledge and advanced EO data processing, helping to tailor the project’s outputs to more effectively address the challenges of public health. On the engineering front, EO4Health Resilience is advancing public health capabilities through the "Resilience & Earth Observation Virtual Observatory." This web-based platform serves as a centralized hub for project activities. It provides access to EO and health-related data, supports the full implementation of VBD and WBD services, and integrates additional tools for analyzing patterns associated with emerging diseases. Designed to be user-friendly, the observatory enables even non-specialists in EO and geographic data to access and utilize critical information, promoting the wider uptake of project outputs. Through innovative EO applications, strategic partnerships, and user-centric tools, EO4Health Resilience is paving the way for a more resilient public health system equipped to anticipate and mitigate disease risks globally.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.96/0.97)

Session: A.02.05 Peatland

Peatlands cover only 3% of the world’s land, mainly in the boreal and tropical zones, but they store nearly 30% of terrestrial carbon, twice the carbon stored in forests. When drained and damaged, they exacerbate climate change, emitting 2 Gt of CO2 every year, which accounts for almost 6% of all global greenhouse gas emissions. The unprecedented observations collected by the Copernicus Sentinel family and other sensors allow new ways to monitor and manage peatlands. Emphasis will be put on advances in improved mapping and monitoring of intact, degraded and cultivated peatlands for conservation, management and restoration in both a global and a specific climate zone (e.g. boreal, temperate, tropical) context. This session will showcase some of the most recent key achievements, including methods/algorithms, science and applications.

Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.96/0.97)

Presentation: Monitoring Tropical Peatland Hydrology With Spaceborne L-band SAR

Authors: Antje Uhde, Dr. Laura Hess, Dr. Alison Hoyt, Prof. Christiane Schmullius, Dr. Euridice N. Honorio Coronado, Edmundo Mendoza, Dr. Gerardo Flores Llampazo, Prof. Timothy Baker, Prof. Susan Trumbore, Dr. Scott Winton
Affiliations: Department of Biogeochemical Processes, Max Planck Institute For Biogeochemistry, Department of Geography, Friedrich Schiller University, Earth Research Institute, University of California Santa Barbara, Department of Earth System Science, Stanford University, Royal Botanic Gardens, Kew, Department of Environmental Studies, University of California Santa, Instituto de Investigaciones de la Amazonía Peruana, School of Geography, University of Leeds
Globally, peatlands store large amounts of carbon (C), but the fate of that C is highly uncertain. While many studies focus on high-latitude peatlands, recent work shows that tropical peatlands store large amounts of C that can be vulnerable to rapid loss if hydrological conditions change. In tropical peatlands, water table levels drive greenhouse gas (GHG) emissions. During low water table conditions, C is emitted to the atmosphere through oxidation, while under high water and anaerobic conditions, methane (CH4) is produced. Knowledge of water table dynamics therefore provides important information on the processes regulating tropical peatland GHG dynamics, which is necessary to assess the impact of global warming and climate extremes. However, very few field observations of water table levels in tropical lowland peatlands are available. In this study, we used time-series data of in-situ water table dynamics for 12 sites in the Pastaza-Marañón Foreland Basin in Peru (2018–2021) and 9 sites in the eastern lowlands of Colombia (2023–2024). These were combined with information on ecosystem structure to model changes in tropical peatland above-ground water tables using L-band HH backscatter. We treated each PALSAR-2 orbit separately to account for varying incidence angles, which increases the total number of water-level vs. backscatter time series to 41. Using a k-means clustering analysis, we found two ecosystem types in which water table changes correlate linearly with changes in PALSAR-2 ScanSAR L-band HH backscatter. The first cluster (1) consists of short-statured forest with a GEDI L2A relative height 95 (rh95) of 6.5 m–12 m and a multi-temporal standard deviation of L-band HV backscatter > 0.42, combined with a multi-temporal mean NDVI < 0.82 (6 sites with a total of 10 time series). The second cluster (2) is characterized by a GEDI rh95 of 21 m–28 m and a GEDI L2B foliage height diversity (fhd) index > 3 (7 sites with a total of 9 time series).
For sites outside these criteria, we observed logarithmic, exponential or no correlation with PALSAR-2 HH backscatter. We next used a multiple linear regression model to predict the sensitivity of the L-band HH backscatter to changes in above-ground water table (i.e. the slope of the linear regression). In our preliminary model, the highest correlation coefficients were obtained for the GEDI L2B variables (total canopy cover, foliage height diversity) and the L-band HV multi-temporal mean backscatter and standard deviation. The PALSAR-2 incidence angle had a larger effect on the multiple linear regression for cluster 2 sites than for cluster 1. Using leave-one-out cross-validation, we obtained an average mean absolute error of up to 6 cm per dB, meaning we predicted the change in water table per 1 dB increase in L-band HH backscatter with an average accuracy of ±6 cm. With this model we can monitor changes in tropical peatland above-ground water table dynamics using freely available Earth observation data. In addition to spatial extrapolation beyond our measurement sites, we also want to test temporal extrapolation by applying a temporal cross-validation for sites with multiple years of water table data. This would allow us to assess the influence of past and future El Niño-Southern Oscillation (ENSO) extreme events, as well as global warming, on tropical peatland water table dynamics. Knowledge of seasonal water table dynamics, and of changes therein, enables us to draw conclusions about changes in the GHG balance of tropical peatland ecosystems.
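The leave-one-out cross-validation of a multiple linear regression described above can be sketched as follows. The data are synthetic and the predictor description merely echoes the kind of structure variables named in the abstract; this is not the authors' code.

```python
import numpy as np

def loo_mae(X, y):
    """Leave-one-out mean absolute error of an ordinary least-squares
    multiple linear regression (intercept added internally)."""
    X = np.column_stack([np.ones(len(X)), X])
    errs = []
    for i in range(len(y)):
        keep = np.arange(len(y)) != i            # hold out one sample
        beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        errs.append(abs(X[i] @ beta - y[i]))     # error on the held-out sample
    return float(np.mean(errs))

# Synthetic stand-in: predict each site's water-table sensitivity
# (cm per dB of HH backscatter) from two structure predictors,
# e.g. canopy cover and foliage height diversity (illustrative only)
rng = np.random.default_rng(1)
n_sites = 19
predictors = rng.uniform(0, 1, size=(n_sites, 2))
sensitivity = (10 + 8 * predictors[:, 0] - 5 * predictors[:, 1]
               + rng.normal(0, 1.0, n_sites))
print("LOO MAE:", round(loo_mae(predictors, sensitivity), 2))
```

Refitting once per held-out sample keeps the error estimate honest for the small per-site sample sizes typical of field campaigns like this one.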
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.96/0.97)

Presentation: Multi-temporal Mapping of Peatland Species Abundance and Condition After Rewetting

Authors: Christina Hellmann, Dr. Bernd Bobertz, Enna Drege, Duc-Viet Nguyen, Dr Vu-Dong Pham, Malin Stephan, Ariane Tepaß, Dr. Marcel Schwieder, Dr Sebastian van der Linden
Affiliations: Institute of Geography and Geology, University of Greifswald, Partner in the Greifswald Mire Centre, Friedrich-Ludwig-Jahn-Str. 16, Thünen Institute of Farm Economics, Bundesallee 63
Peatlands are huge carbon sinks but, due to anthropogenic impact, also massive sources of greenhouse gas emissions. Peatlands around the world have been drained, e.g., for agriculture in temperate Europe. Although drained peatlands occupy only 0.5% of the global land surface, they cause 4% of total GHG emissions. In the north-eastern German state of Mecklenburg-Western Pomerania, their share amounts to almost 40% of total emissions. To stop these emissions, peatlands need to be rewetted, but rewetting measures are not always successful. Therefore, monitoring concepts are required. The occurrence of typical peatland vegetation gives an indication of abiotic environmental factors, especially hydrological conditions and nutrient supply. Further, phenological stages can differ within species at a given time, indicating, e.g., stress induced by low water levels or shifted phenological development following differences in micro-climate. Hyperspectral satellite missions, such as EnMAP, offer a good opportunity for monitoring rewetted peatlands. EnMAP imagery offers high spectral resolution for relatively large areas and multiple dates per year. The high spectral resolution and range promise a high information content regarding vegetation composition and state. We quantified the abundance and condition of typical peatland genera or species for drained and rewetted peatlands in the Peene and Trebel valleys in Mecklenburg-Western Pomerania. Multi-temporal hyperspectral EnMAP imagery from two years (June and August 2023, May and September 2024) was used in a two-step approach, in both cases with regression-based unmixing and synthetic training data. (1) We derived the abundance of Phragmites australis, Phalaris arundinacea, Typha spp., Carex spp., and other wetland vegetation, including, e.g., Juncus effusus, Glyceria maxima, Iris spp., and Agrostis stolonifera, within the 30-m pixels for each point in time.
(2) We disentangled the pixel-wise fractions of green vegetation (GV), non-photosynthetic vegetation (NPV) and water for each point in time. By combining the two products we can compare species-wise GV-NPV fractions over time. The abundance of GV and NPV at a given time gives an indication of the phenological stage. These stages vary between species, but also within species. Phenological differences within species highlight small-scale differences in abiotic environmental factors that may indicate stress from insufficient rewetting or, conversely, a successful rewetting. In this way, the multi-date hyperspectral data complements research on peatland management with essential information on characteristic vegetation. Our results show that the suggested approach, i.e., the unmixing of multi-temporal hyperspectral satellite data, supports ongoing peatland research. Mapping and monitoring of peatland vegetation and rewetting processes benefit from the increasing availability of multi-temporal hyperspectral satellite data. Upcoming hyperspectral satellite missions such as CHIME, with improved spatial and temporal coverage, will make it possible to extend the approach for monitoring rewetted peatlands to large areas.
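As a minimal illustration of the unmixing step described above: the sketch below performs linear spectral unmixing with non-negative least squares plus a sum-to-one renormalization. This is a simplified stand-in for the authors' regression-based unmixing with synthetic training data; the endmember spectra and band count are invented for demonstration (real EnMAP data has ~224 bands).

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical endmember library: one mean reflectance spectrum per row for
# three cover types across 5 illustrative bands.
endmembers = np.array([
    [0.05, 0.08, 0.35, 0.40, 0.30],   # green vegetation (GV)
    [0.15, 0.20, 0.25, 0.30, 0.35],   # non-photosynthetic vegetation (NPV)
    [0.02, 0.03, 0.02, 0.01, 0.01],   # water
])

def synth_mixture(fractions, noise=0.0, rng=None):
    """Linearly mix the endmember spectra with the given fractions (sum to 1)."""
    spectrum = fractions @ endmembers
    if noise > 0.0:
        rng = rng or np.random.default_rng(0)
        spectrum = spectrum + rng.normal(0.0, noise, spectrum.shape)
    return spectrum

def unmix(pixel_spectrum):
    """Estimate per-endmember fractions with non-negative least squares,
    then renormalize so the fractions sum to one."""
    coeffs, _ = nnls(endmembers.T, pixel_spectrum)
    return coeffs / coeffs.sum()

true_fracs = np.array([0.6, 0.3, 0.1])
pixel = synth_mixture(true_fracs)   # noise-free synthetic mixed pixel
est = unmix(pixel)
```

In a regression-based variant, many such synthetic mixtures would instead serve as training data for a regressor that maps spectra to fractions.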

Tuesday 24 June 08:30 - 10:00 (Room 0.96/0.97)

Presentation: Return to origins: restored peatlands align with intact peatlands in satellite-derived albedo and land surface temperature over time, but not in vegetation properties

Authors: Iuliia Burdun, Mari Myllymäki, Rebekka R.E. Artz, Mélina Guêné-Nanchen, Leonas Jarašius, Ain Kull, Erik A. Lilleskov, Kevin McCullough, Mara Pakalne, Jiabin Pu, Jurate Sendzikaite, Liga Strazdina, Miina Rautiainen
Affiliations: School of Engineering, Aalto University, Natural Resources Institute Finland (Luke), Ecological Sciences, James Hutton Institute, Department of Plant Sciences, Peatland Ecology Research Group (PERG) and Centre for Northern Studies (CEN), Université Laval, Foundation for Peatlands Restoration and Conservation, University of Tartu, Institute of Ecology and Earth Sciences, USDA Forest Service, Northern Research Station, USDA Forest Service, Northern Research Station, University of Latvia, Botanical Garden, Department of Earth and Environment, Boston University
Restoring degraded peatlands presents a powerful opportunity for climate change mitigation. As a result, global initiatives to restore peatlands have been showing significant growth, especially in northern regions where degradation is most extensive. To ensure the success of these restoration efforts, continuous and comprehensive spatial monitoring is crucial. Remote sensing offers a powerful tool for enabling this type of monitoring, providing consistent, large-scale data across regions. Capturing essential climate variables over time allows us to track restoration progress with precision and continuity. In our work, we aimed to uncover restoration-induced changes in essential climate variables of degraded northern peatlands. We hypothesized that, prior to restoration, degraded peatlands with different initial land cover types display more pronounced differences compared to intact peatlands, but these differences diminish as restoration progresses. Utilizing over two decades of satellite data, we analyzed climate variables to track changes in restored peatlands, evaluating whether they are progressing toward their original, natural conditions in Finland, Estonia, Latvia, Lithuania, the United Kingdom, Canada, and the United States of America. By leveraging a long-term dataset across a wide geographical range of degraded northern peatlands, encompassing four distinct pre-restoration land cover types, we observed significant restoration-driven changes. Overall, we found that restored peatlands tended to resemble intact peatlands more closely after a decade following restoration. Our findings highlighted diverse and complex restoration-induced changes in satellite-derived observations. Restoration impacts were particularly notable in vegetation cover, surface temperature, and albedo, with the latter two showing the strongest indications of peatlands gradually recovering their natural state over time. 
Such changes have the potential to impact local and regional climate dynamics, especially in areas where large-scale restoration efforts are underway. With the increasing number of restored peatlands, particularly in Europe and North America, it becomes essential to incorporate these factors into evaluations of the climatic effects of land-use change.

Tuesday 24 June 08:30 - 10:00 (Room 0.96/0.97)

Presentation: Large Scale Assessment of Fire Impacts on Siberian Peatlands Carbon Through High-Resolution Datasets

Authors: Philippe Ciais, Filipe Aires, Clement J. F. Delcourt, Thu-Hang Nguyen, Emilio Chuvieco, Sander Veraverbeke, Chunjing Qiu, Amin Khairoun
Affiliations: Universidad de Alcalá, Environmental Remote Sensing Research Group, Department of Geology, Geography and the Environment, Laboratoire des Sciences du Climat et de l’Environnement, UMR 1572 CEA-CNRS-UVSQ, Université Paris-Saclay, Research Center for Global Change and Complex Ecosystems, School of Ecological and Environmental Sciences, East China Normal University, LERMA, CNRS/Observatoire de Paris/Sorbonne University, Faculty of Science, Vrije Universiteit Amsterdam
Peatlands are the world’s largest natural terrestrial carbon sink. Arctic fires represent one of the major agents responsible for carbon release from permafrost. Coarse-resolution Burned Area (BA) and emission datasets revealed that fire has strongly affected carbon-rich peatlands in the Siberian Arctic region, leading to striking belowground carbon emissions in recent years. However, accurate evaluations of the impacts of these fires on peatland carbon stocks using high-resolution data are lacking. In this work, we present a wall-to-wall assessment of fire impacts over the entire Siberian region, extending over around 9 Mkm² (extent: 63°E-180°E; 60°N-74°N), for the period 2001-2023 using new high-resolution maps of BA and peatland cover. We analyse peat fire trends over time, their impacts on belowground carbon stocks, and the drivers of spatio-temporal variability. We found that yearly BA shows large variability, ranging from 0.48 Mha in 2015 to 10.58 Mha in 2021, with an average of 4.68 Mha and a coefficient of interannual variation exceeding 57%. A significant increase was observed in the years 2019-2021, mainly linked to extremely anomalous dry summers. Our BA estimates were higher than coarse-resolution BA products (88.92 ± 24.81% and 62.76 ± 16.41% higher than MCD64A1 and FireCCI51, respectively), while the trends were similar. Notably, 2020 emerged as the most striking fire season for peatlands as a result of extensive fires in carbon-rich permafrost above the Arctic Circle. Overall, peat fires accounted for 33.96 ± 2.31% of total BA. Carbon emissions from fires and burn depth were modelled using a variety of predictors including climate, soil, biomass and fire properties. Annual carbon emissions ranged from 6.12 to 158 Mt C, of which 69.25 ± 2.28% were attributed to belowground carbon emissions of burned peatlands, contrasting with GFED4s and GFED5 fractions that do not exceed 4.41% and 12.57%, respectively.
A causal inference model revealed that drought and fire weather indicators control 58% of the interannual variability of peat fire occurrence in three distinct zones of Siberia (western, central and eastern). Additionally, the model accounts for 56% and 46% of the conditional variability (50% and 28% of the marginal variability) in belowground carbon emissions and in the fraction of peat fires relative to total BA, respectively. This analysis highlights the critical role of fire in peatland degradation with improved certainty over previous studies.
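As a small worked example of the interannual-variability statistic quoted above: the coefficient of interannual variation is the standard deviation of the annual burned-area series divided by its mean. The series below is invented for illustration only; the abstract reports the actual 2001-2023 figures.

```python
import numpy as np

# Invented annual burned-area series in Mha (NOT the study's data).
ba = np.array([2.0, 4.0, 8.0, 1.0, 6.0])

mean_ba = ba.mean()                          # mean annual burned area
cv = ba.std(ddof=1) / mean_ba * 100.0        # coefficient of interannual variation, %
```

Using the sample (ddof=1) standard deviation, as is conventional for an observed series of yearly totals, this toy series yields a CV of roughly 68%.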

Tuesday 24 June 08:30 - 10:00 (Room 0.96/0.97)

Presentation: Earth Observation for Peatlands: An Integrated Framework for Validation of Peatland Properties

Authors: Harika Ankathi, Professor Kevin Tansey, Gerardo Lopez Saldana, Ian Jory, Yara Al Sarrouh, Susan Page, Michel Bechtold, Fred Worrall, Lisa Beccaro, Cristiano Tolomei, Stefano Salvi, Christian Bignami
Affiliations: University Of Leicester, Assimila Ltd, KU Leuven, Durham University, Istituto Nazionale di Geofisica e Vulcanologia
Storing over 600 gigatons of carbon – twice that held in all global forest biomass – peatlands are critical yet increasingly threatened ecosystems that demand urgent, sophisticated monitoring solutions. With significant areas facing degradation, their effective management is hampered by the lack of consistent, high-quality monitoring systems. The ESA WorldPeatland Project addresses this critical challenge by pioneering an integration of Earth Observation (EO) technologies to develop a standardized, global framework for comprehensive peatland assessment. Through systematic stakeholder engagement, the project identifies critical knowledge gaps and develops innovative solutions for mapping, monitoring, and assessing peatland conditions across diverse biomes, marking a significant advance in peatland science and conservation. Guided by the monitoring needs identified through this engagement, the project generates a comprehensive suite of Earth Observation products to address them. Our foundational work involves peatland extent analysis, where we systematically compare and validate peat extent mapping using multi-source satellite data. Our SM_L4-Sentinel-1/2 water level dynamics product, delivered at 1 km resolution, addresses the crucial stakeholder need for monitoring peatland hydrology and assessing restoration effectiveness. For tracking peatland degradation, we implement ground motion measurements using both E-PS and ISBAS techniques, enabling detailed subsidence monitoring for possible carbon loss assessments. Responding to requirements for vegetation and biodiversity monitoring, we generate bio-geophysical parameters including Leaf Area Index (LAI), Land Surface Temperature (LST), and specialized vegetation indices using data from MODIS, Sentinel-2, and Landsat satellites. These parameters provide essential information for assessing revegetation progress and habitat development.
To address the stakeholder need for holistic peatland assessment, these individual products are synthesized into integrated health indicators that combine hydrological, ecological, and geophysical parameters. All products maintain consistent temporal resolutions from daily to monthly observations and spatial resolutions ranging from 10m to 1km, directly addressing user requirements for both broad-scale monitoring and detailed site analysis. Following stakeholder feedback emphasizing accessibility, these products are delivered through standardized, web-based applications designed for both technical and non-technical users. The ESA WorldPeatland Project focuses on advancing global peatland monitoring through systematic validation and intercomparison with existing datasets. Our validation framework encompasses comparison with ground measurements, high-resolution reference data, and other operational products. For initial assessments of peatland extent mapping, we adopted a systematic approach that integrates multi-source satellite datasets. We detail comparisons of Global Peatland Map (GPM), UKCEH, CORINE, Congo-Peat, ESA, MODIS, ESA CCI, and ESRI Land Cover datasets. Moving beyond peat extent for carbon stock estimation, we aim to integrate satellite-derived vegetation metrics with soil carbon models to quantify carbon storage and emissions. Peat depth mapping will utilize a combination of radar backscatter, LiDAR data, and field surveys to refine depth estimates across varying peatland types. To address fire risk and disturbance monitoring, we propose leveraging thermal anomaly datasets from MODIS and Sentinel-3 alongside vegetation dryness indices. Furthermore, methane flux modelling will be explored by combining wetness indicators with climate variables to assess greenhouse gas emissions more accurately. These efforts will further enhance the utility of our framework for addressing diverse stakeholder needs. 
Initial validation across eleven test sites highlights the framework's adaptability to diverse peatland ecosystems. For peat extent mapping, we conducted agreement and disagreement analysis using multiple reference datasets, including Global Peatland Map (GPM), UKCEH, CORINE, Congo-Peat, ESA, MODIS, ESA CCI, and ESRI Land Cover datasets. By comparing these datasets, we identified significant discrepancies in peat extent estimates, particularly in regions with complex hydrology and vegetation cover. For instance, in the UK, notable differences were observed between GPM and UKCEH, emphasizing the need for robust validation and harmonization. Our analysis provides a valuable training dataset to improve the accuracy and consistency of future peatland mapping efforts, contributing to a more reliable assessment of global peat carbon stocks and climate mitigation potential. This work contributes to advancing global peatland conservation by bridging the gap between Earth Observation capabilities and stakeholder requirements. The developed methodology provides a foundation for consistent, long-term monitoring of peatland ecosystems, supporting both policy decisions and practical conservation efforts. Future developments will focus on expanding the validation network through continued stakeholder engagement and incorporating emerging satellite sensors for enhanced monitoring capabilities.
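The agreement and disagreement analysis described above can be sketched as a pixel-wise comparison of binary peat-extent masks. The snippet below computes overall agreement and intersection-over-union for two toy masks; a real comparison (e.g. GPM vs UKCEH) would first require reprojecting and resampling the products to a common grid.

```python
import numpy as np

# Hypothetical binary peat-extent masks on a shared 3x3 grid
# (1 = peat, 0 = non-peat); values invented for illustration.
a = np.array([[1, 1, 0], [0, 1, 0], [1, 0, 0]])
b = np.array([[1, 0, 0], [0, 1, 1], [1, 0, 0]])

agreement = (a == b).mean()                       # fraction of pixels where products agree
both_peat = np.logical_and(a == 1, b == 1).sum()  # pixels both products call peat
either = np.logical_or(a == 1, b == 1).sum()      # pixels either product calls peat
iou = both_peat / either                          # intersection-over-union of peat extent
```

Overall agreement is inflated when non-peat dominates the scene, so a class-specific metric like IoU on the peat class is the more informative disagreement measure.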

Tuesday 24 June 08:30 - 10:00 (Room 0.96/0.97)

Presentation: SAR coherence and backscatter time series for monitoring restored, rewetted, abandoned and natural peatlands

Authors: Koreen Millard, Vincent Ribberink, Tauri Tampuu, Ain Kull
Affiliations: Carleton University, KappaZeta, University of Tartu
Although peatlands are gaining increased recognition as important habitats and stocks of carbon, they continue to be threatened by anthropogenic pressures, including the extraction of peat, drainage for agriculture and forestry, and climate change. The status of restoration and rewetting is important to monitor in order to ensure biodiversity conservation and greenhouse gas emissions reduction targets are met [1]. Peatlands of different status (natural, abandoned extractions, restored, rewetted) exhibit different vegetation, soil and ecohydrological characteristics. Synthetic Aperture Radar (SAR) offers a method to explore the spatial and temporal changes in surface and vegetation conditions within these ecosystems. This research demonstrates the use of time series SAR backscatter and coherence at natural and restored peatlands across Canada. At seven natural peatland sites across Canada, soil moisture data were acquired from Ameriflux and field data collection efforts spanning the period of Sentinel-1 data availability (e.g. spring 2017 to the end of the respective soil moisture time series, where end dates varied by station). No soil moisture data are available for restored and rewetted sites; however, spatial locations are available for >200 sites across Canada where the Moss Layer Transfer Technique (MLTT) and other restoration and rewetting techniques (e.g. ditch blocking) were applied. To provide a comparison to these restored and rewetted sites, natural peatlands, abandoned extraction sites, and active extraction sites were also identified within 10 km of the restored/rewetted sites.
Correlation between soil moisture and backscatter in natural sites was highly variable by site. Soil moisture time series were compared with Sentinel-1 backscatter time series using Seasonal and Trend decomposition using Loess (STL [2]) to analyse long-term trends in both soil moisture and backscatter, with the goal of determining whether any sites were in drought, undergoing long-term wetting, or exhibiting no change in hydrology. The STL method allows us to remove the variation due to seasonality (e.g. phenology) and focus on change over time that is not regular [3]. This analysis indicated many similar trends in backscatter and soil moisture, but significant differences between peatlands with different surface conditions (e.g. peatlands with many ponds vs peatlands dominated by Sphagnum lawns). Interferometric coherence indicates the similarity of a pixel between the two images of a 12-day pair, and the presence of vegetation (trees and shrubs) usually results in low coherence. Despite the trees and shrubs that exist in some types of natural peatlands, they often exhibit high coherence, and InSAR coherence and displacement have been used in peatland ecosystem mapping [4], [5], in estimating surface height changes due to bog breathing [6], and in estimating water table and soil moisture conditions in peatlands [7], [8]. In this study, coherence for the natural peatland, active extraction and restored sites was extracted at 50 m spatial resolution for each date pair between 2017 and the end of the soil moisture time series at natural sites, and until August 2024 at disturbed sites (e.g. restored/rewetted/abandoned). Generally, coherence was significantly lower in rewetted sites than in all other classes, and rewetted sites demonstrated the greatest within-site variability in coherence. This likely indicates that the water regime is becoming less homogeneous and that significant vegetation (e.g. upland trees such as birch) has grown in some parts of the rewetted sites.
This was not the case in the MLTT-restored or drier abandoned sites. Other classes were similar to each other in fall and spring, but in summer there was a clear distinction in coherence between natural and MLTT peatlands on the one hand and abandoned, rewetted and active extraction sites on the other. While natural peatlands did demonstrate significantly higher coherence than restored sites in summer, the restored peatlands were more similar to natural peatlands than to the other classes, indicating that these sites are beginning to show vegetation and moisture conditions similar to natural peatlands. An analysis of coherence against time since restoration indicated lower coherence beyond 15 years since rewetting, although no significant differences in coherence appeared to be related to time since restoration. This may also highlight the likelihood that these sites are gradually being colonized by upland vegetation. These findings underscore the potential of SAR backscatter and coherence time series analysis to provide critical insights into the ecohydrological dynamics of peatlands under various management conditions. By distinguishing trends in moisture and vegetation, SAR backscatter and coherence can support the effective monitoring of restoration efforts in these vital carbon-rich ecosystems. [1] E. B. Barbier and J. C. Burgess, “Economics of Peatlands Conservation, Restoration and Sustainable Management,” SSRN Electron. J., 2024, doi: 10.2139/ssrn.4695533. [2] “6.6 STL decomposition,” in Forecasting: Principles and Practice (2nd ed.). Accessed: Nov. 30, 2024. [Online]. Available: https://otexts.com/fpp2/stl.html [3] K. Millard, S. Darling, N. Pelletier, and S. Schultz, “Seasonally-decomposed Sentinel-1 backscatter time-series are useful indicators of peatland wildfire vulnerability,” Remote Sens. Environ., vol. Accepted, In press, 2022. [4] K. Millard, P. Kirby, S. Nandlall, A. Behnamian, S. Banks, and F.
Pacini, “Using Growing-Season Time Series Coherence for Improved Peatland Mapping: Comparing the Contributions of Sentinel-1 and RADARSAT-2 Coherence in Full and Partial Time Series,” Remote Sens., vol. 12, no. 15, Art. no. 15, Jan. 2020, doi: 10.3390/rs12152465. [5] N. J. Pontone, “The Classification and Characterization of Canadian Boreal Peatland Sub-classes,” Master of Science, Carleton University, Ottawa, Ontario, 2023. doi: 10.22215/etd/2023-15739. [6] T. Tampuu, F. De Zan, R. Shau, J. Praks, M. Kohv, and A. Kull, “CAN Bog Breathing be Measured by Synthetic Aperture Radar Interferometry,” in IGARSS 2022 - 2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia: IEEE, Jul. 2022, pp. 16–19. doi: 10.1109/IGARSS46834.2022.9883421. [7] T. Tampuu, J. Praks, F. De Zan, M. Kohv, and A. Kull, “Relationship between ground levelling measurements and radar satellite interferometric estimates of bog breathing in ombrotrophic northern bogs,” Mires Peat, vol. 29, no. 17, pp. 1–28, Aug. 2023, doi: 10.19189/MaP.2022.OMB.Sc.1999815. [8] T. Tampuu, J. Praks, R. Uiboupin, and A. Kull, “Long Term Interferometric Temporal Coherence and DInSAR Phase in Northern Peatlands,” Remote Sens., vol. 12, no. 10, Art. no. 10, Jan. 2020, doi: 10.3390/rs12101566.
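As an illustrative aside on the coherence measure used throughout this abstract: the snippet below sketches the standard boxcar estimator of interferometric coherence magnitude between two co-registered complex SAR images. It is a generic sketch with simulated data, not the authors' Sentinel-1 processing chain.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coherence(s1, s2, win=5):
    """Boxcar estimate of coherence magnitude between two co-registered
    complex images: |<s1 s2*>| / sqrt(<|s1|^2> <|s2|^2>), with the spatial
    averages <.> taken over a win x win window."""
    cross = s1 * np.conj(s2)
    num = uniform_filter(np.real(cross), win) + 1j * uniform_filter(np.imag(cross), win)
    den = np.sqrt(uniform_filter(np.abs(s1) ** 2, win)
                  * uniform_filter(np.abs(s2) ** 2, win))
    return np.abs(num) / np.maximum(den, 1e-12)

# Simulated circular-Gaussian "speckle" scenes for demonstration.
rng = np.random.default_rng(1)
shape = (64, 64)
s = rng.normal(size=shape) + 1j * rng.normal(size=shape)
noise = rng.normal(size=shape) + 1j * rng.normal(size=shape)

gamma_same = coherence(s, s)        # identical scenes -> coherence near 1 everywhere
gamma_decorr = coherence(s, noise)  # independent scenes -> low coherence on average
```

A stable, vegetation-free peat surface behaves like the first case (high coherence between the two dates of a pair), while growing vegetation or changing water surfaces push pixels toward the decorrelated case.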

Tuesday 24 June 08:30 - 10:00 (Hall L1/L2)

Session: F.02.09 The Space for Climate Observatory Initiative: accelerating the deployment of digital solutions for climate change adaptation

The Space for Climate Observatory (SCO) is an international initiative which aims to support the development of Earth observation-based operational tools for climate adaptation, mitigation and monitoring at the local level, as close as possible to users. Currently, 53 signatories in total are party to its International Charter, representing 28 countries and 6 international organizations. The SCO maintains a portfolio of 123 projects covering many themes: ocean, coastal areas, biodiversity, extreme events, agriculture, water, etc.

To operate effectively, the SCO has established global governance bodies but primarily relies on local implementations of varying degrees of structure. These local implementations are crucial for generating projects and for fostering synergies among private-sector ecosystems, research, public policies, public funding, and local climate challenges.



This session will present how local interfaces help bridge the gap between science, users, and decision-makers, with examples from Europe, France, the UK and Norway. It will also showcase projects that have delivered concrete tools to end-users.



Agenda:

1. Introduction – Presentation of the SCO with a focus on SCO France

2. Roundtable – From Science to Users: the role of SCO and local interfaces in turning space data into action

Speakers: NOSA, Space4Climate, ESA, ACRI-ST, and a researcher on EO governance.

3. Project Pitches – Operational tools from SCO addressing real-world needs (e.g. agriculture, carbon, coasts, forests).

Speakers: MEOSS, GlobEO, Hytech-Imaging, CNES, Hydromatters

Convenors: Claire Macintosh (ESA), Frédéric Bretar (CNES)

Moderators:


  • Frédéric Bretar - Head of the Space for Climate Observatory (SCO), CNES (French Space Agency)
  • Alexia Freigneaux - International Development Officer for the Space for Climate Observatory (SCO), CNES (French Space Agency)

Speakers:


  • Susanne Mecklenburg - Head of the Climate Office, ESA (European Space Agency)
  • Anja Sundal - Senior Adviser, Science and Earth Observation, NOSA (Norwegian Space Agency)
  • Krupa Nanda Kumar - Climate Services Development Manager, Space4Climate
  • Antoine Mangin - Scientific Director, ACRI-ST
  • Dorian Droll - Researcher, CNES-INSP
  • Thomas Ferrero - CEO, MEOSS
  • Stéphane Mermoz - CEO and Research Scientist, GlobEO
  • Marie Jagaille - Product Line Manager, Hytech-Imaging
  • Vincent Lonjou - Earth Observation Downstream Application Project Manager, CNES (French Space Agency)
  • Adrien Pâris - HydroMatters
  • Swed-Coast Blue Carb - TBC


Tuesday 24 June 08:30 - 10:00 (Hall G2)

Session: C.05.09 EO National Missions Implemented by ESA - Setting the Scene

The session will be used to introduce the different National Projects under implementation at ESA, and to exchange views on the challenges and opportunities ahead.

Speakers:


  • S Lokas – ESA
  • Konstantinos Karantzalos – Secretary General, Greek Ministry of Digital Governance and Greek Delegate to the ESA Council
  • Dimitris Bliziotis – Hellenic Space Centre and Greek delegate to PBEO
  • G. Costa – ESA
  • F. Longo – ASI
  • D Serlenga – ESA
  • Head of Delegation to ESA – MRiT
  • R. Gurdak – POLSA
  • L. Montrone – ESA
  • N. Martin Martin / J.M. Perez Perez – (Affiliation not specified)
  • Pedro Costa – CTI
  • Betty Charalampopoulou – Geosystems Hellas CEO and BoD Hellenic Association of Space Industry
  • Dr. hab. inż. Agata Hościło – Institute of Environmental Protection – National Research Institute
  • A. Taramelli – ISPRA
  • V. Faccin – ESA
  • R. Lanari – CNR/IREA
  • M. Manunta – CNR/IREA
  • L. Sapia – ESA
  • E. Cadau – ESA
  • Rosario Quirino Iannone – ESA
  • Mario Toso – ESA
  • Enrique Garcia – ESA
  • Ana Sofia Oliveira – ESA
  • Ariane Muting – ESA
  • V. Marchese – ESA
  • Jolanta Orlińska – POLSA
  • G. Grassi – ESA

Tuesday 24 June 08:30 - 10:00 (Hall G1)

Session: C.03.08 The European Copernicus Space component: status, future prospects and challenges - PART 1

Copernicus is the European Earth monitoring programme which opened a new era in Earth Observation, with continuous and accurate monitoring of our planet and continuous improvement to respond to the new challenges of global change.
Since it became operational in 2014 with the launch of the first dedicated satellite, Sentinel-1A, Copernicus has provided a wealth of essential, timely and high-quality information about the state of the environment, allowing borderless environmental and emergency monitoring, and enabling public authorities to take decisions when implementing European Union policies.
The intense use of Copernicus and the increased awareness of its potential have also generated great expectations, leading to an evolved Copernicus system that has embraced emerging needs, new user requirements and a new commercial dimension.
This future evolution of the Copernicus programme will fill observational gaps and will help monitor the “pulse” of our planet for the decades to come, but to do so, programmatic and budgetary commitments will need to be maintained.

Presentations and speakers:



Sentinel-1C transfer of ownership side event


  • S. Cheli - ESA, Director of Earth Observation Programmes
  • M. Facchini - EC, DG DEFIS

S. Cheli's introductory key speech


The European Union in the Copernicus Space Component


  • M. Facchini - EC, DG DEFIS

ESA and the Copernicus Space Component: present and future perspectives


  • P. Potin - ESA, Head Copernicus Space Office

The future Copernicus Sentinel satellite missions


  • P. Bargellini - ESA, Copernicus Space Segment Programme Manager

The Copernicus Sentinel missions and data management framework: European excellence in high quality data and services


  • B. Rosich - ESA, Head Copernicus Ground Segment and Data Management Division

The Copernicus current Sentinel satellite missions: Sentinel-1


  • N. Miranda - ESA, Sentinel-1 Mission Manager

Tuesday 24 June 08:30 - 10:00 (Room 1.14)

Session: A.03.07 The ESA-NASA Carbon Budget Reconciliation Challenge

Earth Observation plays a critical role in supporting the estimation of greenhouse-gas fluxes between land, ocean and atmosphere. Space-based optical, radar and lidar instruments currently provide information on vegetation state (biomass), land-use change, land dynamics, and greenhouse-gas concentrations that is used in bottom-up or top-down modelling approaches to support scientific understanding of the carbon cycle and hence inform policy applications.
The NASA-ESA Carbon Budget Grand Challenge has been established to help reconcile bottom-up and top-down estimates of greenhouse gas emissions, in response to one of the key recommendations from the Fourth Carbon from Space workshop. It coincides with the conclusion of the Global Carbon Project’s Second Regional Carbon Cycle Assessment and Processes (RECCAP2) study (AGU special collection). RECCAP2 has identified several challenges that can be addressed to improve the timeliness, coordination, and methodologies used for RECCAP3 (2020-2029). These include the support and training of early career scientists, the provision of datasets using cloud-based tools and standard formats, including datacubes, and the development of a low-latency workflow for implementing tiered budgets of varying complexity at annual to multi-annual cadence. The Reconciliation Challenge will address the challenges raised by RECCAP2 through a NASA-ESA partnership that will provide coordination and early career support to address the following key tasks:
•A synthesis of RECCAP2 in the context of reconciling bottom-up and top-down budgets including lessons learned especially in the context of EO. The synthesis will help prioritize planning for RECCAP3 in terms of data needs from EO and identify a path forward for sub-regional and national scale GHG budgets.
•A dedicated effort to coordinate EO contributions to the development of annual updates on the status and dynamics of the terrestrial carbon cycle that leverage existing and planned NASA, ESA and other space agency satellite missions (OCO-2/3, ICESat-2, GEDI, TROPOMI, Sentinel-1/2, BIOMASS, NISAR, SWOT), as well as identifying datasets and modelling frameworks to improve top-down and bottom-up reconciliation in the intermediate years leading to the third RECCAP study.
•The establishment of a low-latency framework for GHG budgets to help align the RECCAP process better with the Global Carbon Budget exercise.
•Provide leadership opportunities and involvement to Early Career Scientists who contributed to RECCAP2 and to those who are enthusiastic about being involved with RECCAP3.

Session Structure
This invited insight will:
•Introduce the ESA-NASA Carbon Budget Reconciliation Challenge
•Review and consolidate the tasks to be conducted from both sides of the Atlantic.
•Engage the wide community involved in the Carbon Cycle Science envisaged to attend LPS
•Establish plans for tasks to be undertaken to help resolve issues associated with carbon budget calculations and in particular needs for rapid updating using Earth Observation.
•Establish the mechanisms for engaging communities from both sides of the Atlantic through training schools and exchange visits.
•Develop a community paper dedicated to improving coordination of EO contributions to Regional and Global Carbon Budgets, with a focus on improving EO product latency and on using EO products to provide updates in the periods between the current and future RECCAP exercises, hence providing key datasets on change for RECCAP.

Session Agenda


Introduction to the Carbon Budget Reconciliation Challenge


  • Stephen Plummer

Science Talks


GCB, RECCAP, Insights from TRENDY and the Need for Benchmarking


  • Mike O’Sullivan

Establishing the NRT Budget Scheme


  • Philippe Ciais

The Carbon Cycle Viewed from the US


  • Ben Poulter

NextGenCarbon and CONCERTO


  • Ruben Valbuena / Manuela Balzarolo

EO-LINCS and Data Harvesting for RECCAP


  • Jake Nelson / Sujan Koirala

Reserve Talk – THRAC3E (if project is awarded)


  • TBD

Round Table


Why is the Carbon Budget Reconciliation Needed? What is the Problem?


  • Ben Poulter, Philippe Ciais, Mike O’Sullivan, Sophia Walther, Manuela Balzarolo

Future Directions


Establishment of a Coordinated Approach from EO – Open Discussion



Tuesday 24 June 08:30 - 10:00 (Hall K1)

Session: D.01.01 Collaborative Innovation: building a Digital Twin of the Earth System through Global and Local Partnerships

The concept of a Digital Twin of the Earth System holds immense potential for revolutionizing our understanding and management of our planet. However, building such a complex and comprehensive system requires a global effort. This session explores the power of collaborative innovation in bringing together diverse stakeholders to create a robust and impactful Digital Twin Earth.

In this session, we invite contributions to discuss the following key topics:

- International Collaborations and Global Initiatives
We seek to highlight major international collaborations, such as ESA's Digital Twin Earth and the European Commission's Destination Earth, which exemplify the collective effort needed to develop these advanced systems. Contributions are welcome from successful international projects that demonstrate the potential for global partnerships to significantly advance the development and application of the Digital Twin Earth.

- Public-Private Partnerships (Industry and Academia Collaborations)
We invite discussions on innovative models for funding and resource allocation within public-private partnerships, which are crucial for sustainable development and effective environmental monitoring. Contributions from tech companies and startups that have been instrumental in developing key technologies for the Digital Twin Earth are especially welcome, showcasing the private sector's vital role in this global initiative.

- Local and Community Engagement
Engaging local communities and fostering grassroots initiatives are essential for the success of the Digital Twin Earth. We invite contributions that discuss the role of citizen scientists in data collection, monitoring, and validation efforts. Examples of training and capacity-building programs that empower local communities and organizations to actively participate in and benefit from these advanced technologies are also sought. Additionally, we welcome examples of successful local collaborations that highlight the positive impact of digital twin technologies on environmental monitoring and resilience.

- Multi-Disciplinary Approaches
Addressing the complex challenges of developing a Digital Twin Earth requires a multi-disciplinary approach. We seek contributions that integrate diverse expertise from climate science, data science, urban planning, and public policy to create comprehensive digital twin models. Discussions on developing standards and protocols for interoperability and effective data sharing among stakeholders are critical for holistic problem-solving and are highly encouraged.

- Policy and Governance Frameworks
We invite contributions that explore policy and governance frameworks supporting the development of policies for sustainable development and climate action. Effective governance structures that facilitate collaboration across different levels of government, industry, and academia are crucial. Additionally, we seek discussions on addressing ethical, privacy, and regulatory considerations to ensure the responsible use of digital twin technologies.

By fostering international collaborations, leveraging public-private partnerships, engaging local communities, integrating diverse expertise, and developing robust policy frameworks, this session aims to collectively advance the development of the Digital Twin Earth. This holistic approach ensures that the Digital Twin Earth is not only a technological marvel but also a collaborative, inclusive, and impactful tool for sustainable development and environmental resilience.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall K1)

Presentation: GTIF Austria: Bridging International Developments in Snow Science and Hydrology With Local Decision-Making in the Hydropower Sector Through a Digital Twin Framework.

Authors: Maxim Lamare, Dr Matteo Dall’Amico, Federico Di Paolo, Stefano Tasin, Nicolò Franceschetti, Dr Johannes Schober, Dr Mario Strigl, Dr Gerhard Triebnig, Konstanze Fila
Affiliations: Sinergise Solutions GmbH, Waterjade Srl, TIWAG-Tiroler Wasserkraft AG, EOX IT Services GmbH, FFG (Austrian Research Promotion Agency)
The climate crisis stands as one of the most pressing global challenges we face, profoundly impacting ecosystems, economies, and societies worldwide. In response to this urgent need for climate action and sustainability, ESA launched the Space for a Green Future (S4GF) Accelerator in 2021, aiming to harness Europe’s space innovation to accelerate the Green Transition towards a carbon-neutral, sustainable and resilient society. At the heart of this initiative lie the Green Transition Information Factories (GTIF), which are based on a cloud-based platform infrastructure providing a portfolio of digital, geo-related information services (“GTIF Capabilities”). GTIFs focus on the use of Earth Observation data to empower decision-making for climate change adaptation and ecological change. Building on the developments of the first GTIF demonstrator (https://gtif.esa.int/), the current GTIF-Austria (also referred to as the “Digital Twin of Austria”) initiative is expanding the existing set of capabilities with a focus on the transition to carbon neutrality by 2050. Amongst the numerous “Capabilities” of the GTIF-AT being implemented over the period 2024-2026, the “Energy transition with hydropower” project focuses on enhancing water management and hydropower operations by integrating advanced snowpack data into hydrological forecasting systems, aiming to improve reservoir management, optimise energy production and strengthen flood management. The main service providing information about the snowpack stems from the international ESA-funded Digital Twin Alps demonstrator (https://digitaltwinalps.com/) and produces daily maps of modelled and forecasted snow metrics including snow water equivalent (the amount of water stored in the snowpack), snow depth, snow-covered area and melt rates.
These snow products are then integrated into a short-term and seasonal runoff forecast model, improving knowledge of the water stored in the catchment and, indirectly, of flood risk and energy production potential. While numerous Digital Twin initiatives such as the Digital Twin Alps have been innovative in their technological approaches, they often lack strong connections with stakeholders and fall short in translating their potential into practical, real-world applications. In this GTIF-Austria project, the services developed are being put directly to use by TIWAG, a state-owned electricity generation and distribution company in Tyrol, Austria, to improve its hydrological models, including a flood forecast system jointly used with the state of Tyrol. By directly involving the stakeholder as a partner within the project consortium, we ensure direct alignment between the features of the Digital Twin and their practical applicability. Iteratively developing the information services of the GTIF hand-in-hand with the stakeholder will ensure that the hydrological solutions available in the platform fit the needs of the market, align with operational realities, and deliver tangible value. Furthermore, by leveraging TIWAG’s extensive network within the Alps, the project will expand its reach, fostering collaboration and enabling the adoption of these advanced solutions across the broader hydropower sector.
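As a rough illustration of how gridded snow data can feed a runoff forecast, the sketch below applies a simple degree-day melt assumption to SWE and temperature grids. The array names, parameter values, and the degree-day model itself are illustrative assumptions; they are not the operational Digital Twin Alps or TIWAG model.

```python
import numpy as np

def estimate_melt_runoff(swe_today, temp_today, degree_day_factor=4.0, t_melt=0.0):
    """Estimate daily snowmelt (mm water equivalent) per grid cell with a
    simple degree-day model: melt = DDF * max(T - T_melt, 0), capped by the
    snow water equivalent actually stored in the cell."""
    potential_melt = degree_day_factor * np.maximum(temp_today - t_melt, 0.0)
    melt = np.minimum(potential_melt, swe_today)  # cannot melt more than is stored
    return melt

# Toy 2x2 catchment: SWE in mm, air temperature in deg C
swe = np.array([[120.0, 35.0], [0.0, 2.0]])
temp = np.array([[3.0, 5.0], [6.0, 1.0]])
melt = estimate_melt_runoff(swe, temp)
# Summing melt over the catchment (times cell area) would give the daily
# meltwater volume available as input to a runoff forecast model.
```

In an operational setting the degree-day factor would be calibrated per catchment, and the melt term would enter a full hydrological model rather than a direct sum.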
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall K1)

Presentation: UrbanSquare: An Integrated Climate Risk Assessment Tool for Urban Areas on the Destination Earth Platform

Authors: Fabien Castel, Camille Lainé, Hugo Poupard, Sabrina Outmani, Leon Staerker, Melih Uraz, Mariana Damova, Dr Stanko Stankov, Hristo Hristov, Hermand Pessek, Dr Emil Stoyanov
Affiliations: Murmuration, Sistema, Mozaika
UrbanSquare is an innovative use case funded by ESA to be deployed on the Destination Earth platform, developed by a consortium led by Murmuration in partnership with Sistema, Mozaika, and Imperative Space. Designed to offer urban planners a comprehensive tool to assess and monitor climate risks in urban environments, UrbanSquare provides a holistic view of critical risk factors: urban heat islands, flooding, sea level rise and storm surges, air pollution, infrastructure deterioration, and increased resource demand. The service is built around a modular architecture where each risk theme is addressed by a component developed with a specific demonstrator and end-user acting as the product owner. UrbanSquare operationalizes risk assessment by integrating data from diverse sources, including the Destination Earth Digital Twins, Copernicus datasets, Landsat observations, ESA WorldCover, and additional open or commercial data such as Planet HR imagery, OpenStreetMap, and Eurostat socio-economic data. UrbanSquare is designed to scale by leveraging standardized, globally available datasets and state-of-the-art software that is seamlessly integrated into the Destination Earth System Platform (DESP). While five of the six components are natively integrated within DESP to exploit its data, services, and ICT infrastructure, the flood component stands out as a federated application using the platform’s data and service APIs. A key feature of UrbanSquare is its dual temporal approach, providing not only a retrospective analysis of historical and current data but also forward-looking projections of future what-if scenarios. This functionality empowers urban planners to anticipate climate risks and develop proactive adaptation and mitigation strategies. The initial implementation focuses on local demonstrator sites, where the service can be refined and validated. 
However, the use of globally consistent datasets ensures the tool’s scalability, enabling rapid adaptation to a growing number of deployments worldwide. UrbanSquare thus represents a significant step toward equipping cities with actionable insights for climate resilience, fostering informed decision-making and sustainable urban development in the face of accelerating climate change. The air quality monitoring component utilizes an AI-driven super-resolution model to enhance the spatial resolution of NO₂ concentration data from 10 km (Copernicus Atmosphere data) to 1 km. It integrates various datasets, including meteorological data (ERA5), environmental factors (topography, land cover), and human activity indicators (traffic, population density). The system delivers daily, near-real-time 1-km air quality maps, supporting interactive visualization, data export, and detailed time-series analysis. Users can simulate "what-if" scenarios, such as changes in traffic patterns, urban density, or climate conditions, to assess the potential impacts on air quality. The platform includes features for evaluating influential factors and allows customization according to WHO or national air quality standards. Designed for ease of use, the tool empowers users to monitor, analyze, and project air quality trends for informed decision-making in urban and environmental planning. The urban heat monitoring component provides a heat exposure indicator to identify urban heat islands, utilizing land use and vegetation data. It employs Land Surface Temperature (LST) data from Landsat 8/9, combined with climate change projections from the DestinE platform, to model and project heat wave impacts under various Shared Socioeconomic Pathways (SSPs). Through pixel-wise linear regression, the tool computes LST projections for moderate to extreme heat waves (30°C to 45°C) and calculates the annual frequency of extreme heat days under future climate scenarios. 
It supports interactive visualization, enabling urban planners to compare thermal conditions across neighborhoods, assess the impact of urban development and renovation, and evaluate policy measures such as vegetation management. Designed with an interactive dashboard, it empowers stakeholders to simulate scenarios and make informed decisions for urban climate resilience. The Sea Level Rise and Storm Surges component is built upon long-term mean sea surface height projections and aims to produce a comprehensive depiction of inundation risk in coastal areas, which are particularly vulnerable to climate change. The globally available tool allows users to generate what-if scenarios between 2040 and 2150 by varying SSPs and storm surge heights, which are added to the predicted sea level retrieved from the IPCC dataset (AR6). The latter is integrated with Copernicus Digital Elevation Models (DEMs) and ESA Waterbodies layers to compute inundation maps. Furthermore, datasets from the Copernicus Global Human Settlement and ESA WorldCereals are used to produce exposure assessments in terms of population, built-up surface and cultivated areas affected by the floods. Thanks to the integration with DestinE Climate Change and Adaptation DT data, higher-resolution inundation products can be generated over Europe, providing a better understanding of the impact of sea level rise. The Flood component provides advanced forecasting and simulation capabilities for managing flood risks.
It employs the novel ISME-HYDRO® (http://isme-hydro.com) EO4AI method, which combines Earth observation data on meteorological features relevant to the hydrological status of rivers (precipitation, soil moisture, vegetation index, snow cover) with in-situ measurements, applied to pipelines of neural network architectures that generate forecasts of river discharge and water level, with models predicting hydraulic status up to 30 days ahead; digital elevation data is then introduced to obtain the projected flood plains. To simulate flood scenarios, the tool, integrated into DESP, offers customizable simulations, allowing users to specify water levels, precipitation increases, and flood events for specific locations and timeframes. These simulations generate detailed maps and data tables and visualize their impacts, supporting impact analysis and helping municipalities plan and respond effectively. The ISME-HYDRO® application, based on a complex intelligent e-infrastructure that demonstrates federated integration with DESP (http://destine.isme-hydro.com), helps users predict flood spans, evaluate flood risks, and assess affected land and infrastructure in high-risk areas. With its user-friendly interface, interactive maps, and scenario-building features, the component is essential for policymakers and municipal officials to mitigate flood damages, enhance preparedness, and protect communities and assets under diverse flood conditions. The Resources component is blended into the Flood component by integrating socio-economic data to quantify the potential impacts of climate risks, focusing primarily on floods. For example, it provides insight into the damage caused by floods by identifying and quantifying the infrastructure – buildings and roads – that falls inside the flooded areas, as well as affected agricultural fields and forest areas.
The component supports municipal officials in three key areas: evaluating flood damages, estimating recovery resources, and planning recovery efforts. The Resources component streamlines data analysis for critical decision-making, enabling municipalities to effectively allocate resources and design recovery strategies after flood events. Its goal is to bridge data insights and actionable recovery measures, ensuring efficient response and mitigation. The Infrastructure component aims to produce updated maps of roads from high-resolution satellite images, analysing the land cover along the roads and highlighting areas that need action or restoration. By harnessing the capabilities of the Destination Earth platform and leveraging a robust consortium-driven approach, UrbanSquare aims to make a transformative impact on how urban planners address the multifaceted challenges of climate risks in cities worldwide.
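The core inundation step described for the Sea Level Rise and Storm Surges component, adding a storm surge height to a projected mean sea level and comparing the total water level against a DEM, then overlaying exposure layers, can be sketched as follows. The arrays and values are illustrative only, not the operational DESP workflow or its actual datasets.

```python
import numpy as np

def inundation_mask(dem, projected_sea_level, storm_surge=0.0):
    """Return a boolean mask of cells whose elevation (metres above datum)
    lies at or below the projected sea level plus the storm surge height."""
    total_water_level = projected_sea_level + storm_surge
    return dem <= total_water_level

# Toy 2x2 DEM tile (metres) standing in for a Copernicus DEM subset
dem = np.array([[0.5, 1.2],
                [2.8, 0.1]])
mask = inundation_mask(dem, projected_sea_level=1.0, storm_surge=0.5)

# Exposure assessment: overlay the mask on a population layer
# (in practice: Global Human Settlement, built-up, and cropland layers)
population = np.array([[120, 300],
                       [50, 80]])
people_exposed = int(population[mask].sum())
```

A real implementation would additionally account for hydraulic connectivity to the sea (a simple elevation threshold also floods inland depressions), which is one reason operational tools pair the DEM with water-body layers.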
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall K1)

Presentation: EnvironTwin: A Digital Twin for Environmental Monitoring Project

Authors: Mahtab Niknahad, Simone Tritini, Michele Claus, Roberto Monsorno, Abraham Mejia-Aguilar
Affiliations: Eurac Research, Technician, Senior Researcher, Head of Center, Senior Researcher
The EnvironTwin project seeks to enrich the Environmental Data Platform (EDP) [1] by implementing a Digital Twin (DT) service to represent, model, and forecast key alpine environmental scenarios. EDP was previously developed for the FAIR management of environmental data resources at Eurac Research, to provide stakeholders with actionable insights into ecosystem dynamics, risks, and management strategies. EnvironTwin leverages technologies such as in-situ sensors, proximal sensing, satellite imagery, cloud computing, and advanced data modeling. The project addresses critical challenges posed by human activities and their impacts on agriculture, forestry, and environmental conservation, as well as current technological limitations. Objectives: The project aims to: (i) Commission advanced instrumentation and sensor technology for environmental monitoring and digital shadowing. (ii) Establish and integrate computing infrastructure within the existing Environmental Data Platform. (iii) Implement a Continuous Integration and Continuous Deployment (CI/CD) environment to streamline digital twin creation. (iv) Combine heterogeneous data sources into a unified framework for robust digital twin modeling. Challenges: Mountain and alpine environmental monitoring systems face technological, organizational, and operational limitations. Sudden changes in orography, weather variability, different climate zones, and a wide variety of human activities make it difficult to use a single monitoring strategy. EnvironTwin integrates instrumented systems of different natures, from ground-based to remote sensing approaches. However, one of the primary challenges is acquiring high-quality, heterogeneous data and building an operational infrastructure to effectively integrate these data into simulation models that suggest different possible scenarios in four key alpine use cases: grasslands, forestry, agrivoltaics and natural hazards.
Above all, every use case is followed by a scientific supervisor, making EnvironTwin a unique multi-disciplinary effort. Use Cases: (i) Grassland management detection: Grasslands in South Tyrol are managed by small farming businesses using practices such as grazing, mowing and harvesting, and fertilization. However, every farmer follows a different management strategy, making it challenging to generalize monitoring systems. EnvironTwin evaluates the effectiveness of high-resolution Planet satellite data in capturing spatially and temporally variable management events in South Tyrol. The optimization will be achieved by incorporating Sentinel-2 imagery and webcam-derived reference data, thereby refining the spatial and temporal resolution for agricultural applications. (ii) Forest structural diversity and modeling: Ground- and proximal-sensing-based data collection on forest structural diversity foresees the monitoring of individual trees distributed along a 1500-metre elevation profile. EnvironTwin integrates heterogeneous data sources into forest dynamics models, enabling predictions of forest adaptation and growth under changing climatic conditions. This data supports the creation of a digital twin of forests for current and future scenario adaptation. (iii) Agrivoltaic systems: By modeling the interactions between photovoltaic (PV) systems and agricultural practices, digital twins optimize dual land use for energy generation and crop cultivation. Climatic and weather variability, including droughts and extreme temperatures, are considered to ensure resource efficiency and sustainability. The main objective is to forecast energy and fruit production based on IoT data. (iv) Rock glacier deformation: Traditional methods to monitor rock glacier deformation rely on tracking medium-sized boulders using GPS.
Nevertheless, the integration of proximal sensing (LiDAR, thermal, and RGB) makes it possible to identify hot spots, as well as to monitor rotational and gravitational movement, material deposition, and geological structures at a very small scale. In this use case, these technologies are applied in the Lazaun Senales Valley, Italy, to extract environmental variables such as temperature over time. The collected data is incorporated into a digital twin (DT) model, which provides valuable insights for developing strategies to mitigate and manage natural hazard risks. Pilot Study: Alongside the four use cases planned for the project, a pilot case has been developed to test the infrastructure, which can then be adapted to other application fields. The pilot study is set in the Laimburg field in South Tyrol, Italy, and focuses on predicting soil moisture and precipitation using in-situ sensors, satellite data (temperature, water, and soil moisture indices), and weather station observations. Algorithms such as LightGBM, CNNs, LSTMs, and regression models are applied to analyze data, predict irrigation needs, and enhance water management. Impact: EnvironTwin fosters collaboration among researchers, public authorities, and businesses by creating synergies with predictive modeling experts and showcasing the potential of digital twin technology through the research and development ecosystem. The project’s outcomes include scalable, data-driven tools for managing environmental challenges, validated through interdisciplinary case studies. By demonstrating its applicability across diverse fields, EnvironTwin establishes a foundation for sustainable, resource-efficient environmental management.
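The pilot's prediction step can be illustrated with a minimal sketch. It uses scikit-learn's gradient boosting as a stand-in for LightGBM, and the synthetic features and target relationship are invented for the example; the Laimburg pilot's actual inputs and model configuration are not specified here.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)

# Synthetic daily feature table mimicking the kinds of inputs named in the
# pilot: in-situ readings, satellite indices, and weather station data.
n_days = 200
X = np.column_stack([
    rng.uniform(5, 30, n_days),     # air temperature (deg C)
    rng.uniform(0, 20, n_days),     # precipitation (mm)
    rng.uniform(0.1, 0.9, n_days),  # satellite-derived moisture index
])
# Invented target: soil moisture rising with rain and the moisture index,
# drying slightly with heat, plus measurement noise.
y = 0.2 + 0.02 * X[:, 1] + 0.3 * X[:, 2] - 0.005 * X[:, 0] \
    + rng.normal(0, 0.01, n_days)

# Train on the first 150 days, predict the remaining 50
model = GradientBoostingRegressor(random_state=0).fit(X[:150], y[:150])
pred = model.predict(X[150:])
```

Predicted soil moisture below an irrigation threshold would then trigger the irrigation-need flags the pilot describes; time-series models such as LSTMs would replace the row-wise regressor where temporal dynamics matter.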
The research leading to these results has received funding from the European Regional Development Fund, Operational Programme Investment for jobs and growth ERDF 2021-2027 under Project number ERDF1045 Service for the development of digital twins to predict environmental management scenarios through dynamic reconstruction of the Alpine environment, EnvironTwin. [1] FAIRsharing.org: EDP; Environmental Data Platform, DOI: 10.25504/FAIRsharing.e4268b, Last Edited: Monday, October 9th 2023, 18:54, Last Accessed: Wednesday, November 29th 2023, 10:19
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall K1)

Presentation: How Earth observation, citizen science, automated sensors and models are bringing Lake Geneva to life

Authors: Daniel Odermatt, Abolfazl Irani Rahaghi, Damien Bouffard, James Runnalls, Laurence Haller, Pasche Natacha
Affiliations: Eawag - Swiss Federal Institute of Aquatic Research, Department of Geography, University of Zurich, Eawag - Swiss Federal Institute of Aquatic Research, Limnology Center, École Polytechnique Fédérale de Lausanne (EPFL)
Lakes are fascinating ecosystems with discrete yet permeable spatial boundaries, within which biological, chemical and physical processes overlap. Therefore, lake research is geographically focussed, but thematically diverse. To meet the challenges associated with this interdisciplinarity, we pursue a vision for lake research that utilises and combines various sources of information, including in situ measurements, remote sensing and model simulations. To this end, we have developed a digital infrastructure in the ESA projects CORESIM and AlpLakes, which, in the case of Lake Geneva, can be used as a digital twin for the detailed simulation of limnological processes. A similar yet reduced approach was scaled to more than 80 other lakes in the Alpine region (www.alplakes.eawag.ch). Lake Geneva is regarded as one of the first subjects of limnological research in the nineteenth century, when standing waves were first investigated in the lake. In the second half of the 20th century, like many lakes in Europe, Lake Geneva was subject to pronounced eutrophication. The Franco-Swiss Commission CIPEL was founded in 1962 to curb this development by means of international coordination, and in 1980, civil society organised itself into another cross-border NGO for the protection of Lake Geneva, the Association pour la Sauvegarde du Léman (ASL). Nevertheless, the lake's degree of eutrophication remains high, which occasionally leads to disturbing algal blooms. Added to this is the warming of the lake due to climate change, which is affecting its vertical mixing, and the introduction of invasive species such as quagga mussels (since 2015). This complex interplay of impairments is both hard to understand and difficult to communicate. Using the concept of a digital twin, we aim to improve the scientific understanding of this interplay, and to support the exchange of knowledge between researchers, authorities and civil society.
We thereby depend on the exceptional availability of measurements, model simulations and Earth observation data products for Lake Geneva. Five limnological research institutes jointly installed the LéXPLORE research platform (https://lexplore.info/) off the northern shore of the lake in February 2019, where they are acquiring a unique variety of measurements for all domains of lake research. Earth observation satellite data has been used to support research on Lake Geneva for more than two decades. Today, we benefit from the availability of a variety of continuous LéXPLORE measurements for the validation and interpretation of Earth observation data, including unique hyperspectral measurements of aquatic absorption and scattering that are acquired by a profiler several times a day. The use of operational three-dimensional models provides deeper insights into visible changes on the lake surface and the underlying causes. An extraordinary bloom of golden algae, namely Uroglena sp., in September 2021 provides a vivid application example for the concept of digital twins. Such a bloom had not occurred since 1999 and was hardly expected given declining phosphorus concentrations and the changing climate. Sentinel-2 images show the pronounced spatial heterogeneity of the bloom, due to which representative in situ measurements are rare. Using hydrodynamic simulations, we reconstructed the circulation around the time of the bloom, and the tracking of particles indicated its geographic origin. The analysis of hydrological and meteorological data provided further indications of the combination of external conditions that enabled the bloom, conditions that had not occurred together since 1999. The use of various data and tools has therefore made a decisive contribution to improving process understanding. Over the past year, a lively exchange between science and the public in the Léman region was established with the support of ASL and the University of Lausanne.
We registered 600 participants. We have equipped 250 citizen scientists, who are regularly out on the lake, with Secchi discs to measure transparency. After transmission with a smartphone, their measurements are visualized in a web portal alongside coinciding Sentinel-3 Secchi depth products (https://lemanscope.org/). Among over 2000 measurements, 400 match Sentinel-3 overpasses. Their unique spatial distribution enables an improved estimation of the uncertainties of Sentinel-3 products, in particular their dependence on proximity to the shore. In turn, improved water transparency products from Sentinel-3 can be used to parameterize hydrodynamic models. In regular seminars, we report on visible properties of the lake, such as the strong turbidity caused by inorganic particles in summer 2024, and how they affect the measurements. For other current topics, such as the spread of invasive quagga mussels, we organise lectures by selected specialists. The interdisciplinary use of different information technologies enables new insights in research and is an important tool for communicating with the public. But it is also complex and costly, and scaling it requires collaborative approaches. This is why all the software we use is openly available. In the new EU Interreg Alpine Space project DiMark (https://www.alpine-space.eu/project/dimark/), we promote the further dissemination of our tools for lakes across the entire Alpine region.
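The match-up between citizen Secchi measurements and Sentinel-3 overpasses can be sketched as follows. The timestamps and the 3-hour matching window are invented for illustration; the actual match-up criteria used by the project are not stated in the abstract.

```python
from datetime import datetime, timedelta

def match_observations(citizen_obs, overpass_times, max_delta=timedelta(hours=3)):
    """Pair each citizen Secchi measurement with its nearest satellite
    overpass, keeping only pairs closer in time than max_delta."""
    matches = []
    for obs_time, secchi_m in citizen_obs:
        nearest = min(overpass_times, key=lambda t: abs(t - obs_time))
        if abs(nearest - obs_time) <= max_delta:
            matches.append((obs_time, secchi_m, nearest))
    return matches

obs = [
    (datetime(2024, 7, 1, 10, 30), 5.2),  # 25 min from an overpass -> kept
    (datetime(2024, 7, 2, 18, 0), 4.8),   # no overpass nearby -> dropped
]
overpasses = [datetime(2024, 7, 1, 10, 5), datetime(2024, 7, 3, 10, 5)]
paired = match_observations(obs, overpasses)
```

Match-ups of this kind (roughly 400 out of the 2000 measurements, per the abstract) are the basis for estimating product uncertainties as a function of distance to shore.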
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall K1)

Presentation: Digital twin politics: Unlocking the full potential of digital twins for sustainable ocean futures

Authors: Associate Professor Alice Vadrot, PhD Researcher Carolin Hirt, PhD Researcher Emil Wieringa Hildebrand, PhD Researcher Felix Nütz, PhD Researcher Wenwen Lyu
Affiliations: University of Vienna
The concept of a Digital Twin of the Ocean (DTO) represents a potentially significant leap in advancing ocean knowledge and fostering sustainable action, while also holding the potential to significantly reshape the interface between science and politics: under many multilateral environmental agreements, DTOs can be crucial for supporting intergovernmental efforts to monitor progress towards environmental protection goals, including in the areas of marine biodiversity, deep-seabed mining, fishing, shipping and plastic pollution. Despite rapid technological progress and a rapidly expanding range of potential applications, research into the social and political dimensions of DTOs remains underdeveloped. This gap is particularly concerning, as we argue that DTOs are inherently contested, ambiguous and political: firstly, DTOs risk exacerbating global inequalities, given the unequal capacities to develop, access, and utilize ocean data, information, and DTO models and technologies. Secondly, they introduce a range of legal and political challenges, including uncertainties around data access, ownership, security, and sharing. Thirdly, ensuring the ethical use of DTOs requires a robust framework of norms, rules, and values. All these aspects, we argue, remain often overlooked amid the current “twin rush.” To address these aspects and the overall lack of empirical social science research on the development and use of digital twins, the ERC project TwinPolitics at the University of Vienna re-conceptualizes DTOs as a socio-technical relation shaped by specific institutional, political, and economic conditions within a hybrid environment of research, data, and observation. TwinPolitics seeks to unpack the emergence of so-called “digital twin politics” in international environmental governance by tackling key questions: How and why are DTOs developed by governments and utilized in marine scientific research? How are they designed to inform decision-making?
To what extent are they, or could they be, integrated into multilateral governance? This presentation introduces the project’s innovative methodology to track the development of DTOs across multiple field sites, policy levels, and spatial scales. By addressing this critical research gap, TwinPolitics aims to provide valuable insights into the role of digital practices in contemporary data-driven policymaking, fostering more equitable and responsible implementations of DTOs. TwinPolitics will transform our understanding of data-driven policy making by producing fine-grained analyses and visualisations of the socio-technical making of digital twins, and of how they can serve multilateralism.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall K1)

Presentation: CITYNEXUS: Empowering Sustainable Urban Development through Digital Twin Technology

Authors: Mr Ludovico Lemma, Dr Simone Fratini, Dr Alessandra Feliciotti, Dr Mattia Marconcini, Mr Francesco Asaro, Mr Josselin Stark, Mr Andreas Altenkirch, Dr Claudia Vitolo
Affiliations: MindEarth s.r.l., Solenix Engineering GmbH, European Space Agency
Destination Earth (DestinE), a flagship initiative jointly promoted by the European Commission and the European Space Agency (ESA), is transforming the use of Earth Observation (EO) data to generate actionable insights for sustainable urban planning. As part of the DestinE ecosystem, CITYNEXUS, an advanced urban digital twin use case developed for the city of Copenhagen, has been tailored to address the specific needs and requirements of Copenhagen’s Amager Vest district, which is currently facing a key moment of transition towards healthier and more sustainable mobility. CITYNEXUS aims to address critical urban challenges, including traffic congestion, air pollution, and equitable land use planning, by deploying advanced simulation technologies. The platform integrates cutting-edge AI models with a diverse range of data sources, including Earth Observation (EO) data and ground-based environmental observations, together with High-Frequency Location-Based (HFLB) mobility data, i.e. anonymized, high-frequency geolocation data collected from mobile devices, which provides detailed insights into population movements and traffic patterns, enabling dynamic urban modeling and improved decision-making. To monitor air quality and pollutant emissions, CITYNEXUS exploits Sentinel-5P TROPOMI Level 2 data to track pollutants such as NO₂, CO, O₃, and SO₂, alongside meteorological parameters from ECMWF ERA5 reanalysis data, including temperature, precipitation, wind, and solar radiation. These EO datasets are further enriched with CORINE land cover data and the Copernicus Digital Elevation Model (DEM), facilitating detailed spatial and environmental analyses. Ground-based air quality observations from the Danish Environmental Protection Agency monitoring network provide high-resolution, real-time validation for the EO-derived pollutant data, ensuring robust and actionable insights.
The platform further incorporates historical datasets from Google's Environmental Insights Explorer, collected before the COVID-19 pandemic, which serve as valuable baseline measurements for urban emissions and mobility trends. To dynamically model the relationship between urban mobility and pollutant distribution and intensity, CITYNEXUS integrates a Deep Gravity Model, a sophisticated deep learning framework able to predict origin-destination flows by analyzing the interplay between spatial factors such as population density, land use, and infrastructure connectivity, serving both as a baseline for existing conditions and as a foundation for simulating changes under different scenarios. Complementing this framework is SUMO (Simulation of Urban MObility), which uses the output of the Deep Gravity Models to simulate vehicular traffic patterns, congestion levels, and associated emissions. The combined approach provides a comprehensive understanding of urban dynamics, enabling detailed analysis of traffic and its environmental impact. To ensure accuracy, the models are validated using traffic camera counts from the City of Copenhagen, aligning simulated outputs with observed traffic flows and ground conditions. The platform’s unique strength lies in its ability to empower users to configure and explore “what-if” scenarios tailored to the needs of Copenhagen’s Amager Vest district. Policymakers and urban planners can simulate road closures, introduce tunnels, adjust speed limits, and redefine land use distributions across residential, commercial, and industrial zones. CITYNEXUS also allows users to modify traffic compositions, such as increasing the proportion of bicycles or electric vehicles, and analyze the effects of these changes across different temporal settings, including specific time slots or weekdays versus weekends. 
Outputs from these simulations include pollutant concentrations for NO₂, CO₂, PM₁₀, and PM₂.₅, as well as metrics for traffic congestion, fuel consumption, and noise pollution. These results are presented through dynamic, interactive maps, enabling stakeholders to visualize and compare the impacts of various interventions in a risk-free virtual environment. Explainable AI (XAI) is a cornerstone of CITYNEXUS, enhancing transparency and usability by providing clear explanations for simulation results and actionable recommendations for optimization. In its first stage, the XAI module quantifies the environmental impacts of user-defined modifications, such as the reduction in NO₂ levels resulting from a road closure or speed limit adjustment. In the second stage, it suggests strategies to mitigate adverse outcomes or amplify positive effects, enabling stakeholders to make informed, evidence-based decisions. The XAI module also generates intermediate outputs, such as mobility trajectories, which provide deeper insights into traffic and pollutant dispersion patterns, fostering trust and confidence in the platform’s predictions. Developed in close collaboration with stakeholders, CITYNEXUS reflects the specific needs of the Amager Vest district. This diverse area, characterized by its mix of residential, commercial, and institutional spaces, faces significant challenges from high-speed thoroughfares that disrupt connectivity and exacerbate air quality issues. Working closely with the Local Council of Amager Vest, CITYNEXUS has been instrumental in exploring transformative interventions, such as tunneling major roads and reallocating reclaimed space for green areas or residential development. These initiatives align with Copenhagen’s ambitious goals to achieve carbon neutrality by 2025 while enhancing urban livability and resilience. Beyond its immediate application in Amager Vest, CITYNEXUS demonstrates scalability and adaptability for broader urban contexts. 
Its integration with the DestinE Service Platform (DESP) ensures robust computational performance and seamless data handling, making the platform transferable to other cities facing similar challenges. Engaging with regional and international networks like ICLEI, EUROCITIES, and the Global Covenant of Mayors, CITYNEXUS extends its impact, fostering collaboration and knowledge exchange to drive sustainable urban transformation. Furthermore, its alignment with the EU Mission “100 Climate-Neutral and Smart Cities by 2030” positions CITYNEXUS as a replicable model for cities across Europe and beyond. The technical foundation of CITYNEXUS ensures scientific rigor and operational relevance. Its integration of EO data with deep learning models like the Deep Gravity Model and SUMO enables precise, high-resolution simulations that capture the complex interplay between mobility, infrastructure, and environmental factors. Validation efforts, including cross-referencing outputs with traffic camera data and ground-based air quality observations, further reinforce the platform’s accuracy and reliability. By addressing urban challenges with precision, transparency, and adaptability, CITYNEXUS exemplifies the transformative potential of digital twin technologies in achieving sustainable urban futures. CITYNEXUS represents a paradigm shift in urban planning, combining innovative data integration, advanced modeling, and user-centric design within a collaborative framework. It operationalizes the synergies between the ESA DTE and DestinE ecosystems to address pressing urban challenges, offering a scalable, technically robust, and impactful solution. As a flagship use case, CITYNEXUS highlights the potential of digital twins to empower cities worldwide in their pursuit of sustainability and resilience.
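The mobility modelling chain described above, in which a learned gravity-style model predicts origin-destination flows that then drive a traffic simulator, can be illustrated with a classical gravity baseline. This is a hedged sketch, not the CITYNEXUS implementation: the Deep Gravity Model learns the relationship from many spatial features, whereas the power-law form, the `beta` exponent, and the toy populations and distances below are illustrative assumptions.

```python
import numpy as np

def gravity_od_flows(pop, dist, beta=2.0):
    """Classical gravity baseline: flow[i, j] proportional to
    pop[i] * pop[j] / dist[i, j]**beta, with each origin row
    normalised so its flows sum to 1 (trip shares per origin)."""
    pop = np.asarray(pop, dtype=float)
    dist = np.asarray(dist, dtype=float)
    np.fill_diagonal(dist, np.inf)              # suppress self-flows
    raw = np.outer(pop, pop) / dist ** beta
    return raw / raw.sum(axis=1, keepdims=True)

# Toy districts (populations and km distances are illustrative)
pop = [50_000, 20_000, 80_000]
dist = [[0.0, 3.0, 6.0],
        [3.0, 0.0, 4.0],
        [6.0, 4.0, 0.0]]
flows = gravity_od_flows(pop, dist)
```

In a deep-learning variant, the hand-crafted `pop / dist**beta` term is replaced by a neural network scoring each origin-destination pair from its feature vector, with the same per-origin normalisation.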
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall F1)

Session: C.03.11 Sentinel-1 Mission: Sentinel-1C In-Orbit Commissioning Phase Results and beyond

Sentinel-1 is the space radar observatory of the Copernicus Programme. It is a constellation of two polar-orbiting satellites carrying a C-band synthetic aperture radar as the main payload. The Sentinel-1 mission started in 2014 with the launch of the first unit (the "A" unit), followed by the second unit (the "B" unit) two years later. Sentinel-1 aims to provide free and open radar backscatter over two decades. To this end, the first two units will gradually be replaced by the C and D recurrent units.

The replenishment of the constellation will start in 2024 with the launch of the long-awaited Sentinel-1C unit and will continue in 2025 with Sentinel-1D. Sentinel-1C is expected to be launched in Q4 2024 with the Vega-C Return to Flight. The in-orbit commissioning phase will last four months, with the ambition of operating the spacecraft at its full capacity soon after.

This session will present the activities and results achieved during the commissioning phase in terms of instrument performance, calibration and validation. It will also present the new capabilities offered by the dedicated AIS payload carried by Sentinel-1C. First results from the use of Sentinel-1C during and after the commissioning will also be addressed.

Presentations and speakers:


Return to a 6-Day-Repeat Sentinel-1 Constellation: An Overview of the Sentinel-1C In-Orbit Commissioning


  • Tobias Bollian - ESA

S1C Elevation and Azimuth Pointing Verification during the Commissioning Phase using Data and Antenna Model


  • Beatrice Mai - Aresys

Introduction to the Sentinel-1 AIS Payload; Commissioning and Performance Results


  • Stefan Graham - ESA

InSAR Methods and Preliminary Results for Sentinel-1C In-Orbit Validation


  • Marco Manzoni - PoliMi

DLR’s Independent Calibration of the Sentinel-1C System – First Results from S1C Commissioning Phase Activities


  • Patrick Klenk - DLR
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 1.61/1.62)

Session: F.04.03 Desertification, land degradation and soil management

Desertification and land degradation pose a major threat to food security, ecosystem services and biodiversity conservation. Soil is not a renewable resource when viewed on a time scale of a couple of decades, and it is threatened worldwide by climate change, natural hazards and human activities. The consequences are increased soil loss due to wind and water erosion and landslides, and reduced soil quality due to organic matter loss, contamination and soil sealing. The EU Soil Monitoring Law on the protection and preservation of soils aims to address key soil threats through sustainable soil use and the preservation of soil quality and functions. Space-based Earth observation data, together with in-situ measurements and modelling, can be used in an operational manner by national and international organizations with the mandate to map, monitor and report on soils. With the advent of operational EO systems with a free and open data policy, as well as cloud-based access and processing capabilities, the need for systematic, large-area mapping of topsoil characteristics with high spatial resolution that goes beyond recording degradation processes can be addressed.

We encourage submissions related to the following topics and beyond:
- Advanced earth observation-based products to monitor desertification and land degradation at a large scale
- Specific earth observation-based methods for soil-related topics such as soil parameter mapping and soil erosion mapping, as well as other soil health indicators in different pedo-climatic regions and biomes.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 1.61/1.62)

Presentation: High Resolution Land Degradation Neutrality Monitoring – Achievements of the ESA SEN4LDN Project

Authors: Carolien Toté, Dr. Ruben Van De Kerchove, Daniele Zanaga, Giorgia Milli, Cai Zhanzhang, Lars Eklundh, Katja Berger, Martin Herold, Nandika Tsendbazar, Panpan Xu, Gabriel Daldegan, Marc Paganini
Affiliations: VITO, University of Lund, GFZ, Wageningen University, Conservation International, ESA-ESRIN
The 2030 Agenda for Sustainable Development is fundamentally based on 17 Sustainable Development Goals (SDGs) which are targets agreed upon by the UN members regarding various interlinked objectives that must be ensured to achieve sustainable development. Diminished overall productivity and reduced resilience in the face of climate and environmental change have made addressing land degradation a global priority formalized by the United Nations Convention to Combat Desertification (UNCCD) and the SDGs. To this end, the 2030 Agenda for Sustainable Development defined target 15.3 of SDG 15, called ‘Life on Land’, that strives to reach Land Degradation Neutrality (LDN) by 2030. Efficient monitoring of Land Degradation (LD) requires constant monitoring of various biophysical and biochemical characteristics of the land. These disturbances range from rapid land cover change (e.g., fire or logging) to continuous and slower degradation of soil and land quality. While monitoring these at larger scale becomes a logistical impossibility if not using Earth Observation (EO) data, there are still several challenges and opportunities to address, particularly related to increasing spatial and temporal resolution and diversity of sensor types. The European Space Agency (ESA) Sentinels for Land Degradation Neutrality (SEN4LDN) project aimed to address these two limitations by developing and showcasing a novel approach for improving both the spatial and temporal resolution of the data required for LD monitoring. While LDN is agreed between the SDG signatories, each region/country has its own specific challenges and drivers of LD and therefore the inclusion of local partners in the product development was extremely important. Therefore, SEN4LDN engaged with three pilot countries – Colombia, Uganda and Portugal – to participate in the project as early adopters. 
These stakeholders provided insights on the user requirements and feedback on the final product and its actual usability for SDG 15.3.1 reporting and have been actively engaged in the project through three iterative rounds of Living Labs. The SEN4LDN national demonstration products consist of a series of output products on the three sub-indicators of land degradation as defined by the UNCCD – trends in land cover, trends in land productivity, and trends in carbon stocks – and a combined integrated LDN indicator. Trends in land cover between 2018 and 2023 are evaluated based on an automated algorithm to map land cover dynamics at 10 m resolution that combines deep learning and a pixel classifier on pre-processed Sentinel-2 imagery and ancillary input layers. Post-processing is performed to mitigate class fluctuations, resulting in consistent annual land cover maps. Land cover probabilities are used to generate land cover transition (probability) layers, which are further processed to discrete and continuous land cover degradation products. The land cover and land cover transition maps were validated against independent reference datasets. Overall accuracies of the land cover map in the three demonstration countries ranged between 69.6%±5.5% and 90.1%±3.4%, and the LC transition map achieved an overall accuracy of 73.7%, validated with a ground reference dataset collected by Ugandan experts. To evaluate trends in land productivity, the seasonal accumulated production of green biomass is estimated from a Sentinel-2 derived index, which is an indicator for photosynthetic activity and overall ecosystem functionality. The trend of vegetation productivity is estimated for the period 2018-2023 at 10 m spatial resolution. The performance of vegetation productivity is based on comparison of the local productivity to similar land units over a larger area. Discrete and continuous land productivity degradation maps are generated based on the combination of the former two. 
Validation of these products is based on internal consistency analysis and indirect validation with external datasets. The concept of carbon stocks in terms of LDN assessments is primarily related to the soil carbon pool and related changes. However, since soil organic carbon (SOC) stock change estimates from remote sensing are not readily available (yet), SEN4LDN explored the use of above-ground biomass (AGB) changes as a proxy for carbon stock changes to provide an estimate independent of the other two sub-indicators. Two approaches were combined to quantify trends in carbon stocks: a stock change approach based on ESA CCI biomass maps, and a gain-loss approach based on a carbon flux model. Results from the hybrid approach, estimating AGB evolution between 2010 and 2018/20 at 100 m spatial resolution, have been generated for the three countries as a feasibility assessment. Finally, the outputs of the trends in land cover and trends in land productivity sub-indicators are integrated to generate a product that allows calculating the extent of land degradation for reporting on UN SDG indicator 15.3.1, expressed as the proportion (percentage) of land that is degraded over total land area. In SEN4LDN two methods were tested: (i) the so-called one-out-all-out method, in which a significant reduction or negative change in any one of the sub-indicators is considered to constitute land degradation, and (ii) a continuous sub-indicator integration method that combines the continuous land cover degradation and land productivity degradation products into a continuous land degradation probability index. This allows for a more in-depth interpretation of the combined product, including an assessment of the magnitude or probability of degradation and restoration, and for an interpretation of possible contrasting sub-indicators. SEN4LDN has shown that there is great interest in LDN indicators at high spatial and high temporal frequency, and that these can be delivered from Sentinel input data. 
Country engagement and interactions through various rounds of Living Labs have been essential and instrumental to understand the potential and limitations of the different datasets generated. All three data streams (land cover/change, productivity, carbon stocks) have been found useful and should now be operationalized and integrated with national reference data in LDN monitoring frameworks.
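As a minimal illustration of the first integration method described above, the UNCCD one-out-all-out rule can be sketched as follows. The {-1, 0, +1} coding of the sub-indicators and the array layout are assumptions made for this example, not the SEN4LDN data format.

```python
import numpy as np

def one_out_all_out(land_cover, productivity, carbon):
    """UNCCD one-out-all-out rule: a pixel counts as degraded (True)
    if ANY sub-indicator shows negative change. Sub-indicators are
    coded -1 (negative), 0 (stable), +1 (positive) per pixel."""
    stacked = np.stack([land_cover, productivity, carbon])
    return (stacked == -1).any(axis=0)

# Four illustrative pixels
lc   = np.array([0, -1,  1, 0])
prod = np.array([0,  0, -1, 1])
carb = np.array([0,  0,  0, 0])
degraded = one_out_all_out(lc, prod, carb)   # → [False, True, True, False]
```

The continuous alternative mentioned in the abstract would instead combine per-pixel degradation probabilities, preserving information about magnitude rather than collapsing to a binary verdict.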
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 1.61/1.62)

Presentation: Enhancements to the European soil organic carbon monitoring system Worldsoils

Authors: Prof. Dr. Bas van Wesemael, Dr. Asmaa Abdelbaki, Prof. Dr. Eyal Ben-Dor, Prof. Dr. Sabine Chabrillat, Dr. Pablo d'Angelo, Prof. Dr Jose A.M. Dematte, Dr. Giulio Genova, Dr. Asa Gholizadeh, Dr. Uta Heiden, Dr. Paul Karlshoefer, Dr. Robert Milewski, Dr. Laura Poggio, Dr. Marmar Sabetizadeh, Adrián Sanz, Dr. Peter Schwind, Dr. Nikolaos Tsakiridis, Prof. Dr. Nikolaos Tziolas, Dr. Julia Yagüe Ballester, Dr. Daniel Žižala
Affiliations: GMV Aerospace and Defence S.A.U., Earth and Life Institute, Université Catholique de Louvain, Helmholtz Centre Potsdam GFZ German Research Centre for Geosciences, Fayoum University, Tel Aviv University. Porter School of Environment and Earth Science, Leibniz University of Hannover, Institute of Earth System Science, Department of Soil Science, German Aerospace Center (DLR), Remote Sensing Technology Institute (IMF), University of São Paulo. Luiz de Queiroz College of Agriculture, Department of Soil Science, University of Life Sciences Prague, Aristotle University of Thessaloniki. Laboratory of Remote Sensing, Spectroscopy, and GIS, Department of Agriculture, University of Florida. Southwest Florida Research and Education Center, Department of Soil, Water and Ecosystem Sciences, Institute of Food and Agricultural Sciences, ISRIC - World Soil Information
The EU Soil Monitoring Law, proposed on July 5, 2023, incorporates the use of Copernicus data to enhance the monitoring of Soil Organic Carbon (SOC). The law mandates that Member States utilize satellite data from the Copernicus program to complement ground-based measurements, thus providing a more comprehensive and accurate assessment of the SOC/clay ratio across regions, ensuring consistent and reliable data for soil health monitoring. In this context, the ESA funded Worldsoils project (Contract No. 400131273/20/I-NB, 2020-24) has developed a pre-operational SOC monitoring system in a cloud environment capable of: (i) predicting topsoil organic carbon content at regional and continental scales from Earth Observation (EO) satellite data with a continuous cover over Europe, (ii) leveraging upon multitemporal soil-spectral data archives and modelling techniques and, (iii) consulting with end users and EO experts for developing soil indices relevant for monitoring topsoil. This abstract summarizes the results obtained in the first version of the system and presents the enhancements to be implemented in the second version, during 2025. Soil/land cover types analysed included croplands, grasslands and forests. The system utilized spectral models for croplands and a digital soil mapping approach for permanently vegetated areas (i.e. grasslands and forests, although forests were not included in the independent validation). Models strongly rely on soil reflectance composites from the Sentinel-2 multispectral instrument. The composites provide the median reflectance for all valid pixels over a period of three years in the main growing season (from March to October). The bare soil frequency, a proxy for the degree of crop cover during the growing season, is lower in Mediterranean regions, with extensive cover of winter cereals and fodder crops during the growing season. 
Key outcomes of Worldsoils v1 include:
• A graphical user interface (https://gui.world-soils.com/) that provides the SOC content and the 90% uncertainty ratio for 50 m pixels in three pilot regions (Wallonia, Central Macedonia and the Czech Republic) and 100 m pixels for the rest of Europe.
• Evidence that the SOC prediction remains stable, as expected for the short three-year period.
• The reasonably good performance of the SOC prediction algorithms compared to others at continental scale (R²: 0.41 for croplands and 0.28 for permanently vegetated areas).
• Accurate attribution of pixels to one of the two SOC prediction models (i.e. spectral vs digital soil mapping), except for tree crops in Macedonia (Mediterranean regions).
• Evaluation of predicted SOC contents against independent datasets from the National Reporting Centers on Soils in the three pilot regions.
• A satisfactory evaluation of the SOC prediction in Wallonia (Belgium; R² 0.51), hindered, however, by the limited SOC range in croplands in Greece and the Czech Republic.
• A lower bare soil frequency in Greece because of abundant tree crops, cereals and fodder crops.
• Reproduction by the monitoring system of spatial patterns in SOC content similar to those obtained from detailed regional algorithms using new-generation hyperspectral satellites.
Worldsoils v1 SOC prediction results and their evaluation over the land covers tested in Europe, although promising, call for expansion to a global scale in an enhanced version, Worldsoils v2. 
For this reason, during 2025 the following objectives are sought: (i) to improve the models and SOC products over extended areas in America, Africa and Asia; (ii) to provide a SOC/clay ratio model over Europe that leverages satellite data; (iii) to achieve the production and validation of v2 Worldsoils SOC maps; (iv) to transfer the results, tools and algorithms to ESA’s Application Propagation Environment (APEx); and (v) to import the soil compositing processor into an APEx-compliant service implementation. It is foreseen that at the time of the ESA LPS 2025, the team can report on Worldsoils v1 results and v2 specifications of indexes, models, data and system design, as well as the implementation of the system, i.e. coding of indexes and models, training and testing results, cloud system adjustments and improvements to the graphical user interface.
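The soil reflectance compositing underlying these models (per-pixel median over the scenes in which the pixel is bare, plus a bare-soil frequency used as a crop-cover proxy) can be sketched as below. This is a simplified, single-band illustration under assumed array shapes and an assumed bare-soil mask, not the project's compositing processor.

```python
import numpy as np

def soil_reflectance_composite(stack, bare_mask):
    """Per-pixel median reflectance over the scenes where the pixel
    is bare, plus the bare-soil frequency (fraction of scenes bare),
    which serves as a proxy for the degree of crop cover.
    stack: (scenes, rows, cols) reflectance; bare_mask: same shape, bool."""
    masked = np.where(bare_mask, stack, np.nan)   # hide non-bare observations
    composite = np.nanmedian(masked, axis=0)
    frequency = bare_mask.mean(axis=0)
    return composite, frequency

# Three scenes of a 1x2 pixel tile (single band, illustrative values)
stack = np.array([[[0.2, 0.3]],
                  [[0.4, 0.5]],
                  [[0.6, 0.7]]])
bare = np.array([[[True, False]],
                 [[True, True]],
                 [[False, True]]])
composite, frequency = soil_reflectance_composite(stack, bare)
```

In the operational setting the stack would span three years of March-October Sentinel-2 acquisitions per band, and pixels never observed bare would remain NaN in the composite.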
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 1.61/1.62)

Presentation: Leveraging Artificial Intelligence and Earth Observation for Accessible, Predictive Soil Management Insights

Authors: Nikos Tziolas, Anastasia Kritharoula, Giannis Gallios
Affiliations: Department of Soil, Water and Ecosystem Sciences, Institute of Food and Agricultural Sciences, University of Florida
As agricultural systems face growing anthropogenic and environmental pressures, effective soil management is essential for sustainable food production. Recent advancements in digital soil mapping, primarily driven by the integration of cloud computing and Artificial Intelligence (AI) in Earth Observation (EO) data analysis, have opened new possibilities. These innovations have significantly improved the efficiency and accuracy of monitoring soil health, providing better tools for managing and preserving soil resources in response to environmental challenges. However, there are fundamental limitations in current processes. Despite the availability of several spatial products, users often find it difficult to access these maps for informed decision-making related to soil management. This is primarily due to several factors: non-experts often struggle to interact effectively with complex geospatial systems, which require specialized knowledge; access to EO data and generated products can be slow and cumbersome; and many existing tools lack user-friendly interfaces that facilitate quick and intuitive exploration of geospatial information. In this work, we introduce GAIA (Geospatial Artificial Intelligence Analysis), a cutting-edge AI conversational platform developed to simplify and enhance soil management through a chat-based interface. GAIA integrates multispectral Sentinel-2 data with AI-driven predictive models, using convolutional neural networks to estimate key soil properties like Soil Organic Carbon (SOC), pH, cation exchange capacity (CEC), and clay content. Subsequently, the proposed approach focuses on utilizing Large Language Models (LLMs) to transform general stakeholder inquiries about soil health into specific, actionable research questions. By fine-tuning LLMs, we aim to automate the interpretation of natural language queries into technical questions related to soil health indicators, enhancing geospatial data retrieval. 
This process allows stakeholders, like growers, to easily access and analyze soil health data without deep technical knowledge. The system has been developed through training open-source LLMs, integrating expert guidelines, and enhancing the GAIA platform for efficient, real-time data retrieval. For instance, a grower might inquire, “I want to monitor the health state of my soil in my field to optimize crop production,” and the system would respond by presenting maps and statistics of key soil properties. The GAIA system provides actionable insights without requiring users to have specialized technical knowledge. By leveraging AI’s predictive capabilities, it enhances the accuracy of soil health assessments and simplifies the interpretation of complex geospatial data through natural language queries, making the process more accessible and efficient for non-experts. GAIA's early findings demonstrate its ability to optimize soil sampling and deliver real-time insights on soil conditions, enhancing decision-making in agriculture. Florida has been selected as a demonstration site, showcasing the system's practical applications. Its user-friendly chat interface could empower farmers, land managers, and policymakers to access critical data swiftly, allowing timely responses to environmental challenges. By integrating AI with geospatial data, GAIA bridges the gap between complex analytics and practical use, supporting sustainable farming practices and revolutionizing how stakeholders interact with soil health data.
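A hypothetical sketch of the query-routing step described above: the real platform fine-tunes an LLM to map free-text grower questions to soil-health retrievals, whereas this toy router uses a keyword table. All property names and keywords below are illustrative assumptions, not GAIA's vocabulary.

```python
# Purely illustrative keyword router standing in for the fine-tuned LLM:
# it maps a free-text question to the soil properties worth retrieving.
PROPERTY_KEYWORDS = {
    "soc":  ["carbon", "organic matter", "soc"],
    "ph":   ["ph", "acid", "lime"],
    "cec":  ["cec", "cation exchange", "nutrient"],
    "clay": ["clay", "texture"],
}

def route_query(question):
    """Return the soil properties matched by naive substring search;
    fall back to the full soil-health panel when nothing matches."""
    q = question.lower()
    hits = [prop for prop, words in PROPERTY_KEYWORDS.items()
            if any(w in q for w in words)]
    return hits or list(PROPERTY_KEYWORDS)

route_query("Is my field too acidic for citrus?")   # matches 'acid' → ['ph']
```

An LLM-based router replaces the keyword table with a model that also handles paraphrase and context ("nutrient holding capacity" → CEC), which is precisely what the fine-tuning step targets.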
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 1.61/1.62)

Presentation: Earth Observation as a Tool for Monitoring and Reporting on SDG Indicator 15.3.1

Authors: Brian O’Connor, Coleen Carranza, Sara Minelli, Barron
Affiliations: United Nations Convention to Combat Desertification
The United Nations Convention to Combat Desertification (UNCCD) is the custodian agency for SDG Indicator 15.3.1 defined as the proportion of degraded land over total land area. SDG Indicator 15.3.1 is the sole indicator used to measure progress towards SDG Target 15.3 which strives to achieve a land degradation-neutral world by 2030. UNCCD continues to be a significant ‘consumer’ and advocate of Earth Observation (EO) data for sustainable development and land. Since the launch of the SDG Indicator 15.3.1 reporting process in 2018, the UNCCD has accumulated valuable insights into the use of EO data at the science-policy interface both in terms of the opportunities afforded and where countries experience the greatest challenges. The UNCCD is also a broker of EO data for its 196 country Parties to access free and open global data sources on the three sub-indicators of SDG Indicator 15.3.1: trends in land cover, trends in land productivity or functioning of the land and trends in carbon stocks above and below ground. Through provision of free and open global EO data for these sub-indicators as ‘default’ data for national reporting by the UNCCD, countries can report even when access to national datasets is limited. This ‘default data’ model has been very successful with 115 national estimates of degraded land reported in 2022 and 118 reported in 2018. 
In this presentation, UNCCD will focus on the following key aspects of Earth Observation as a tool for monitoring and reporting on SDG Indicator 15.3.1: (i) how EO has been a game changer for informing evidence-based policy messages on the status and trends in land degradation globally and regionally; (ii) how the use of EO is challenging in certain regional geographies, hampering the ability of countries in hyper-arid regions and small island developing states in particular to report effectively; (iii) challenges faced by countries in using EO data at the national level; and (iv) recommendations to EO data providers to enable more countries to report on SDG Indicator 15.3.1, set evidence-based land degradation neutrality voluntary targets and track progress towards their achievement.
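The indicator itself is a simple ratio of degraded to total land area; as a worked illustration (the area values are made up), SDG 15.3.1 can be computed as:

```python
def sdg_15_3_1(degraded_area, total_area):
    """SDG Indicator 15.3.1: proportion of land that is degraded over
    total land area, expressed as a percentage (any consistent unit)."""
    if total_area <= 0:
        raise ValueError("total land area must be positive")
    return 100.0 * degraded_area / total_area

share = sdg_15_3_1(24_500, 98_000)   # illustrative km² values → 25.0 %
```

The complexity reported on in this session lies not in this arithmetic but in deriving the degraded-area term from the three EO sub-indicators consistently across countries.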
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 1.61/1.62)

Presentation: Advancing Soil Organic Carbon Monitoring and Modeling with Hyperspectral Earth Observation: Insights for Policy and Practice

Authors: Robert Milewski, Kathrin J Ward, Asmaa Abdelbaki, Pia Gottschalk, Marta Gómez Giménez, David de la Fuente Blanco, Asa Gholizadeh, Judith Walter, Robert Müller, Albrecht Bauriegel, Sabine Chabrillat
Affiliations: GFZ Helmholtz Centre for Geosciences, GMV Aerospace and Defence, Czech University of Life Sciences Prague, Landesamt für Bergbau, Geologie und Rohstoffe (LBGR), Dezernat Bodengeologie, GFZ Helmholtz Centre for Geosciences & Leibniz University Hannover
Soils are essential for food production and ecosystem services, storing approximately 30% of global terrestrial carbon. Accurate mapping and monitoring of soil properties are critical for assessing soil quality and achieving the goals of policies like the European Directive on Soil Monitoring and Resilience, which aims for 100% healthy soils in Europe by 2050. Among the key indicators suggested by the EU Soil Monitoring Law is the ratio of soil organic carbon (SOC) to soil clay content, which relates the carbon present in the soil to its storage potential. This metric is particularly relevant for addressing soil degradation in agricultural contexts. Spectroscopy in the VNIR-SWIR (400–2500 nm) spectral range has proven highly effective for the precise determination of SOC and clay content, utilizing narrow absorption features that are diagnostic of clay minerals. This approach also overcomes the limitations of broadband sensors by improving the differentiation of complex soil compositions and surface conditions, such as soil moisture, vegetation cover, and soil sealing. Current hyperspectral spaceborne sensors like EnMAP and PRISMA offer a unique opportunity to test and refine soil mapping capabilities, paving the way for global soil monitoring with forthcoming missions like ESA’s Copernicus Hyperspectral Imaging Mission for the Environment (CHIME) and NASA/JPL’s SBG. This research leverages advanced hyperspectral capabilities to explore innovative methods for SOC monitoring and modeling, supporting evidence-based policymaking. Under the EU-funded MRV4SOC project, hyperspectral data from EnMAP, PRISMA, and airborne HySpex sensors (plane and UAV) are utilized to estimate SOC, clay content, and carbon stocks for a demonstration site in NE Germany (Demmin). 
By integrating multitemporal hyperspectral soil property mapping and Sentinel-2 time-series data used in an agro-crop model and the RothC carbon process model, the research delivers accurate assessments of SOC/clay ratios and insights into carbon dynamics influenced by climate conditions and farming practices. Seasonal carbon inputs, derived from end-of-season dry biomass estimates using Sentinel-2 data, enable dynamic evaluations of land management practices, such as cover cropping and low-tillage farming. For the state of Brandenburg, SOC and soil texture maps were generated in collaboration with the Landesamt für Bergbau, Geologie und Rohstoffe Brandenburg (LBGR). This effort, based on over 170 cloud-free EnMAP scenes collected over 2.5 years, produced high-resolution maps that are vital for agricultural and environmental management. Meanwhile, hyperspectral UAV surveys provided finer-scale insights into field heterogeneity, offering a complementary perspective to satellite EO data. These case studies underscore the operational potential of hyperspectral EO missions in delivering actionable insights into soil health and carbon dynamics. They also highlight the critical synergies between current missions like Sentinel-2, EnMAP, and PRISMA, and future missions like CHIME, which promise enhanced spectral, spatial, and temporal resolutions. This research emphasizes the transformative role of hyperspectral EO data in bridging the gap between science and policy. By refining methodologies and generating robust indicators of soil degradation, it directly supports EU environmental initiatives such as the Soil Monitoring Law and the Common Agricultural Policy. The integration of hyperspectral data into advanced models and decision-support tools showcases the indispensable role of EO in monitoring and mitigating environmental degradation.
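The SOC/clay ratio indicator discussed above is straightforward to compute once per-pixel SOC and clay estimates exist. The sketch below is illustrative only; the ~1/13 value mentioned in the comment is a threshold sometimes cited in the EU soil-health discussion, offered here as context rather than as this project's choice.

```python
import numpy as np

def soc_clay_ratio(soc, clay):
    """Element-wise SOC/clay ratio (both inputs in the same units,
    e.g. g/kg); pixels with zero clay are returned as NaN. Values
    below roughly 1/13 are sometimes read as degraded structural
    condition (illustrative threshold, not asserted here)."""
    soc = np.asarray(soc, dtype=float)
    clay = np.asarray(clay, dtype=float)
    return np.divide(soc, clay,
                     out=np.full_like(soc, np.nan),
                     where=clay > 0)

ratio = soc_clay_ratio([20.0, 5.0], [260.0, 0.0])   # illustrative g/kg values
```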

Tuesday 24 June 08:30 - 10:00 (Room 1.61/1.62)

Presentation: High resolution soil property maps and their uncertainty for Europe

Authors: Dr. Laura Poggio, Dr. Uta Heiden, Dr. Pablo Angelo, Dr. Paul Karlshoefer, Fenny van Egmond
Affiliations: ISRIC - World Soil Information, German Aerospace Center
High-resolution, reliable soil data is crucial for addressing climate change and sustainable land management. Integrating high resolution remote sensing data, such as from Copernicus Sentinel, is essential for improving accuracy and relevance. This study presents an overview of our Digital Soil Mapping (DSM) approach and its innovations. We combine satellite imagery, environmental covariates (e.g., elevation, weather data), and ground truth observations (e.g., LUCAS and other European and national datasets) to create high-resolution soil property maps using statistical models. These maps encompass primary properties (e.g., organic carbon, pH, texture), derived properties, and soil health indicators. We used the Soil Composite Mapping Processor (SCMaP) to derive soil reflectance composites from Sentinel-2 time series. These composites aid in identifying bare soil areas and estimating their spectral reflectance, spectral dynamics and frequency of occurrence, serving as a proxy for land management. Random Forest models, in particular Quantile Random Forests for uncertainty assessment, are employed to predict soil properties. This study delves into the advantages and challenges of using high-resolution remote sensing data with limited ground truth data. We also provide insights into product uncertainty assessment at a continental scale, including accuracy, spatial patterns, and user evaluation. We focus in particular on the relevance of finer resolution and accuracy for continental products.
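As an illustration of the Quantile Random Forest idea used here for uncertainty assessment, per-pixel prediction intervals can be approximated by taking quantiles over the individual tree predictions of an ensemble. The sketch below uses synthetic numbers, not data from this study, and simplifies the full leaf-distribution formulation of Quantile Regression Forests:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical: predictions of each of 100 trees in a random-forest
# ensemble for 5 map pixels (e.g. SOC in g/kg). A true Quantile
# Regression Forest uses the empirical distribution stored in the
# leaves; quantiles over per-tree predictions are a common shortcut.
tree_preds = rng.normal(loc=15.0, scale=2.0, size=(100, 5))

mean_map = tree_preds.mean(axis=0)                           # central prediction
q05, q95 = np.quantile(tree_preds, [0.05, 0.95], axis=0)     # 90% interval
interval_width = q95 - q05                                   # per-pixel uncertainty
```

The interval width gives a per-pixel uncertainty layer that can be published alongside the property map itself.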

Tuesday 24 June 08:30 - 10:00 (Hall E1)

Session: C.02.07 FORUM - ESA's 9th Earth Explorer

The FORUM mission will improve the understanding of our climate system by supplying, for the first time, most of the spectral features of the far-infrared contribution to the Earth’s outgoing longwave radiation, particularly focusing on water vapour, cirrus cloud properties, and ice/snow surface emissivity. FORUM’s main payload is a Fourier transform spectrometer designed to provide a benchmark top-of-atmosphere emission spectrum in the 100 to 1600 cm-1 (i.e. 6.25 to 100 µm) spectral region, filling the observational gap in the far-infrared (100 to 667 cm-1, i.e. from 15 to 100 µm), which has never been observed from space, spectrally resolved, and in its entirety. The focus of this session is on the scientific developments in the frame of this mission and the outlook into the future.


Tuesday 24 June 08:30 - 10:00 (Hall E1)

Presentation: FORUM Science: Current status and future plans

Authors: Prof. Helen Brindley
Affiliations: Imperial College London, National Centre for Earth Observation
The far infrared (far-ir: defined here as wavelengths between 15 - 100 μm) plays a pivotal role in determining the Earth’s energy balance with, in the global mean, approximately half of our planet's emission to space occurring within this wavelength range. Despite this, the Earth’s outgoing far-ir radiation spectrum has never been systematically measured from space. ESA’s 9th Earth Explorer, the Far-infrared Outgoing Radiation Understanding and Monitoring (FORUM) mission will change this, opening a new window on climate science by measuring, for the first time, the Earth’s outgoing energy spectrum across the far-ir with high spectral resolution and unprecedented radiometric accuracy. The dominant role of the far-ir in determining the Earth’s Outgoing Longwave Radiation (OLR) is in part due to the strong water vapour rotation band at wavelengths > 16.5 μm. This in turn means that radiative emission in the far-ir is particularly sensitive to water vapour in the climatically important upper troposphere/lower stratosphere (UTLS) region. Similarly, clear-sky longwave radiative cooling through the mid and upper troposphere is dominated by the contribution from the far-ir. FORUM observations offer the potential to both improve our knowledge of UTLS water vapour concentrations and couple them to their associated radiative impact. While for much of the globe this water vapour absorption means that far-ir surface emission cannot be sensed from space, in dry, clear-sky conditions this is no longer the case. As water vapour concentrations reduce, micro-windows in the far-ir become progressively more transmissive such that the surface emission in these regions of the spectrum can propagate to space. Recent studies show that properly accounting for the contribution of surface emissivity in the far-ir may be critical to both reduce persistent climate model biases and determine the pace of high-latitude climate change. 
Moreover, ice clouds, crucial players in determining current and future climate, have emitting temperatures that place the peak of their radiative emission within the far-ir. Our ability to correctly simulate the interaction of the radiation spectrum with ice cloud relies on our capability to adequately represent their macrophysical and microphysical properties. The latter are critically dependent on the complex ice-crystal shapes and their size distributions within the clouds. Recent advances in ice cloud optical modelling have attempted to capture their bulk microphysical properties spanning the entire electromagnetic spectrum. However, while there are many space-based observations of the reflected visible and emitted near- and mid-infrared radiation in the presence of ice clouds that can be exploited to test these developments, there are no such observations that span the far-ir. This represents a major barrier to improving our confidence in our ability to understand and monitor ice cloud properties and their interaction with the Earth’s outgoing longwave energy, particularly since the contrast in ice and water refractive indices between the far-ir and mid-infrared implies that unique information relating to cloud classification and microphysics can be leveraged from measurements of the far-ir spectrum. Dedicated studies are ongoing to understand exactly how FORUM can benefit these areas and to prepare the tools needed to fully exploit the observations. In this talk I will summarise these efforts, providing a high-level overview of recent and planned scientific activities in support of the mission.
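The statement that roughly half of the Earth's emission to space falls in the far-ir (beyond 15 μm) can be checked with a short numerical integration of the Planck function; 255 K is an assumed representative effective emission temperature, not a value from the abstract:

```python
import numpy as np

h = 6.62607015e-34      # Planck constant, J s
c = 2.99792458e8        # speed of light, m/s
kB = 1.380649e-23       # Boltzmann constant, J/K
sigma = 5.670374419e-8  # Stefan-Boltzmann constant, W m-2 K-4

def planck_lambda(lam, T):
    """Blackbody spectral radiance B_lambda(T), W m-2 sr-1 m-1."""
    return (2.0 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))

T = 255.0  # K, assumed effective emission temperature for Earth
lam = np.linspace(15e-6, 1000e-6, 200_000)  # far-ir: 15 um out to 1 mm
rad = planck_lambda(lam, T)
# Trapezoidal integration; the factor pi converts radiance to hemispheric flux.
far_ir_flux = np.pi * np.sum(0.5 * (rad[1:] + rad[:-1]) * np.diff(lam))
fraction = far_ir_flux / (sigma * T**4)  # expect roughly 0.5
```

The truncation at 1 mm loses a negligible fraction of the emitted power at this temperature.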

Tuesday 24 June 08:30 - 10:00 (Hall E1)

Presentation: Principal Component Analysis of Infrared Spectra for the Evaluation of Climate Model’s Variability: Application to IASI and ARPEGE-CLIMAT

Authors: Lucie Leonarski, Pascal Prunet, Claude Camy-Peyret, Sarah Pipien, Quentin Libois, Romain Roehrig
Affiliations: Spascia, Institut Pierre-Simon Laplace, Centre National de Recherches Météorologiques / Météo-France
Climate models are key tools for understanding the past and present climate and for performing climate projections. For decades, these models have been validated against broadband measurements of the radiative budget from space, for instance with the Clouds and the Earth’s Radiant Energy System (CERES) instrument. While such broadband measurements have proven useful, they can potentially hide spectral error compensation. To avoid this, spectrally resolved radiative fluxes can be used. Spectrally resolved spectra, especially in the infrared (IR), contain valuable information about the climate system, such as the spatial and temporal variability of temperature, water vapour, clouds and other atmospheric constituents, that can be used to investigate deficiencies in climate models through systematic model/observation comparison. Such measurements can be obtained from spaceborne instruments like the Infrared Atmospheric Sounding Interferometer (IASI) series, which measures the infrared spectrum between 645 and 2760 cm-1 with high spectral resolution, and will be complemented by the future FORUM satellite mission (100-1600 cm-1). Comparing high-resolution spectra can be complex. However, the increasing amount of satellite data used by the Earth observation community has raised interest in compression methods based on Principal Component Analysis (PCA). Beyond data compression, this statistical method has also been successfully employed for noise reduction, instrument artefact removal, and the detection and monitoring of extreme atmospheric events such as fires, volcanic eruptions and pollution episodes. By putting forward the principal directions of variability (the eigenvectors), PCA provides an objective tool for analysing climate variability. In this study, we have used PCA to compare the spatial and temporal variability of the IR spectra modeled by the ARPEGE-Climat climate model with that captured by IASI.
The outputs of a 7-year AMIP simulation are used with the RTTOV radiative transfer code to produce synthetic clear-sky spectra. Eigenvectors generated from monthly averaged model spectra and IASI measurements are compared using the canonical principal angles. The spatial distribution of the principal components, as well as their annual cycle, is investigated. An approach is proposed for the geophysical interpretation of the eigenvectors, extracting useful information from the model/observation comparison about climate model deficiencies in terms of thermodynamic variables. The potential for the evaluation and intercomparison of climate models is discussed in the perspective of the FORUM and IASI-NG missions.
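The subspace comparison via canonical principal angles can be sketched with the standard QR-plus-SVD construction (Björck-Golub); the matrix sizes and the synthetic "eigenvector" bases below are illustrative assumptions, not IASI or ARPEGE-Climat data:

```python
import numpy as np

def principal_angles(A, B):
    """Canonical (principal) angles between the column spaces of A and B.

    Orthonormalise both bases with QR; the singular values of Qa.T @ Qb
    are the cosines of the principal angles (Bjorck-Golub approach).
    """
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

# Hypothetical leading eigenvectors (n_channels x n_components) from PCA
# of model-simulated and observed spectra.
rng = np.random.default_rng(1)
E_model = rng.standard_normal((300, 5))
E_obs = 0.9 * E_model + 0.1 * rng.standard_normal((300, 5))  # similar subspace

angles = principal_angles(E_model, E_obs)  # radians; small = similar variability
```

Small angles indicate that the model reproduces the observed directions of spectral variability; large angles flag modes of variability the model misses.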

Tuesday 24 June 08:30 - 10:00 (Hall E1)

Presentation: W-band, HiSRAMS, AERI, FIRR-2, FINESSE and FIRMOS Experiment on Remote Sensing (WHAFFFERS): multi-frequency, multi-platform campaign overview

Authors: Natalia Bliankinshtein, Cuong Nguyen, Keyvan Ranjbar, Paloma Borque, Leonid Nichman, Kenny Bala, Yi Huang, Lei Liu, Benjamin Riot Bretcher, Eve Bigras, Helen Brindley, Jonathan Murray, Zen Mariani, Jean-Pierre Blanchet, Yann Blanchard, Adrian Loftus, Marco Barucci, Claudio Belotti, Giovanni Bianchini, Luca Palchetti, Silvia Viciani, Laura Warwick, Hilke Oetjen, Dirk Schuettemeyer
Affiliations: National Research Council Canada, McGill University, Imperial College London, Environment and Climate Change Canada, Université du Québec à Montréal, NASA Goddard Space Flight Center, CNR National Institute of Optics, European Space Agency
Far-infrared measurements from space at present constitute a measurement gap, which will be addressed by a number of new and planned space missions, in particular, ESA’s Far-infrared Outgoing Radiation Understanding and Monitoring (FORUM) mission, NASA’s Polar Radiant Energy in the Far-InfraRed Experiment (PREFIRE) and Canada’s Thin Ice Cloud in the Far InfraREd (TICFIRE) instrument on NASA’s Atmosphere Observing System (AOS) satellite mission. Airborne measurements with a far-infrared sensor would provide valuable information for advancement of these missions. WHAFFFERS (W-band, HiSRAMS, AERI, FIRR-2, FINESSE and FIRMOS Experiment on Remote Sensing) is a multi-platform, multi-frequency field campaign planned for January-February 2025 in Canada. The campaign will include data collection at two ground-based sites, one in Ottawa, Ontario and one at Gault reserve near Montreal, Quebec, as well as flight data collection by NRC Convair-580, timed with overpasses of PREFIRE satellites. WHAFFFERS research objectives include radiative closure experiments in microwave, infrared and far-infrared bands, observations of snow and ice surface emissivity, as well as synergy of active and passive sensors for atmospheric retrievals. National Research Council Canada’s Convair-580, responsible for airborne data collection, is a twin-engine, pressurized turbo-prop aircraft equipped with a state-of-the-art suite of atmospheric probes. In-situ measurements include basic aircraft state and atmospheric state variables, as well as cloud bulk and microphysics parameters and aerosols. Remote sensors onboard the aircraft include NRC airborne W- and X-band radars, Radar-Enhanced W-band Airborne Radiometer Detection System (REWARDS), 355nm elastic cloud lidar and the High Spectral Resolution Airborne Microwave Sounder (HiSRAMS). 
In addition to the typical suite of airborne instruments, the aircraft will carry the Far-InfraRed Radiometer-2 (FIRR-2), a passive instrument, originally ground-based, that belongs to Environment and Climate Change Canada (ECCC). FIRR-2 is an innovative atmospheric radiometer that provides radiometrically calibrated data in eight spectral channels in the range of 360-1265 cm-1. Measurements are enabled by a far-infrared detector based on microbolometers developed by Canada's National Optics Institute (INO). This new technology is compact, low-cost, and can operate remotely around the clock without any human intervention. In addition to the post-processed temperature and moisture profiles, FIRR-2 can also provide cloud microphysics information on Arctic Thin Ice Clouds (TIC). The instrument was modified by its manufacturer and redesigned for nadir viewing onboard the Convair-580 for the WHAFFFERS campaign. The ground-based component of the campaign is two-fold. The primary sensors are McGill University's extended Atmospheric Emitted Radiance Interferometer (AERI), 425-3000 cm-1, and Imperial College's Far Infrared Spectrometer for Surface Emissivity (FINESSE), 400-1600 cm-1, deployed at the Gault site, as well as the National Institute of Optics' Far-Infrared Radiation Mobile Observation System (FIRMOS), 100-1600 cm-1, deployed near the Ottawa International Airport. Secondary observations include a specialized surface observation site at the Ottawa location, deployed by ECCC and McGill University. Additionally, two Climate Sentinels network stations operated by McGill University and the Université du Québec à Montréal will collect critical surface data to complement the campaign measurements.
With four sensors measuring at far-infrared frequencies, WHAFFFERS aims at applications that benefit from integrated sensing by building synergies between the ground and airborne measurements with simultaneous overpasses of the PREFIRE satellite mission for clear conditions and in the presence of thin clouds. This presentation will provide an overview of the WHAFFFERS campaign and a first look at the data collected through the campaign period.

Tuesday 24 June 08:30 - 10:00 (Hall E1)

Presentation: FORUM mission development status

Authors: Kotska Wallace, Paolo Laberinti, Felice Vanin, Michael Miranda, Dulce Lajas, David Buehler, Cedric Salenc, Hilke
Affiliations: European Space Agency
The Far-infrared Outgoing Radiation Understanding and Monitoring mission, FORUM, is being developed within FutureEO, ESA’s Earth Observation research and development programme. It will deliver spectrally resolved measurements of the Earth’s emission spectrum in the far infrared, with continuous spectral coverage, to help scientists understand and quantify Earth’s radiative processes. In 2022 the contract for the FORUM space segment was signed with an Airbus-led consortium. This presentation will describe the development status of the instrument and platform subsystems and the preparation of the mission and ground segment. FORUM’s science payload consists of two instruments. The nadir-pointing FORUM Sounding Instrument (FSI), developed by OHB, is based on a Fourier-transform spectrometer hosting a double-pendulum interferometer as the core sub-unit. For calibration purposes its view can be directed to cold space or to the absolute reference of a floating-temperature blackbody unit, whose temperature is precisely measured. The interferograms produced by the interferometer are imaged using pyroelectric detectors. The FORUM Embedded Imager (FEI) is being developed by Leonardo for scene heterogeneity assessment. It is a long-wave infrared camera which uses a microbolometer array to form a 36 by 36 km image with 600 m ground sample resolution. Several component- and subsystem-level test results became available at the end of 2024. The spectrometer Instrument Development Model has been assembled for performance characterisation of radiometric sensitivity, including the detector assembly, optical components and the alignment. An Engineering Qualification Model of the Interferometer Mechanism Assembly has also been used to conduct synchronisation tests to de-risk the electro-functional interface for the interferogram acquisition, checking the timing performance of the Scan Unit, Pointing Unit and Interferometer Mechanism Assembly.
A performance model for the FSI has been used to generate performance predictions, based on components and system level data. The first modules of the FORUM end to end performance simulator that includes a prototype of the data processor have been delivered. Elaboration of the payload data ground segment preliminary design is underway. The Instrument Critical Design Review was held in 2024 and a launch contract was signed with Vega-C. FORUM System CDR will take place in the first half of 2025.

Tuesday 24 June 08:30 - 10:00 (Hall E1)

Presentation: EE9 FORUM L1/L2 data processors and E2E simulations: development of test data to prepare for FORUM launch

Authors: Bram Sanders, Hilke Oetjen, Dulce Lajas, David Buehler, Kotska Wallace
Affiliations: ESA - ESTEC
The FORUM mission aims to measure the Earth’s emission spectrum in the far infrared, spectrally resolved and with continuous spectral coverage. The spectral range from 100 to 1600 cm-1 (i.e. 100 to 6.25 µm in wavelength) is measured with a resolution better than 0.5 cm-1, which enables retrieval of UTLS water vapour, high-altitude ice cloud properties and surface emissivities in polar regions. Another geophysical data product is the TOA spectral flux, intended specifically for time-series analysis and climate model benchmarking. FORUM therefore aims to measure radiances with a high absolute radiometric accuracy, better than 0.1 K in terms of brightness temperature. FORUM carries two instruments: an imager dedicated to scene heterogeneity assessment and a Fourier-transform spectrometer measuring in nadir. The latter samples a field of view 15 km in diameter every 100 km. Spectral fluxes are derived from nadir radiances using atmospheric profiles retrieved from the same measurement. The FORUM satellite will fly in loose formation with IASI-NG on Metop-SG, and both stand-alone FORUM L2 products and synergy products with IASI-NG are foreseen. The primary benefit of their combination is the extension of the spectral range into the mid infrared, completing coverage of the outgoing longwave radiation spectrum. In this presentation we will introduce ESA’s development approach to the FORUM End-to-End performance Simulator (FEES) and will discuss the FORUM L1 and L2 data products. The FEES is a comprehensive software tool to support the development of the FORUM prototype processors and the FORUM ground segment implementation by embedding them in a closed-loop simulation environment that includes the FORUM Instrument Simulator Modules (ISMs) for the spectrometer and the imager and a sophisticated Scene Generator Module (SGM). In addition, a dedicated FEES module simulates the IASI-NG spatial collocation and synergy retrievals.
Finally, we will discuss the status of the FEES for the generation of test data for the user community to prepare for the availability of the operational FORUM data for scientific applications.

Tuesday 24 June 08:30 - 10:00 (Hall E1)

Presentation: Improvement of the Long-term Traceability to the SI of the FORUM on-board Blackbody

Authors: Lars Bünger, Bruno Rohloff, Dirk Fehse, Max Reiniger, Daniela Narezo Guzman, Christian Monte
Affiliations: PTB
Ensuring the traceability of outgoing longwave radiation (OLR) measurements and achieving the lowest uncertainties over the entire duration of missions is crucial for the new generation of Earth observation satellites and for inter-mission comparability. To maintain very high accuracy in radiometric measurements in the infrared spectral range, regular checks and recalibrations of the on-board spectrometers are required. The calibration concept of the FORUM (Far-infrared Outgoing Radiation Understanding and Monitoring) mission is based on an optimised and precisely characterised on-board blackbody. This paper describes the implementation of a strategy to maintain high standards of temperature measurement of the on-board blackbody over the duration of the FORUM mission. The strategy consists of enhancing the reliability and accuracy of the blackbody contact temperature measurement by using additional sensor technology besides the previously chosen Pt2000 type. Having two sets of sensors allows the identification of sensor-to-sensor variation within the same set and of set-to-set variation possibly caused by different aging processes among sensor types. Identifying these variations improves the reliability and accuracy of the blackbody temperature. The design of the flight blackbody was finalized prior to the selection of the additional sensor technology. Thus, compatibility of the additional sensors with the pre-defined electrical instrumentation was required. In addition, the geometric dimensions of these sensors had to be comparable to or smaller than those of the Pt2000 sensors to allow the use of Pt2000 sensors if no suitable candidate could be found. Four different types of thermistors (NTC) were pre-selected as suitable candidates for the additional implementation. Three of these were epoxy-encapsulated and space-qualified, and one type was glass-encapsulated and not space-qualified.
After the initial plausibility, stability, reproducibility and self-heating measurement campaign on the 95 sensors, two thermistor types were excluded. The two remaining types, including the glass-encapsulated type, were further analyzed. Three sensors of each pre-selected NTC type, together with two Pt2000 sensors, were mounted in a specially developed test plate based on the design of the emitting backplate of the flight blackbody. This simulated the mechanical integration into the satellite's blackbody, including the various adhesives and cover plates. Furthermore, the test plate was geometrically designed in such a way that it could be integrated into various PTB measurement setups. The sensors integrated into the test plate were investigated for the effects of self-heating, thermal cycling, thermal vacuum, mechanical shock and vibration. The remaining NTC sensors of the two pre-selected types were subjected to thermal cycling and characterized for self-heating and hysteresis. Finally, the glass-encapsulated NTC was recommended as the additional sensor type for the FORUM on-board blackbody. A group of selected sensors was calibrated and fully characterized with respect to repeatability, reproducibility, drift and residual errors (hysteresis effects) and provided to the integrator of the on-board blackbody. Furthermore, another outcome of the project was the recommendation to modify the on-board electronics to provide varying sensing currents. This enables the on-board measurement of the self-heating effect. By monitoring the self-heating at regular intervals, the thermal contact of the sensors can be checked for stability. This recommendation will also be implemented in the flight hardware. Financial support of this work by the ESA project Novel Reference/Calibration System to Measure Spectral Radiance on the Range 4 μm to 100 μm - CCN1 is gratefully acknowledged.

Tuesday 24 June 08:30 - 10:00 (Room 1.85/1.86)

Session: A.07.07 Advancements in Observation of Physical Snow Parameters

Comprehensive quantitative observations of the physical properties of the seasonal snow cover are of great importance for water resources, climate impact and natural hazard monitoring activities. This has been emphasized for decades by the Global Energy and Water EXchanges (GEWEX) project (a core project of the World Climate Research Programme (WCRP)) and highlighted by the Global Precipitation Experiment (GPEX) (a WCRP Lighthouse Activity) launched in October 2023. Satellite-based observation systems are the only efficient means of obtaining the required high temporal and spatial coverage over the global snow cover. Owing to their sensitivity to dielectric properties and their penetration capabilities, SAR systems are versatile tools for snow parameter observations. Significant advancements have been achieved in SAR-based retrieval algorithms and their application to operational snow cover monitoring. Additionally, lidar backscatter measurements have been proven to provide accurate observations of snow height and its changes. However, there is still a need to improve snow cover products addressing physical parameters such as snow depth, SWE, liquid water content, freezing state and snow morphology. In this session the current status of physical snow cover products will be reviewed and activities towards further improvements will be presented, taking into account satellite data of current and future satellite missions. In this context, a broad range of observation techniques is of interest, including methods based on backscatter intensity, polarimetry, interferometry, tomography, as well as multi-frequency and multi-sensor approaches.

Tuesday 24 June 08:30 - 10:00 (Room 1.85/1.86)

Presentation: Snow Water Equivalent (SWE) Retrieval Algorithms Based on a Volume Scattering Approach from Dual-Frequency Radar Measurements

Authors: Firoz Borah, Prof. Leung Tsang, Dr. Edward Kim, Dr. Michael Durand
Affiliations: University Of Michigan, NASA Goddard Space Flight Center, The Ohio State University
In this paper we describe two retrieval algorithms for snow water equivalent (SWE) based on the volume scattering of snow at X (9.6 GHz) and Ku (17.2 GHz) bands. The significance of the algorithms is that neither a prior on grain size nor one on scattering albedo is required - an important advancement, as accurate estimates of grain size are difficult to obtain, especially on a global basis. The two algorithms are validated with four sets of airborne data and three years of tower-based time-series measurements. In the algorithms, the rough surface scattering from the snow/soil interface is first subtracted to obtain the net volume scattering of the snowpack. The physically based bi-continuous DMRT model was used to generate a look-up table of backscattering coefficients as functions of several snow parameters. Using the look-up table, the model was further parameterized; the parameterized model gives the X- and Ku-band co-polarization backscatter as a pair of equations in two unknowns: SWE and the scattering albedo at X band (ωX). We present a critical analysis of the solution space and illustrate the domains within it where SWE may be uniquely estimated. We show that the uniqueness is further improved by using time-series radar measurements. The robustness of the no-prior approach was validated with airborne observations by using a prior SWE value that is intentionally far (75% different) from the true SWE. Validation using tower-based data was also conducted, using time-series observations from the NoSREx experiment in Sodankylä, Finland. In this case, the SWE of the previous time step is used to choose between the two solutions for the current time step: the cost function incorporates the previously retrieved SWE, initialized with a time series starting from a snow-free condition. Recently we have also performed full-wave simulations of rough surface scattering at the snow/soil interface up to rms heights of 1.5 wavelengths. 
This is five times larger than the previous upper limit of 0.3 wavelengths. The results can be used at L, C, X and Ku bands. The model results improve the accuracy of the rough-surface-scattering subtraction in the SWE retrieval algorithms. These advances in snow retrieval algorithms were central factors in a recent global SWE mission concept being rated “selectable” for the first time by either NASA or ESA.
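The two-unknown inversion described above - solving a pair of X- and Ku-band co-pol equations for SWE and X-band scattering albedo - can be illustrated with a toy forward model and a grid search. The functional form and coefficients below are invented for illustration and are not the authors' DMRT-based parameterization:

```python
import numpy as np

# Toy forward model mapping (SWE in mm, X-band scattering albedo) to
# co-polarized backscatter (dB) at X and Ku band. Illustrative stand-in
# only, not the DMRT look-up-table parameterization of the paper.
def forward(swe_mm, omega_x):
    sigma_x = -25.0 + 8.0 * np.log10(1.0 + 0.02 * omega_x * swe_mm)
    sigma_ku = -22.0 + 12.0 * np.log10(1.0 + 0.05 * omega_x**1.5 * swe_mm)
    return sigma_x, sigma_ku

# "Observed" backscatter generated from a known truth (SWE = 150 mm, omega = 0.5).
obs_x, obs_ku = forward(150.0, 0.5)

# Grid search over the two unknowns: the dual-frequency measurement pair
# constrains both, so no prior on grain size or albedo is needed.
swe_grid = np.linspace(10.0, 400.0, 400)
omega_grid = np.linspace(0.1, 0.9, 81)
S, W = np.meshgrid(swe_grid, omega_grid, indexing="ij")
mod_x, mod_ku = forward(S, W)
cost = (mod_x - obs_x) ** 2 + (mod_ku - obs_ku) ** 2
i, j = np.unravel_index(np.argmin(cost), cost.shape)
swe_hat, omega_hat = swe_grid[i], omega_grid[j]
```

A real retrieval would additionally carry the previous time step's SWE in the cost function, as described above, to resolve ambiguity between candidate solutions.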

Tuesday 24 June 08:30 - 10:00 (Room 1.85/1.86)

Presentation: Sensitivity Analysis of Snow-Parameter Retrieval by Means of Tomographic Profiling Using KAPRI

Authors: Esther Mas Sanz, Dr. Marcel Stefko, Othmar Frey, Irena Hajnsek
Affiliations: Institute of Environmental Engineering, Swiss Federal Institute of Technology (ETH), GAMMA Remote Sensing AG, Microwaves and Radar Institute, German Aerospace Center (DLR)
Snow is a key cryosphere parameter, covering about 31% of the land surface [1], and is one of the major drivers of our planet’s climate. It has a direct influence on fresh water reservoirs, sea level variations and Earth’s energy budget, to name a few. Furthermore, snow is known to have a direct impact on the climate by affecting the planet’s albedo [2]. Given the importance of snow for better understanding Earth’s past, present and future climate, the radar community has for several decades worked to characterize the backscattered signal of the snowpack and to establish links to specific properties such as density, grain size, crystal structure or layering (e.g. [3]). Depending on these specific properties of the snowpack and the frequency of the electromagnetic waves, different penetration depths will be achieved. In the case of dry snow, the electromagnetic waves are expected to penetrate up to a few meters even at high frequencies [4]. Radar tomographic profiling is an imaging technique that reconstructs the vertical profile of snow from multibaseline interferometric acquisitions [5]. This technique has clear advantages over conventional methods for snow profiling, e.g. snow pit observations: first, it does not destroy the snowpack, and second, it provides a complete image of the area of interest rather than a single data point. The literature shows that, over the last decade, there has been growing interest in tomographic profiling experiments using ground-based systems to fully characterize snow-covered areas (e.g. [6], [7]). In this experiment, we have acquired tomographic radar data of snow on the Aletsch glacier using KAPRI, the Ku-band Advanced Polarimetric Radar Interferometer, a real-aperture frequency-modulated continuous-wave (FMCW) ground-based radar operating at a central frequency of 17.2 GHz, based on a polarimetric version of the Gamma Portable Radar Interferometer (GPRI) [8-10].
The experiment setup consists of two GPRI units deployed on a pair of exterior terraces of the Jungfraujoch High Altitude Research Station with a vertical separation of 4.2 meters. One of the units acts as the primary device, with transmission and reception capabilities, while the other acts as a secondary device, a passive receiver. The effective perpendicular baselines vary substantially with the incidence angle. This, in combination with the dependency on range, leads to varying unambiguous heights and tomographic resolutions from near to far range. The aim of this experiment is twofold: first, to investigate the capabilities of bistatic KAPRI for retrieving snow layering in a glacier environment; second, if the height resolution is favorable, to investigate the snowpack signatures derived from the vertical profiling. [1] Tsang, L. et al. (2022). Review article: Global monitoring of snow water equivalent using high-frequency radar remote sensing. Cryosphere, 16(9), 3531–3573. https://doi.org/10.5194/TC-16-3531-2022 [2] ESA - Snow grain size – it matters. (n.d.). Retrieved July 27, 2023, from https://www.esa.int/Applications/Observing_the_Earth/Copernicus/Sentinel-3/Snow_grain_size_it_matters [3] Mätzler, C. (1987). Applications of the interaction of microwaves with the natural snow cover. Remote Sensing Reviews, 2(2), 259–387. https://doi.org/10.1080/02757258709532086 [4] Rignot, E., et al. (2001). Penetration depth of interferometric synthetic-aperture radar signals in snow and ice. Geophysical Research Letters, 28(18), 3501–3504. https://doi.org/10.1029/2000GL012484 [5] Griffiths, H. D., & Baker, C. J. (2006). Fundamentals of Tomography and Radar. NATO Security through Science Series A: Chemistry and Biology, 171–187. https://doi.org/10.1007/1-4020-4295-7_08 [6] Tebaldini, S., et al. (2013). High resolution three-dimensional imaging of a snowpack from ground-based SAR data acquired at X and Ku Band.
International Geoscience and Remote Sensing Symposium (IGARSS), 77–80. https://doi.org/10.1109/IGARSS.2013.6721096 [7] Frey, O. et al. (2023). Analyzing Time Series of Vertical Profiles of Seasonal Snow Measured by SAR Tomographic Profiling at L/S/C-Band, Ku-Band, and Ka-Band in Comparison With Snow Characterizations. International Geoscience and Remote Sensing Symposium (IGARSS), 754–757. https://doi.org/10.1109/IGARSS52108.2023.1028302z [8] Werner, C., et al. (2008). A real-aperture radar for ground-based differential interferometry. International Geoscience and Remote Sensing Symposium (IGARSS), 3(1), 210–213. https://doi.org/10.1109/IGARSS.2008.4779320 [9] Baffelli, S., et al. (2018). Polarimetric calibration of the ku-band advanced polarimetric radar interferometer. IEEE Transactions on Geoscience and Remote Sensing, 56(4), 2295–2311. https://doi.org/10.1109/TGRS.2017.2778049 [10] Stefko, M., et al. (2022). Calibration and Operation of a Bistatic Real-Aperture Polarimetric-Interferometric Ku-Band Radar. IEEE Transactions on Geoscience and Remote Sensing, 60, 1–19. https://doi.org/10.1109/TGRS.2021.3121466
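The range-dependent unambiguous heights described above follow from standard cross-track InSAR geometry. A minimal sketch, using the KAPRI frequency (17.2 GHz) and the 4.2 m vertical baseline from the abstract; the incidence angle, slant ranges, and the effective perpendicular baseline of 4.0 m (slightly below the physical 4.2 m, since only the perpendicular component counts) are illustrative assumptions, and the factor p encodes the monostatic versus single-transmitter bistatic phase convention:

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def height_of_ambiguity(freq_hz, slant_range_m, inc_deg, b_perp_m, bistatic=True):
    """h_amb = p * lam * r * sin(theta) / (2 * B_perp).
    p = 2 for a single-transmitter bistatic pair (one-way extra path),
    p = 1 for a monostatic repeat-pass pair; conventions vary in the literature."""
    lam = C / freq_hz
    p = 2.0 if bistatic else 1.0
    return p * lam * slant_range_m * math.sin(math.radians(inc_deg)) / (2.0 * b_perp_m)

# KAPRI-like numbers: 17.2 GHz, ~4 m perpendicular baseline; slant ranges and
# the 80 deg incidence angle are illustrative assumptions, not measured values.
for r in (1000.0, 3000.0):
    h = height_of_ambiguity(17.2e9, r, inc_deg=80.0, b_perp_m=4.0)
    print(f"slant range {r:.0f} m -> height of ambiguity ~ {h:.1f} m")
```

The growth of the ambiguous height with range is exactly the near-to-far-range variation the abstract refers to.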

Tuesday 24 June 08:30 - 10:00 (Room 1.85/1.86)

Presentation: Modelling polarimetric Ku- and L-band synthetic aperture radar observations of snow-covered Arctic terrain using airborne CryoSAR instrument data and field measurements

Authors: Richard Kelly, Jeffrey Welch, Dan Kramer, Alex Langlois, Gabriel Hould Gosselin, Nick Rutter, Christian
Affiliations: University Of Waterloo
The Ku-band backscatter from snow at 13.5 GHz is strongly influenced by the snow water equivalent (SWE) of accumulated dry snow. The physical connection between SWE and the Ku-band backscatter response is generally understood. However, there is significant interest in the development of methods to retrieve SWE from Ku-band SAR observations when coupled with snowpack microstructure data that represent the status of a multi-layered snowpack. Although snow microstructure can be approximated using models of snow physics such as SNOWPACK, direct in situ measurements of microstructure provide an opportunity to force microwave electromagnetic models, such as the snow microwave radiative transfer (SMRT) model, with observed field data that are unsmoothed and represent the high spatiotemporal variability of snow microstructure. SMRT model estimates and their behaviour over a range of observed states can then be compared with the SAR observations, thereby testing the model's skill. This paper describes an experiment to compare SAR backscatter data with SMRT estimates of backscatter using field-observed snow microstructure measurements at Trail Valley Creek, Northwest Territories, and at Cambridge Bay, Nunavut. Wintertime snow microstructure data were acquired at these sites when Ku- and L-band polarimetric airborne SAR observations were made. The snowpack at Cambridge Bay is generally moderate to shallow, with snow thicknesses ranging from 30 cm to 70 cm. Snow at the Trail Valley Creek sites was unusually thick in 2024, with significant accumulations across the landscape and especially in the river gullies. Ku-band co-polarized data demonstrate a positive correlation with snow thickness, although it is moderated by the microstructure layering (layer snow grain specific surface area and density). The L-band response is generally unaffected by the snow but responds to sub-surface roughness scattering and volume scattering.
This research has important implications for SAR retrievals of SWE using Ku-band and L-band polarimetric radar sensors. The Terrestrial Snow Mass Mission, a dual-frequency Ku-band system in early planning stages, is designed for SWE estimation. To achieve its goal, the Ku-band instruments will be used to estimate SWE and snow microstructure, while the mission plans to leverage C-band SAR (such as Sentinel-1 or the RADARSAT Constellation Mission) to characterize the freeze-thaw and roughness state of the underlying soil. L-band data from NISAR or the planned Radar Observing System for Europe in L-band (ROSE-L) mission will have the capacity to quantify the sub-nivean soil roughness and freeze-thaw status more fully by virtue of the longer wavelength and, therefore, deeper soil penetration. The CryoSAR instrument therefore provides an opportunity to test the multi-frequency responses of the snow-covered terrestrial environment in support of ongoing and planned Ku- and L-band missions for snow mass estimation.
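The positive correlation between co-polarized Ku-band backscatter and snow thickness described above can be quantified with a first-order analysis. The values below are synthetic placeholders, not CryoSAR or field measurements; the sketch only illustrates the kind of correlation and sensitivity computation involved:

```python
import numpy as np

# Illustrative, synthetic values only (not CryoSAR data): snow thickness [cm]
# at hypothetical tundra plots and co-polarized Ku-band backscatter [dB].
thickness_cm = np.array([30, 38, 45, 52, 60, 70], dtype=float)
sigma0_vv_db = np.array([-14.1, -13.2, -12.8, -12.0, -11.4, -10.9])

# Positive correlation expected for dry snow at Ku-band, where volume
# scattering grows with accumulated SWE.
r = np.corrcoef(thickness_cm, sigma0_vv_db)[0, 1]

# A first-order linear sensitivity [dB per cm of snow] via least squares.
slope, intercept = np.polyfit(thickness_cm, sigma0_vv_db, 1)
print(f"Pearson r = {r:.2f}, sensitivity = {slope:.3f} dB/cm")
```

In the study itself this relationship is moderated by layering (specific surface area, density), which is why a radiative transfer model such as SMRT, rather than a linear fit, is used for the actual comparison.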

Tuesday 24 June 08:30 - 10:00 (Room 1.85/1.86)

Presentation: Addressing spatiotemporal challenges of InSAR Snow Water Equivalent retrieval using MultiChannel and Maximum A Posteriori estimators

Authors: Jorge Jorge Ruiz, Manu Holmberg, Juha Lemmetyinen, Ioanna Merkouriadi, Anna Kontu, Jouni Pulliainen, Jaan Praks
Affiliations: Finnish Meteorological Institute, Aalto University
The retrieval of changes in Snow Water Equivalent (ΔSWE) using repeat-pass Interferometric Synthetic Aperture Radar (d-InSAR) can be achieved by a simple inversion of the interferometric phase [1]. Furthermore, little ancillary data is required, making it an interesting approach for high-resolution monitoring of seasonal snow accumulation. However, snow-covered surfaces are highly affected by decorrelation due to the fast-changing nature of the snow [2,3,4]. Environmental effects such as wind [2,4], snow melt between passes [5,6], or air temperature [2,3,6] have been linked to increased decorrelation. This often results in noisy interferograms to which heavy spatial filtering must be applied, hindering detection of the spatial accumulation pattern. Low frequencies are less prone to these effects and are consequently regarded as more suitable for the d-InSAR retrieval technique, as they can preserve coherence over longer temporal baselines. However, low frequencies lack precision due to the long wavelength, and small deviations (either systematic or stochastic) in the interferometric phase can translate into errors of several millimeters in ΔSWE. Conversely, high frequencies are more affected by temporal decorrelation but offer higher retrieval precision. Furthermore, high frequencies are increasingly affected by the interferometric phase exceeding 2π for typical temporal baselines, causing lost phase cycles in the signal [7]. MultiChannel (MCh) techniques refer to the combination of multiple, statistically independent measurements to improve accuracy and to reduce ambiguities and retrieval noise [8]. By exploiting the statistics of the measurements, MCh allows the formulation of a Maximum Likelihood Estimator (MLE) that enables robust estimation of parameters. Furthermore, spatial constraints can also be applied in Maximum A Posteriori (MAP) estimators, accounting for contextual information.
This can be done, e.g., by favoring similarity among nearby pixels. Additionally, the use of Markov Random Fields has been proposed to deal with noisy interferograms in an MCh context [9]. In the upcoming years, the availability of SAR satellites will increase dramatically [10], and new opportunities and mission concepts enabling multi-frequency SAR will emerge. For example, NISAR (the new NASA-ISRO SAR) will carry L- and S-band radars, ESA's ROSE-L (L-band) satellites may orbit alongside the existing Sentinel-1 (C-band) satellites, and TSMM from the CSA will carry a dual-frequency Ku-band radar. This makes MCh an interesting technique for improving retrievals. To support the investigation, we made use of SodSAR [11], a tower-based SAR system with InSAR capabilities located in Northern Finland. The radar performed a measurement every 6 hours at L-, S-, C-, and X-bands over a non-vegetated area. Four MCh configurations were considered: L- and S-bands, L- and C-bands, C- and X-bands, and all bands. We investigated the effect of the temporal baseline on the retrieval. For each temporal baseline, all possible image pairs from the dataset were generated and ΔSWE retrieved. The retrieved ΔSWE was then compared to an in-situ snow scale. Results indicate r² > 0.8 for temporal baselines of up to 20 days. To evaluate the MAP estimators, we performed simulations using semi-synthetic data based on an ALOS-2 L-band interferometric pair and coincident SnowModel simulations [12]. The coherence from ALOS-2 was used along with the ΔSWE from SnowModel to generate noisy interferometric phase maps. To generate the coherence for the other frequency bands, we degraded the ALOS-2 coherence and generated noisy interferometric phase maps for L-, S-, C-, and X-bands. We performed the inversion of ΔSWE at the pixel level and by locally fitting planes over neighboring pixels.
We included the prior of ΔSWE as a mixture of normal distributions and investigated an implementation of MRFs based on Local Gaussian Markov Random Fields (LGMRF). Both priors are controlled by hidden variables, or hyperparameters, which need to be estimated from the noisy data. To solve the optimization problem, we employed the Monte Carlo Expectation Maximization (MCEM) algorithm. A Metropolis-Hastings algorithm was used to generate samples from the posterior distribution, which were directly used for the hyperparameter estimation. The results of these inversion techniques were compared to the same SnowModel map. The combination of local planes and the normal-mixture prior provided the best solution in terms of RMSE and Pearson correlation while keeping the computational burden relatively low. The LGMRF provided good results at the cost of an increased computational load.
[1] T. Guneriussen, K. A. Hogda, H. Johnsen, and I. Lauknes, “InSAR for estimation of changes in snow water equivalent of dry snow,” in IGARSS 2000, vol. 2, 2000, pp. 463–466.
[2] J. J. Ruiz, J. Lemmetyinen, A. Kontu, R. Tarvainen, R. Vehmas, J. Pulliainen, and J. Praks, “Investigation of environmental effects on coherence loss in SAR interferometry for snow water equivalent retrieval,” IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–15, 2022.
[3] S. Leinss, A. Wiesmann, J. Lemmetyinen, and I. Hajnsek, “Snow water equivalent of dry snow measured by differential interferometry,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 8, no. 8, pp. 3773–3790, 2015.
[4] S. Oveisgharan, R. Zinke, Z. Hoppinen, and H. P. Marshall, “Snow water equivalent retrieval over Idaho, part A: Using Sentinel-1 repeat-pass interferometry,” The Cryosphere Discussions, vol. 2023, pp. 1–19, 2023. https://tc.copernicus.org/preprints/tc-2023-95/
[5] Z. Hoppinen, S. Oveisgharan, H.-P. Marshall, R. Mower, K. Elder, and C. Vuyovich, “Snow water equivalent retrieval over Idaho – Part 2: Using L-band UAVSAR repeat-pass interferometry,” The Cryosphere, vol. 18, pp. 575–592, 2024. https://doi.org/10.5194/tc-18-575-2024
[6] J. Jorge Ruiz, I. Merkouriadi, J. Lemmetyinen, J. Cohen, A. Kontu, T. Nagler, J. Pulliainen, and J. Praks, “Comparing InSAR snow water equivalent retrieval using ALOS-2 with in situ observations and SnowModel over the boreal forest area,” IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp. 1–14, 2024.
[7] K. Belinska, G. Fischer, G. Parrella, and I. Hajnsek, “The potential of multifrequency spaceborne DInSAR measurements for the retrieval of snow water equivalent,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 17, pp. 2950–2962, 2024. https://doi.org/10.1109/JSTARS.2023.3345139
[8] F. Baselice, G. Ferraioli, V. Pascazio, and G. Schirinzi, “Contextual information-based multichannel synthetic aperture radar interferometry: Addressing DEM reconstruction using contextual information,” IEEE Signal Processing Magazine, vol. 31, no. 4, pp. 59–68, 2014.
[9] G. Ferraiuolo, V. Pascazio, and G. Schirinzi, “Maximum a posteriori estimation of height profiles in InSAR imaging,” IEEE Geoscience and Remote Sensing Letters, vol. 1, no. 2, pp. 66–70, 2004. https://doi.org/10.1109/LGRS.2003.822882
[10] R. Wilkinson, M. Mleczko, R. Brewin, K. Gaston, M. Mueller, J. Shutler, X. Yan, and K. Anderson, “Environmental impacts of earth observation data in the constellation and cloud computing era,” Science of The Total Environment, vol. 909, p. 168584, 2024. https://www.sciencedirect.com/science/article/pii/S0048969723072121
[11] J. Jorge Ruiz, R. Vehmas, J. Lemmetyinen, J. Uusitalo, J. Lahtinen, K. Lehtinen, A. Kontu, K. Rautiainen, R. Tarvainen, J. Pulliainen, et al., “SodSAR: A tower-based 1–10 GHz SAR system for snow, soil and vegetation studies,” Sensors, vol. 20, 6702, 2020. https://doi.org/10.3390/s20226702
[12] G. E. Liston and K. Elder, “A distributed snow-evolution modeling system (SnowModel),” Journal of Hydrometeorology, vol. 7, no. 6, pp. 1259–1276, 2006.
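The simple phase inversion of Guneriussen et al. ([1] above) is commonly written with the dry-snow approximation of Leinss et al. ([3] above), Δφ = (4π/λ)·ΔSWE·(1.59 + θ^2.5), with θ in radians. The sketch below inverts it and prints the SWE change corresponding to one 2π fringe per band, which illustrates the precision-versus-ambiguity trade-off discussed in the abstract; the wavelengths and the 40° incidence angle are illustrative:

```python
import math

def dswe_from_phase(dphi_rad, wavelength_m, inc_angle_deg):
    """Invert the dry-snow d-InSAR phase for the change in SWE [m] using the
    approximation dphi = (4*pi/lam) * dSWE * (1.59 + theta**2.5), theta in rad."""
    theta = math.radians(inc_angle_deg)
    return dphi_rad * wavelength_m / (4.0 * math.pi * (1.59 + theta**2.5))

# SWE change per 2*pi fringe at typical bands (illustrative 40 deg incidence):
# a long wavelength gives a large unambiguous range but coarse precision,
# a short wavelength the opposite, matching the trade-off in the abstract.
for band, lam in [("L", 0.236), ("C", 0.056), ("X", 0.031)]:
    swe_per_cycle = dswe_from_phase(2.0 * math.pi, lam, 40.0)
    print(f"{band}-band: one fringe ~ {1000 * swe_per_cycle:.1f} mm of SWE")
```

Combining such statistically independent per-band measurements is exactly what the MultiChannel estimators above exploit.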

Tuesday 24 June 08:30 - 10:00 (Room 1.85/1.86)

Presentation: High-resolution snow depth profiles from ICESat-2

Authors: Désirée Treichler, Marco Mazzolini, Simen Aune, Yves Bühler, Luc Girod, Zhihao Liu, Livia Piermattei, Clare Webster
Affiliations: Department of Geosciences, University of Oslo, WSL Institute for Snow and Avalanche Research SLF, Department of Geography, University of Zürich
To date, snow depth measurements have required manual probing or targeted field campaigns. The ATLAS sensor onboard the ICESat-2 satellite acquires profiles of surface elevation measurements of very high accuracy. When compared with an accurate digital elevation model (DEM) from snow-free conditions, these data translate directly into snow depth measurements. Previous studies used the spatially summarised ICESat-2 data products ATL06 and ATL08, which have an along-profile resolution of 20-100 m. They found that ICESat-2 can provide average snow depth estimates at the watershed scale with decimeter-level uncertainties. Uncertainties and bias were found to increase strongly with slope and to depend on the reference DEM accuracy and on the quality of the co-registration of the datasets. This study uses the individual-photon data product ATL03, which has an along-track resolution of 0.7 m and thus the potential to provide high-resolution snow depth profiles. Each overpass results in a profile pair, separated laterally by 90 m, consisting of a weak beam and a strong beam with approximately four times as many return photons. We compare different photon filtering and co-registration methods for five field sites in Norway, Finland, and Switzerland, including alpine terrain of different topography and sparse and dense forest. Given careful pre-processing, we find that ATL03 data can yield snow depth profiles with a spatial resolution of a few meters that closely match profiles derived from reference snow depth maps from uncrewed aerial vehicles (UAVs) acquired within 5-10 days of an ICESat-2 overpass. The goodness of fit depends on the chosen filtering method, which should be adjusted to the acquisition conditions and to the presence of vegetation and snow on trees to yield optimal results. Dataset co-registration is crucial for steep terrain, and in some areas the strong and weak beams have different spatial offsets that require separate co-registration for each beam.
Given locally optimised pre-processing, the uncertainty (mean absolute deviation, MAD) for ICESat-2/UAV data pairs ranges from 0.05 m in a sparsely forested, flat site to 0.4 m in a very steep alpine site. Bias (median residual) ranges from a few centimetres to several decimetres. For most sites the snow depths retrieved from strong and weak beams are consistently offset, with a vertical difference of up to 25 cm between strong/weak snow depth profiles for the same filtering/co-registration method and site, suggesting inconsistencies in the geolocation of beams of the same pair in ATL03 data. Improved low-level processing on the data provider's side in future ATL03 data versions may reduce the differences between the beams. This study finds that, given accurate reference DEMs, ICESat-2 data can not only provide catchment-scale snow depth averages but also high-resolution snow depth profiles in areas where no measurements currently exist, showcasing the potential of spaceborne laser altimetry for providing high-resolution snow depth measurements globally.
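The core photon-minus-DEM differencing described above can be sketched in one dimension. This simplified illustration omits the photon filtering and co-registration steps that the study shows to be essential; the function names and the 5 m binning are assumptions, not the authors' implementation:

```python
import numpy as np

def snow_depth_profile(photon_along_m, photon_elev_m, dem_along_m, dem_elev_m,
                       bin_m=5.0):
    """Difference geolocated photon elevations against a snow-free DEM profile
    and aggregate to along-track bins with a robust median (1-D sketch)."""
    # Snow-free surface interpolated to each photon's along-track position.
    dem_at_photon = np.interp(photon_along_m, dem_along_m, dem_elev_m)
    depth = photon_elev_m - dem_at_photon
    # Robust per-bin aggregation to suppress outlier photons.
    bins = np.floor(photon_along_m / bin_m).astype(int)
    centers, medians = [], []
    for b in np.unique(bins):
        sel = bins == b
        centers.append((b + 0.5) * bin_m)
        medians.append(np.median(depth[sel]))
    return np.array(centers), np.array(medians)

def mad(residuals):
    """Mean absolute deviation (here: about the median, one common convention),
    the uncertainty metric quoted in the abstract."""
    return np.mean(np.abs(residuals - np.median(residuals)))

# Toy check: photons 0.5 m above a flat snow-free surface -> ~0.5 m snow depth.
x = np.linspace(0.0, 50.0, 200)
centers, depths = snow_depth_profile(x, np.full_like(x, 100.5), x, np.full_like(x, 100.0))
print(depths.round(2))
```

In practice the residuals fed to `mad` would be the differences between the ICESat-2 and UAV snow depths, evaluated separately for strong and weak beams.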

Tuesday 24 June 08:30 - 10:00 (Room 1.85/1.86)

Presentation: Ku-band Radar for Snow Water Equivalent (and other) Applications: Status of the Terrestrial Snow Mass Mission

Authors: Stephen Howell, Chris Derksen, Benoit Montpetit, Vincent Vionnet, Vincent Fortin, Courtney Bayer, Marco Carrera, Nicolas Leroux, Julien Meloche, Jean Bergeron, Fauve Strachan, Patrick Plourde, Ralph Girard, Roger DeAbreu, Shawn MacArthur, Richard Kelly
Affiliations: Environment And Climate Change Canada, Canadian Space Agency, Natural Resources Canada, University of Waterloo
Freshwater delivered by seasonal snow melt is of the utmost importance for the health and well-being of people and ecosystems across midlatitude, northern, and mountain regions, yet poses risks by contributing to costly and damaging flood events. The current lack of information on how much water is stored as snow (expressed as the ‘snow water equivalent’ or SWE), and how it varies in space and time, limits the hydrological, climate, and weather services provided by Environment and Climate Change Canada (ECCC). To address this knowledge gap, ECCC, the Canadian Space Agency (CSA), and Natural Resources Canada (NRCan) are working in partnership to implement a Ku-band synthetic aperture radar (SAR) mission presently named the ‘Terrestrial Snow Mass Mission’ – TSMM. A technical concept capable of providing dual-polarization (VV/VH), moderate resolution (500 m), wide swath (~250 km), and high duty cycle (~25% SAR-on time) Ku-band radar measurements at two frequencies (13.5 and 17.25 GHz) is under development. Ku-band radar is a desirable approach for a terrestrial snow mass mission because these measurements are sensitive to SWE through the volume scattering properties of dry snow and can identify the wet versus dry state of snow cover. This presentation will provide an update on the mission status, including: (1) Implementation of computationally efficient SWE retrieval techniques, based on the use of physical snow modeling to provide initial estimates of snow microstructure which can effectively parameterize forward model simulations for prediction of snow volume scattering. (2) Testbed experiments facilitated by the recently developed TSMM simulator. (3) Analysis of airborne Ku-band radar measurements acquired across Canada with the ‘CryoSAR’ instrument operated by the University of Waterloo.
(4) Advancements to the technical readiness and mission concept of operations. (5) Key policy drivers which have anchored mission development, including ensuring resilient adaptation to climate change, enhanced environmental prediction, and ensuring that strategic water supply information is available to support industry in meeting future clean energy regulations in Canada.

Tuesday 24 June 08:30 - 10:00 (Hall F2)

Session: A.02.03 EO for Agriculture Under Pressure - PART 3

The human impact on the biosphere is steadily increasing, and agriculture is one of the main human activities contributing to it. Agricultural crops, managed grasslands and livestock are all part of the biosphere, and our understanding of their dynamics and of their impacts on other parts of the biosphere, as well as on the wider environment and the climate, is insufficient.
On the other hand, today’s Agriculture is Under Pressure to produce more food in order to meet the needs of a growing population with changing diets, and this despite a changing climate with more extreme weather. It is required to make sustainable use of resources (e.g. water and soils) while reducing its carbon footprint and its negative impact on the environment, and to deliver accessible, affordable and healthy food.
Proposals are welcome from activities aiming at increasing our understanding of agricultural dynamics, at developing and implementing solutions to the above-mentioned challenges, or at supporting the implementation and monitoring of policies addressing these challenges. Studies on how these challenges can be addressed at local to global scales through cross-site research and benchmarking, such as through the Joint Experiment for Crop Assessment and Monitoring (JECAM), are welcome.

The session will hence cover topics such as:
- Impact on climate and environment
- Crop stressors and climate adaptation
- Food security and sustainable agricultural systems
- New technologies and infrastructure

Tuesday 24 June 08:30 - 10:00 (Hall F2)

Presentation: Assessing the value of surface soil moisture products for the prediction of Spring Barley yield in Central Europe

Authors: Felix Reuß, Emanuel Buecchi, Dr. Mariette Vreugdenhil, Wolfgang Wagner
Affiliations: Department of Geodesy and Geoinformation, TU Wien
Spring Barley is the third most common crop in Europe in terms of cultivated area and total production. Its economic value and resilience to environmental stressors contribute to its importance in global food security and agricultural systems. It is used for food production, livestock feed, and the brewing industry, and therefore plays a pivotal role in the food chain. Compared to other cereals, barley is considered a drought-tolerant crop. Yet proper water supply is still crucial for Spring Barley growth, and a key variable indicating the water supply is soil moisture (SM). Various studies have already used SM products, either modelled or satellite-derived, as an input for yield prediction. However, a detailed assessment of the value of SM for Spring Barley yield prediction has not yet been carried out. The aim of this study is therefore to assess the value of two different SM products for the prediction of Spring Barley yield in Central Europe. In detail, it is evaluated (1) how much of the Spring Barley yield variability can be explained solely by SM, (2) how yield prediction accuracies differ spatially, and (3) how Spring Barley yield prediction based on a modelled SM product compares to prediction based on a satellite-derived SM product. This study was carried out within the ESA-funded project Yield Prediction and Estimation from EO. Spring Barley yield at NUTS3 level for Germany and at NUTS4 level for Austria and Czechia, for the years 2016-2022, was used as a reference. The ASCAT-based HSAF SM product for the top soil layer and the ERA5 soil water index level 1 (SWVL1) product were used as predictors. From these datasets, 14 daily minimum, maximum and mean soil moisture values were extracted. A Neural Network was used to train distinct models for the two SM products. All models were then evaluated using the metrics Pearson R, explained variability (R²), root mean square error, and mean absolute error.
The results indicate the following: (1) An explained variability (R²) of 0.36 was achieved for ERA5 SWVL1; the HSAF product, in contrast, achieved an R² of 0.21. (2) The lowest accuracies are achieved in regions with low average yields due to unfavourable conditions for Spring Barley, e.g. regions frequently affected by drought. (3) The ERA5 product showed higher accuracies, especially in the alpine region, where the noise of the HSAF SM product is especially high. In summary, the study has shown that SM-based models can explain over one-third of the Spring Barley yield variability in Central Europe, underlining the high value of SM products for Spring Barley yield prediction. The ERA5 SWVL1 product performed better overall and provides more accurate results, especially in mountainous regions. The results from this study can contribute to the development of more precise and robust yield prediction models, not only for Spring Barley but also for other crops. Future work will focus on understanding the performance differences between the two SM products in more detail and on assessing performance in drought years.
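The four evaluation metrics named above (Pearson R, explained variability R², RMSE, MAE) can be computed as below. The yield values are synthetic placeholders, not data from the study:

```python
import numpy as np

def yield_metrics(y_true, y_pred):
    """Metrics used above: Pearson R, explained variability R^2
    (computed here as 1 - SSE/SST), RMSE and MAE."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    resid = y_true - y_pred
    sse = np.sum(resid**2)
    sst = np.sum((y_true - y_true.mean())**2)
    return {
        "pearson_r": np.corrcoef(y_true, y_pred)[0, 1],
        "r2": 1.0 - sse / sst,
        "rmse": np.sqrt(np.mean(resid**2)),
        "mae": np.mean(np.abs(resid)),
    }

# Illustrative NUTS-region yields [t/ha]; synthetic numbers, not study data.
obs = np.array([4.8, 5.2, 3.9, 6.1, 5.5, 4.4])
pred = np.array([4.6, 5.4, 4.2, 5.8, 5.3, 4.7])
print({k: round(v, 3) for k, v in yield_metrics(obs, pred).items()})
```

Note that R² defined as 1 - SSE/SST (one common convention) can differ from the squared Pearson R when the prediction is biased, which is why the study reports bias-sensitive errors (RMSE, MAE) alongside the correlation metrics.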

Tuesday 24 June 08:30 - 10:00 (Hall F2)

Presentation: Innovative Space Methods to Monitor Crop Diversity for Resilient Agriculture

Authors: Bingfang Wu
Affiliations: Aerospace Information Research Institute, Chinese Academy of Sciences
The pressing question for science remains: how to feed humanity on current croplands without increasing greenhouse gas emissions or destroying biodiversity and nature? Agriculture systems often face a trade-off between specialization and diversification. Specialization, with its focus on high-yield monocultures and economies of scale, offers efficiency and cost-effectiveness through volume but often leads to biodiversity loss, soil degradation, and vulnerability to pests and climate extremes. Moreover, traits such as drought tolerance, heat resistance, and high nutrient density, which are crucial for future agriculture, have disappeared from many modern commercial crops. On the other hand, diversification aims at economies of scope where efficiencies are formed by variety and not just by volume. It promotes crop diversity and nutritional diversity, enabling more stable yields, enhanced soil health and improved ecosystem resilience, but requires more complex management and may involve relatively lower productivity in the short-term compared to specialized systems. Nutritional composition and productivity differences among crop varieties, alongside their social-economic impacts, play a crucial role in agricultural sustainability. Resilient agricultural practices are aligned with the UN FAO’s “four betters”: better production, better nutrition, a better environment and a better life, and crop diversity plays a crucial role in achieving these goals. The need to preserve crop diversity at individual farm, national and global scales is more urgent than ever to safeguard genetic diversity, strengthen agroecosystem resilience, and ensure the security of global food supplies, nutrition and health. Therefore, deep understanding and knowledge together with effective monitoring of crop diversity is vital. 
Earth Observation (EO) offers unprecedented potential for the continuous monitoring of crop diversity, enhancing our understanding and knowledge and thus supporting informed policy measures for resilient agriculture. However, technical challenges remain in attaining a cost-effective, reliable data retrieval system. High-resolution datasets, such as UAV hyperspectral and LiDAR data, combined with satellite integration, are promising for capturing fine-scale crop diversity, particularly in fragmented and heterogeneous farming systems. While new-generation satellite sensors enable high-resolution crop type mapping at continental scales, distinguishing crop varieties remains a challenge due to spectral similarities and mixed pixels, especially in smallholder landscapes. To overcome these challenges, advanced deep learning models and expanded ground sampling networks are essential. Additionally, EO-derived proxies for functional traits, such as leaf chlorophyll or grain micronutrient content, require standardized field protocols, including trait-spectral libraries, for calibration and validation. While UAV data hold promise for precision agriculture, a more effective data integration strategy is needed to scale small-scale UAV insights to regional or global levels. Moreover, innovative methods, open-access data policies, cost-effective technologies and capacity building for local stakeholders are critical for addressing the challenges of monitoring crop and cropping diversity in diverse climatic and farming contexts. Understanding the nexus of crop diversity, dietary patterns and livelihoods, particularly human health, is essential for informing evidence-based policymaking that enhances agroecosystem resilience and promotes sustainable agriculture. Agriculture is an indispensable part of the rural economy in major parts of Asia and the Pacific, contributing 29 percent of GDP and 65 percent of all employment.
Rice-growing areas in Southeast Asia, characterized by fragmented fields and diverse cropping practices, present unique challenges and opportunities for enhancing crop diversity and resilience. Selecting Southeast Asia as a focal region, with pilot sites across different dietary cultures and climate zones, the initiative “Promoting crop biodiversity through innovative space applications (CropBio)” was launched. This initiative employs co-designed, inter- and transdisciplinary approaches, aiming to establish a protocol for crop diversity monitoring and assessment, which includes innovative methods to detect crop traits and varieties and an assessment framework to evaluate the impact of crop diversity on the environment, economy, society, and livelihoods, including health. Recommendations will be developed to explore nature-based agricultural solutions addressing the global challenge of feeding humanity sustainably, while showcasing evidence of the importance of crop diversity.

Tuesday 24 June 08:30 - 10:00 (Hall F2)

Presentation: Synergistic use of optical and SAR imagery for near real-time green area index retrieval in maize

Authors: Jean Bouchat, Quentin Deffense, Thomas De Maet, Pierre Defourny
Affiliations: European Space Agency (ESA), European Space Research Institute (ESRIN), Earth and Life Institute, Université catholique de Louvain
The green area index (GAI) is a key biophysical variable for crop monitoring, widely used to assess crop health, growth, and productivity. Most large-scale and cost-effective methods for estimating the GAI rely on optical remote sensing data; consequently, frequent cloud cover can severely limit their reliability. This challenge is ever more pressing in tropical regions, where timely vegetation monitoring is essential for food security and where cloud cover often overlaps with key phases of vegetation growth. In these instances, synthetic aperture radars (SARs) offer a valuable alternative, as their cloud-penetrating capabilities enable the generation of dense time series that can enhance the spatial and temporal coverage of optical data. In recent years, various methods have been proposed to address gaps in time series of biophysical variables retrieved from optical data, including fusion techniques that aim to reproduce or enhance optical imagery using ancillary Earth observation data to compensate for cloud cover. However, there have been relatively few efforts to systematically leverage dense SAR time series to directly fill gaps in GAI time series, despite the potential for reducing modeling errors and production time. In this study, a method is proposed to fill these gaps as they occur along the crop growing season with current and past SAR data as well as past GAI values. The focus on near real-time gap-filling ensures enhanced temporal resolution and timeliness, addressing critical needs in operational crop monitoring. The approach involves the use of a transformer encoder, a deep learning architecture that exploits the sequential nature of the values of the target variable and its complex relationship with SAR backscatter and interferometric coherence. Sentinel-1 and Sentinel-2 data acquired from 2018 to 2021 over the Hesbaye region of Belgium are used for cross-validation. The results demonstrate the robustness of the method. 
The model successfully retrieves the GAI at the parcel level on an unseen growing season, with a mean R² of 0.88 and an RMSE of 0.71. External validation with in situ data collected from 10 maize fields in Belgium in 2018 further confirms its accuracy, outperforming traditional approaches based on Water Cloud Model inversion. These promising results not only highlight the immediate applicability of this approach but also its potential for broader impact. While this study focused on maize, a high-biomass crop that has been shown to be challenging to monitor using C-band SAR, the method shows promise for extension to other major temporary crops. Additionally, future advancements incorporating multi-frequency SAR data, such as the L-band data from the forthcoming NISAR and ROSE-L missions, are anticipated to further enhance its performance. In the end, by enabling the generation of accurate and dense GAI time series throughout the crop growing season, this method has the potential to significantly advance the capability of operational crop monitoring systems in cloud-prone regions, where timely delivery of information on crop condition is critical for informed decision-making.
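As context for the Water Cloud Model baseline mentioned above, here is a minimal sketch of WCM forward modelling and GAI retrieval by bisection. The parameter values (A, B, soil term, incidence angle) are illustrative assumptions, not values calibrated for maize or Sentinel-1:

```python
import math

def wcm_sigma0(gai, theta_deg, a, b, sigma_soil):
    """Water Cloud Model forward: canopy term plus attenuated soil backscatter
    (linear power units); vegetation descriptor here is the GAI."""
    mu = math.cos(math.radians(theta_deg))
    t2 = math.exp(-2.0 * b * gai / mu)          # two-way canopy transmissivity
    return a * mu * (1.0 - t2) + sigma_soil * t2

def invert_gai(sigma0, theta_deg, a, b, sigma_soil, lo=0.0, hi=8.0, iters=60):
    """Invert the (monotonic, for these parameters) WCM for GAI by bisection."""
    f = lambda g: wcm_sigma0(g, theta_deg, a, b, sigma_soil) - sigma0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Round-trip check with made-up parameters (A, B and the soil term are assumptions):
A, B, SOIL = 0.12, 0.30, 0.04
truth = 3.2
obs = wcm_sigma0(truth, 37.0, A, B, SOIL)
print(f"retrieved GAI = {invert_gai(obs, 37.0, A, B, SOIL):.2f}")
```

This baseline uses only a single backscatter observation per retrieval; the transformer approach above instead conditions each estimate on the whole SAR and GAI history of the parcel, which is one reason it can outperform the point-wise inversion.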
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall F2)

Presentation: Monitoring best management practices using Earth Observation for improving estimates of greenhouse gas emissions and sinks from Canadian agriculture

Authors: Dr. Catherine Champagne, Dr. Emilly Lindsay, Dr. Bahram Daneshfar, Dr. Jiangui Liu, Dr. Jiali Shang, Dr. Andrew
Affiliations: Agriculture And Agri-food Canada
Agricultural land plays an important role in mitigating climate change. Agriculture and Agri-Food Canada maintains key national data sets derived from Earth observation on land cover and land use across the agricultural regions of Canada and is developing new information and models to expand our knowledge of changing agroecosystems. Understanding changes in yield and productivity, crop rotations and best management practices, and changes in the transition between agricultural and other land uses is helping us better evaluate the environmental and economic performance of the agricultural sector and support resilience to climate change. While many methods to quantify key aspects of agricultural performance are found in the literature, developing robust methods that can reliably estimate these parameters over diverse agricultural regions and over long periods of time requires adaptation and robust validation. This presentation will cover how methods are being developed and adapted over pilot sites in Canadian agricultural regions to support measurement and verification of agricultural resilience indicators for national greenhouse gas inventories. The work will cover methods development and validation in three sub-areas: landscape productivity; grassland and perennial rotations; and parameterization of process-based models using seeding and harvest dates. To derive seeding dates, multi-temporal optical data from Sentinel-2, Landsat and MODIS were used to develop dense time series of seasonal data in different regions of Canada. A multi-parameter curve-fitting function was used to estimate the start, end and magnitude of canopy greenness. These data were integrated with reanalysis data and a biometeorological time scale model to estimate seeding dates. Results were evaluated over different regions of Canada with different crop calendars and crop mixes.
Landscape productivity was estimated using multi-temporal optical data from Sentinel-2 and Landsat combined with a radiation use efficiency model to estimate net primary productivity. This was compared with published methods using leaf area index and normalized difference vegetation index methods to estimate relative productivity and crop yield to estimate carbon sequestration in agricultural canopies. Grasslands and perennial agricultural classes were evaluated using a time series of annual crop classifications derived from Landsat, Sentinel-2, Sentinel-1 and Radarsat-2 data. A field based crop rotation model was developed using image segmentation to classify rotations based on the frequency of perennial agricultural classes. A more detailed classification of native and seeded forage was developed for the Canadian Prairies using a multi-year time series of synthetic aperture radar and optical data to capture short and long term trends in land conversion in agricultural regions. This presentation will discuss how long term Earth Observation data sets are being integrated into robust Earth Observation data services to improve reporting of greenhouse gas emissions and discuss the challenges of method development for operational data services.
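Multi-parameter curve fitting of seasonal greenness is commonly done with a double-logistic model. The sketch below is a rough illustration of the idea, not the authors' implementation: all parameter values are invented, and reading off start/end of season as 50%-amplitude crossings is an assumed convention.

```python
import numpy as np

def double_logistic(t, vmin, vamp, sos, rate_up, eos, rate_down):
    """Classic double-logistic greenness curve: a rising and a falling
    sigmoid sharing a common baseline (vmin) and amplitude (vamp)."""
    up = 1.0 / (1.0 + np.exp(-rate_up * (t - sos)))
    down = 1.0 / (1.0 + np.exp(-rate_down * (eos - t)))
    return vmin + vamp * (up + down - 1.0)

# Hypothetical parameters: season starting near DOY 140, ending near DOY 260
doy = np.arange(1, 366)
ndvi = double_logistic(doy, vmin=0.15, vamp=0.6, sos=140, rate_up=0.12,
                       eos=260, rate_down=0.10)

# Start/end of season taken as the 50%-amplitude crossings of the curve
half = 0.15 + 0.5 * 0.6
green = ndvi >= half
start_doy = doy[green][0]
end_doy = doy[green][-1]
```

In practice the parameters would be fitted to a cloud-masked NDVI time series per pixel or field, and the recovered start date fed into the biometeorological seeding-date model described above.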
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall F2)

Presentation: Leveraging multi-year Sentinel-2 time series for mapping organic farmland

Authors: Jan Hemmerling, Dr. Marcel Schwieder, Dr Patrick Hostert, Dr. Stefan Erasmi
Affiliations: Thünen Institute of Farm Economics, Thünen Earth Observation (ThEO), Germany, Humboldt-Universität zu Berlin, Earth Observation Lab, Geography Department, Germany
Organic farming plays a crucial role in achieving a more sustainable agriculture, offering benefits such as reduced greenhouse gas emissions, improved soil health, and enhanced ecosystem services. These attributes align with the European Union's ambition to transition towards sustainable agriculture, as outlined in the European Green Deal. A key target is to bring 25% of agricultural land under organic management by 2030. An effective assessment of the drivers and impact of an extension of organic farmland at national scale requires comprehensive, spatially explicit digital data on organic agricultural land. However, such data remain scarce in many countries. We seek to address this gap by leveraging multispectral Sentinel-2 time series data to differentiate between organic and conventional farming practices. Organic farming is characterized by specific management practices: minimized use of pesticides and synthetic fertilizers, combined with an emphasis on mechanical weed control and more diverse crop rotations, which support soil health and naturally reduce pest pressure. On the other hand, organic farming results in significantly reduced cropland yields compared to conventional farming. Direct or indirect expressions of these differences in management offer potential for differentiation by means of remote sensing but have so far not been explored. In this work we combine intra-annual and multi-annual multispectral remote sensing features in order to differentiate between the two management systems. We analyze organic and conventional farming practices across seven German federal states, using extensive sampling. We use the Integrated Administration and Control System (IACS) of the Common Agricultural Policy (CAP) framework between 2018 and 2022 to establish a reference for permanently organically managed fields.
We interpolate all available Sentinel-2 imagery with less than 75% cloud cover between 2018 and 2022 into equidistant time series with 10-day intervals at 10-meter spatial resolution. These data are then processed in a two-stage classification model. The first stage focuses on intra-annual management and phenology differences; its output is then utilized by the second stage, which identifies crop rotation patterns indicative of organic practices. For both stages we use a Vision Transformer-based approach, which we compare against a Random Forest baseline model. We evaluate how the integration of multi-year data enhances classification accuracy compared to the application of annual time series. So far, our findings show that differentiation between organic and conventional farming is partly possible using one-year multispectral Sentinel-2 time series alone, although the accuracy heavily depends on the cultivated crop type. While crops like winter wheat, winter rye and spring oat achieve F1-scores above 0.8, indicating good separability, permanent grassland, hops and orchards score below 0.2, indicating poor distinction between organic and conventional farming. Future work will refine this approach by integrating multi-annual data to enhance mapping accuracy and expand the methodology for nationwide organic farming assessments. This research underscores the value of Sentinel-2 time series in supporting the EU’s sustainable agriculture goals.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall F2)

Presentation: ESTIMATING DAILY HIGH-RESOLUTION LEAF AREA INDEX (LAI) FOR WHEAT USING PLANETSCOPE DATA

Authors: Qiaomin Chen, Rhianna McAneny, Marie Weiss, Raul Lopez-Lozano, Jérémy Labrosse, Dan Smith, Mingxia Dong, Scott Chapman, Shouyang Liu, Alexis Comar
Affiliations: INRAE, Université d'Avignon, UMR EMMAH 1114, HIPHEN SAS, School of Agriculture and Food Sciences, The University of Queensland, Engineering Research Center of Plant Phenotyping, Ministry of Education, State Key Laboratory of Crop Genetics & Germplasm Enhancement and Utilization, Jiangsu Collaborative Innovation Center for Modern Crop Production, Academy for Advanced Interdisciplinary Studies, Nanjing Agricultural University, Agriculture and Food, CSIRO, Queensland Bioscience Precinct
The emergence of new commercial satellite constellations such as PLANETSCOPE enables the acquisition of satellite imagery with high temporal resolution (daily) and high spatial resolution (3 meters per pixel). However, they often suffer from inconsistencies in data calibration, which complicates the estimation of the Leaf Area Index (LAI). We show that PLANETSCOPE spectral sampling is not sufficient to reach the same accuracy as obtained with SENTINEL-2, and we investigate the added value of such a constellation for wheat LAI estimation using different prior information (e.g. crop specific, soil specific) to compensate for this effect. Our study relies on the BV-NNET algorithm, designed for global application over any vegetation type and based on the inversion of the PROSAIL radiative transfer model using artificial neural networks. This algorithm was initially developed for SENTINEL-2 and has been adapted to align with PLANETSCOPE’s spectral and orbital characteristics. In order to train the neural network, we gathered a comprehensive LAI dataset from different phenotyping experiments covering a variety of wheat genotypes, environmental conditions (France, China, Australia) and phenological stages. To address the differences between PLANETSCOPE and SENTINEL-2 regarding radiometric accuracy and the number of available bands, we compare two harmonization approaches by fitting linear regressions (i) at the reflectance level and (ii) at the product level (LAI). We show that harmonizing at the product level is more efficient, leveraging the presence of SWIR bands in SENTINEL-2. We then explore how the estimation of LAI can be improved by developing a wheat-specific algorithm that consists of adding prior information into BV-NNET (e.g. soil reflectance, vegetation biochemical content and joint distributions between those variables).
We explore different scenarios for the prior information and show that with the appropriate joint distribution, the model performance in estimating LAI can be significantly improved. Finally, the best performance is achieved when combining this prior information with product-level harmonization against SENTINEL-2.
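Harmonization at the product (LAI) level reduces to a linear regression between paired retrievals from the two sensors. A minimal numpy sketch with synthetic data (the bias, slope and noise levels are invented purely for illustration and are not results from the study):

```python
import numpy as np

# Synthetic paired retrievals over the same fields (values illustrative)
rng = np.random.default_rng(42)
lai_s2 = rng.uniform(0.0, 6.0, size=200)                # reference Sentinel-2 LAI
lai_ps = 0.9 * lai_s2 + 0.3 + rng.normal(0, 0.1, 200)   # biased PlanetScope LAI

# Product-level harmonization: fit lai_s2 ~ a * lai_ps + b by least squares
A = np.column_stack([lai_ps, np.ones_like(lai_ps)])
(a, b), *_ = np.linalg.lstsq(A, lai_s2, rcond=None)
lai_ps_harmonized = a * lai_ps + b

rmse_before = np.sqrt(np.mean((lai_ps - lai_s2) ** 2))
rmse_after = np.sqrt(np.mean((lai_ps_harmonized - lai_s2) ** 2))
```

The reflectance-level alternative mentioned in the abstract fits the same kind of regression per spectral band before inversion, rather than on the retrieved LAI.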
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 1.15/1.16)

Session: F.04.06 Wetlands: from Inventory to Conservation

Wetlands are an essential part of our natural environment. They are scattered across the world in all bio-geographic regions, providing a range of critically important ecosystem services and supporting the livelihoods and well-being of many people. Throughout much of the 20th century, wetlands were drained and degraded.

The Ramsar Convention on wetlands is an intergovernmental treaty that provides the framework for national actions and international cooperation for the conservation and wise use of wetlands, as a means to achieving sustainable development. The 172 countries signatory to the convention commit, through their national governments, to ensure the conservation and restoration of their designated wetlands and to include the wise use of all their wetlands in national environmental planning.

Wetland inventory, assessment and monitoring constitute essential instruments for countries to ensure the conservation and wise use of their wetlands. Earth Observation has revolutionized wetland inventory, assessment and monitoring. In recent years, the advent of continuous streams of high-quality, free-of-charge satellite observations, in combination with the emergence of digital technologies and falling computing costs, has offered unprecedented opportunities to improve the collective capacity to efficiently monitor changes and trends in wetlands globally.

The importance of EO for wetland monitoring has been stressed by Ramsar in a recently published report on the use of Earth Observation for wetland inventory, assessment and monitoring.

The SDG monitoring guidelines on water related ecosystems (SDG target 6.6) also largely emphasize the role of EO, while the EO community is getting organised around the GEO Wetlands initiative to provide support to wetlands practitioners on the use of EO technology.

The Wetland session will review the latest scientific advancements in using Earth observations for wetland inventory, assessment, and monitoring to support effective wetland conservation. It will also discuss strategies for integrating Earth observations into the sustainable management of wetland ecosystems.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 1.15/1.16)

Presentation: Empowering National Wetland Inventorying through Earth Observation

Authors: Christian Tottrup, Cécile Kittel, Alexander Kreisel, Adam Pasik, Anis Guelmami, Nina Bègue
Affiliations: DHI A/S, GeoVille, Tour du Valat
The Earth Observation for Wetland Inventory (EO4WI) project, funded by the European Space Agency (ESA), aims to empower countries with the capacity to independently leverage Earth Observation (EO) data and tools for national wetland inventorying. These inventories are critical for meeting the reporting requirements of multiple international frameworks, including the post-2020 Global Biodiversity Framework (GBF), the Ramsar Convention, SEEA Ecosystem Accounting, and the 2030 Agenda for Sustainable Development (e.g., SDG 6.6.1). EO4WI focuses on developing robust methodologies and flexible tools to maximize the utility of the recent surge in radar and optical satellite data availability, the development and increased accessibility of advanced machine-learning classification algorithms, and the continuous improvements in computational capacity and availability of cloud-based platforms. By aligning national wetland inventory processes with global policy requirements, the project addresses the dual needs of building local capacity and demonstrating a coherent data and ICT infrastructure capable of supporting national and regional wetland mapping efforts. The EO4WI project has been implemented in collaboration with national stakeholders (Early Adopters), regional Ramsar/NGO networks, and global domain experts to bridge the gap between local knowledge, in-situ data, and EO technologies. This inclusive approach aims to harmonize data production and knowledge-sharing across geographic scales and scientific disciplines, ultimately fostering a better understanding of wetland processes and trends. EO4WI envisions a transformative impact on wetland management by equipping nations with the tools and data necessary to support multiple policy agendas.
Furthermore, the project seeks to drive meaningful actions for the protection and restoration of critical wetland ecosystems by advancing large-scale wetland inventorying and ensuring that wetland-related data are integrated into global environmental decision-making frameworks. The aim of this presentation is to review the EO4WI implementation approach and present several country demonstrations showcasing how the EO4WI mapping solution has been used to enhance national wetland inventorying data and capacity, and thereby contributing to national efforts related to wetland conservation and restoration as well as to national monitoring and reporting obligations.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 1.15/1.16)

Presentation: Global Mangrove Watch: Updated Global Mangrove Extent and Change 1990-2024

Authors: Dr Pete Bunting, Lammert Hilarides, Dr Ake Rosenqvist, Paula Castro Brandão Vaz dos Santos, Daniele Zanaga, Dr. Ruben Van De Kerchove, Jessica Rosenqvist, Nathan Thomas, Tom Worthington, Richard Lucas
Affiliations: Aberystwyth University, Wetlands International, soloEO, VITO, Edge Hill University, University of Cambridge
The Global Mangrove Watch (GMW) was initiated in 2011 as part of the Japan Aerospace Exploration Agency (JAXA) Kyoto & Carbon Initiative and is now undertaken as an independent project led by Aberystwyth University, Wetlands International, soloEO and The Nature Conservancy (TNC). The GMW team have produced several global mangrove extent and change products over the past 8 years (Bunting et al., 2018, 2022a,b, 2023), with each update improving the mapping methodology and accuracy of the products. The latest version 4.0 GMW products have a significantly revised and improved methodology, resulting in greater map accuracy. Improvements include a new baseline for 2020, with a spatial resolution of 10 m, and change products extending to 1990 (previously 1996). Understanding historical changes in the extent of mangrove forests is vital for effective restoration and conservation efforts. Accurately mapping these changes is essential, as regions that have experienced mangrove loss are frequently identified as potential sites for mangrove restoration and blue carbon projects. The GMW v3.0 change products had several limitations, mainly due to misregistration of the global L-band SAR mosaic data that formed the basis for the change mapping. This misregistration contributed to significant uncertainty in the area mapped as mangrove change. Consequently, it was advised that only net changes, rather than separate gains and losses, be reported when utilizing the GMW v3.0 data. Additionally, for many end users, for whom the GMW v3.0 mapping constitutes the primary source of up-to-date and readily available mangrove extent data for their region of interest, the accuracy at the local level and the minimum mapping unit are often insufficient. Therefore, the primary aim of GMW v4.0 has been to improve the local relevance of the products.
To achieve this aim, the new GMW v4.0 baseline map for 2020 has a spatial resolution of 10 m and the historical change layers are now produced using a combination of Landsat and JAXA L-band SAR data. JAXA has reprocessed the JERS-1, ALOS PALSAR and ALOS-2 PALSAR-2 global mosaic datasets to remove the misregistration from the products used to generate the GMW v3.0 maps. JAXA has also provided all the available observations of JERS-1 data from 1992 to 1998 rather than the single mosaic for 1996, which was used previously. For the new 2020 baseline, a time series composite was generated for each Sentinel-2 granule. This composite included the 10th, 50th, and 90th percentiles for each reflectance band and index, such as NDVI, NDWI, NBR, NDGI, EVI, and ANIR. A global XGBoost classifier was trained, and Boruta SHAP feature selection was utilized to reduce the number of variables in the model. A transfer learning step was applied, in which the global XGBoost model was further trained using local reference samples for each granule. The resulting map underwent visual checks and refinements to produce the final GMW v4.0 baseline. An accuracy assessment was conducted using 44 globally distributed sites, providing a total of 49,600 reference points. The true class for each point was identified with reference to PlanetScope and other higher-resolution image sources. The accuracy of the mangrove class was estimated to be 95.3% (with a 95% confidence interval of 94.9% to 95.7%). In comparison, using the same reference points, the accuracy of the GMW v3.0 2020 map was estimated at 81.4% (with a 95% confidence interval of 80.4% to 82.2%). Change mapping was conducted using the newly defined GMW v4.0 baseline by combining Landsat imagery with JAXA L-band SAR data. Landsat reflectance composites were created for 1990, 1995, 2000, 2005, 2010, 2015, 2020, and 2024 using Google Earth Engine. 
A Multivariate Alteration Detection (MAD) change detection method was applied to each Landsat composite to identify pixels that had not changed from 2020. The 2020 training samples within these no-change regions were selected, and a global XGBoost classifier was trained and applied to each Landsat composite. The classification results and change detection from Landsat were then merged with the 2020 baseline classification to produce a final classification for each Landsat composite. These classifications were utilized to constrain the map-to-image change detection approach applied to the L-band SAR data, following the GMW v3.0 change methodology. The resulting mangrove change maps extend the time series of mangrove changes back to 1990 and significantly increase the number of observations within the 1990s, thereby enhancing the confidence in the mangrove extent estimates for that decade. The new change results also enable independent calculations of annual mangrove gains and losses, allowing the GMW v4.0 datasets to be used as Activity Data for national reporting to the UNFCCC. The new v4.0 products also significantly improved the mapping of landward mangrove changes. For example, areas such as the Andaman and Nicobar Islands, which experienced significant in-land mangrove changes after the 2004 Boxing Day Earthquake and Tsunami, were not captured by the GMW v3.0 products.
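The per-granule compositing step described above reduces a year of imagery to a few percentile statistics per band before classification. A toy numpy sketch of that step (array sizes and values are hypothetical; the real pipeline additionally computes indices such as NDVI, NDWI, NBR, NDGI, EVI and ANIR before taking percentiles):

```python
import numpy as np

# Toy stack: 12 acquisitions x 2 bands x 4x4 pixels (reflectance-like values)
rng = np.random.default_rng(1)
stack = rng.uniform(0.0, 0.5, size=(12, 2, 4, 4))

# Per-pixel, per-band 10th/50th/90th percentile composite over the year
composite = np.percentile(stack, [10, 50, 90], axis=0)   # (3, 2, 4, 4)

# Flatten into a per-pixel feature stack: 3 percentiles x 2 bands = 6 features
features = composite.reshape(-1, 4, 4)
```

These per-pixel features would then feed the XGBoost classifier, with Boruta SHAP selection pruning uninformative ones.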
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 1.15/1.16)

Presentation: A pan-European monitoring of the wetland use intensity in coastal zones

Authors: Jonas Franke, Kevin Kuonath, Anis Guelmami, Nina Bègue
Affiliations: Remote Sensing Solutions, Tour du Valat
All major European Union (EU) policies acknowledge the critical role of wetlands in achieving the EU's goals related to climate neutrality, biodiversity conservation, pollution reduction, flood regulation, and the circular economy. As a result, evaluating the current extent and condition of European wetlands, including their capacity for long-term mitigation through restoration or other conservation measures, is a top priority for the EU in addressing climate change. With increasing extreme weather events, sea level rise and intensified land use, coastal wetlands are playing an important role as buffer zones with a wide range of ecosystem services. Land cover and land use are indicators for wetland status, since land use is a main driver of coastal wetland degradation and loss. However, land cover/use categories cannot reflect the wide variation of the actual wetland use intensity. For example, the intensity of grassland use (e.g. number of mowings), the type of agricultural land use or any disturbances (such as fires or deforestation) have a major influence on the ecosystem. To complement the monitoring of wetlands with information beyond land cover/use categories, time series of Sentinel-2 data were leveraged. This new approach, developed in the Horizon Europe RESTORE4Cs project, resulted in a pan-European layer on coastal wetland use intensity (WUI) for the year 2023. Wetland use intensity refers to the degree or extent to which wetlands are utilized for various purposes and indicates how wetlands are being impacted or exploited through various human activities which may affect the ecological health or function of the wetland. The pan-European WUI layer for coastal wetlands differentiates intensively used wetland areas, such as crop cultivation, burned areas, peat extraction, etc. from less intensively used areas, such as grazing areas for livestock farming as well as natural/semi-natural areas and permanent water. 
WUI can be described as the magnitude of changes in spectral properties over time. An increasing WUI can impact the health of the wetland, potentially leading to degradation, loss of biodiversity, or changes in the hydrological and biogeochemical functions of the ecosystem. Thus, knowledge about the WUI is fundamental for managing wetland use and is crucial for prioritizing protection and restoration efforts and for maintaining the balance between human needs and ecological preservation. The WUI is based on a time-series analysis algorithm, the Mean Absolute Spectral Dynamics (MASD), developed by Franke et al. (2012). The MASD algorithm assesses the average spectral change (absolute magnitude) across selected spectral bands over a certain number of timesteps during a growing season. A timestep is the time between two cloud-free or cloud-masked scenes, for which spectral change is measured. Ideally, all timesteps are four weeks long, with the first scene capturing the start of the growing season and the last one the end of the growing season. For the cloud-based processing of the pan-European coastal WUI layer, an automatic Sentinel-2 scene selection procedure was developed that follows this logic as closely as possible. Areas with persistent cloud cover, in which the time series did not meet the minimum requirement of coverages per season, were flagged as such in the final WUI layer. Since the MASD is sensitive to the observation length and observation density of the satellite time series, it was modified here to minimize the impact of varying time series inputs. Averaged daily MASD values provided more temporally stable values that allowed for scaling up and for more comparable WUI values across regions. Since the aim was to assess WUI mainly in vegetated wetland areas, only vegetation-sensitive bands were selected to calculate the MASD.
To balance spectral coverage against the spatial resolution of the Sentinel-2 bands, the green and red bands in the VIS, two bands from the red-edge and near infrared (NIR) and two bands in the short-wave infrared (SWIR) were used. In order to generate a WUI layer that focuses on coastal wetlands and is spatially coherent with other wetland information layers, it was processed within areas likely to host wetland habitats, with a high probability of presence calculated using the potential wetland areas (PWA) layer (Guelmami, 2023). Since surface water dynamics in coastal wetlands also cause spectral changes that can confound the WUI interpretation through high MASD values, seasonal and permanent water bodies were identified in the coastal wetlands for the same year (2023) and treated as complementary to the WUI. All Sentinel-1 scenes were used to assess the surface water dynamics, following an approach described in Tøttrup et al. (2022). The final WUI layer provides insights into wetland use and status beyond classic land use categories. It is a dataset complementary to other wetland information layers that can be used to identify pressures from agricultural activities and over-use of resources, and it can help to find priority areas for restoration actions and protection enforcement within the wetlands. Being produced annually, the WUI can indicate trends and be used for impact assessment of protection or restoration measures.
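The MASD statistic described above is essentially the mean absolute change across the selected bands between consecutive observations, here averaged per day to stabilise it against varying observation density. A minimal numpy sketch (band values and dates are illustrative, not from the study):

```python
import numpy as np

def masd(series, dates):
    """Mean Absolute Spectral Dynamics: mean absolute reflectance change
    across bands and consecutive timesteps, averaged per day to reduce
    sensitivity to observation density."""
    diffs = np.abs(np.diff(series, axis=0))    # (T-1, bands) absolute changes
    per_step = diffs.mean(axis=1)              # mean change over bands
    days = np.diff(dates).astype(float)        # timestep lengths in days
    return (per_step / days).mean()            # averaged daily MASD

# Toy example: 5 acquisitions (4-week spacing) x 4 vegetation-sensitive bands
dates = np.array([0, 28, 56, 84, 112])
stable = np.full((5, 4), 0.25)                 # unchanging canopy
mowed = stable.copy()
mowed[2] = 0.05                                # abrupt spectral drop (mowing)
wui_stable = masd(stable, dates)
wui_mowed = masd(mowed, dates)
```

A stable natural area yields a MASD of zero, while an intensively used parcel with abrupt spectral changes (mowing, harvest, burning) yields a high value, which is the basis of the WUI interpretation.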
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 1.15/1.16)

Presentation: Global Wetland Watch – A new system for globally mapping and monitoring changes to wetland ecosystems

Authors: Daniel Druce, Puzhao Zhang, Gyde Krüger, Dr. Cécile Kittel, Torsten Bondo, Christian Tøttrup
Affiliations: DHI
Wetlands are vital ecosystems that play a crucial role in regulating water quality, supporting biodiversity, and acting as carbon sinks. However, they are often poorly mapped and characterized, creating a significant information gap. To address this, DHI, in collaboration with UNEP and funded by Google.org, is developing the Global Wetland Watch (GWW) system. This innovative tool utilizes satellite earth observation data and AI to map over 20 different wetland types, providing the first high-resolution and globally consistent wetland assessment. The data, which will be released freely as a public good, is vital for efforts in conservation, sustainable development, biodiversity, and climate change mitigation. It will also support national policies and legal frameworks aimed at protecting and restoring wetland ecosystems, helping countries meet targets set out in important global agendas such as the Ramsar Convention and the UN 2030 Agenda on Sustainable Development (SDG 6.6). The GWW methodology prioritises a time-series approach to harness the spectral, phenological, and hydrological characteristics unique to wetlands. This approach employs harmonic regression through the continuous change detection and classification (CCDC) algorithm, utilizing data from multiple satellite sensors (Sentinel-1, Sentinel-2, PALSAR-2 and Landsat 8/9). Additionally, the system incorporates supplementary layers designed to enhance decision making. These layers provide interpretable, explainable, and adaptable data on key wetland characteristics, such as surface water extent dynamics, Height Above Nearest Drainage (HAND), and coastal geomedian and inundation frequencies, ensuring the system remains user-centric and accessible for diverse stakeholders.
Through engagement with multiple pilot countries, the methodology and relevance of the work has been refined, and the system will serve as a benchmark for wetland inventories and monitoring by producing the world’s first high-resolution, multi-class, globally consistent wetland assessment. It will provide a foundation for future advancements, ensuring the system remains accurate, relevant, and capable of supporting ongoing conservation strategies, policy decisions and global initiatives.
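Harmonic regression of the kind CCDC relies on fits a seasonal sine/cosine model to each pixel's time series by least squares. A first-order sketch in numpy (synthetic data; the operational algorithm adds higher-order harmonics, a trend term, and break detection on the residuals):

```python
import numpy as np

def harmonic_fit(t, y, period=365.25):
    """Least-squares fit of y ~ a0 + a1*cos(wt) + b1*sin(wt), the
    first-order harmonic model at the core of CCDC-style approaches."""
    w = 2 * np.pi / period
    A = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs, A @ coeffs

# Synthetic seasonal signal over two years (illustrative): cycle plus noise
rng = np.random.default_rng(7)
t = np.arange(0, 730, 10).astype(float)
y = 0.4 + 0.2 * np.sin(2 * np.pi * t / 365.25) + rng.normal(0, 0.02, t.size)
coeffs, fitted = harmonic_fit(t, y)
residual_rms = np.sqrt(np.mean((y - fitted) ** 2))
```

Departures of new observations from such a fitted seasonal model are what flag a change in wetland condition.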
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 1.15/1.16)

Presentation: National Mapping of Wetland Habitats in mainland France

Authors: Anis Guelmami, Nina Bègue, Sébastien Rapinel, Léa Panhelleux, Guillaume Gayet, Laurence Hubert-Moy, Hugo Potier
Affiliations: Tour du Valat, University of Rennes 2, PatriNat
Wetlands face increasing threats and degradation globally, with an estimated 35% decline in area between 1970 and 2015 (Darrah et al., 2019), rising to 48% in the Mediterranean region during the same period (MWO, 2018). In Europe, this loss is primarily driven by urban sprawl and agricultural expansion and intensification (Čížková et al., 2013). Conservation efforts have led to the designation of protected areas such as Ramsar and Natura 2000 sites, but their ecological effectiveness remains underexplored due to the absence of wetland-specific evaluation tools (Munguía & Heinen, 2021). Furthermore, wetland monitoring is often incomplete and imprecise, focusing mainly on heritage sites through field observations (Perennou et al., 2012; Darrah et al., 2019) or relying on coarse-scale global land use and land cover (LULC) maps. Such maps frequently fall short for wetland monitoring, as they fail to provide a national picture of the total extent of wetland habitats (Perennou et al., 2012; Perennou et al., 2018; Rapinel et al. 2023). This study, focused on mainland France, aims to address these gaps by (i) developing a detailed, fine-scale national map of wetland habitats across mainland France, (ii) characterizing and differentiating wetland habitats using advanced EO techniques combined with archival field data for robust modeling, and (iii) providing spatial data to support decision-making for sustainable wetland management, and habitat and biodiversity conservation and restoration. To achieve these objectives, the proposed approach combines advanced Earth Observation (EO) techniques alongside underutilized archival field plots to produce a detailed 10 m resolution national map of wetland habitats across mainland France.
It is built upon an existing methodology to map wetland ecosystem extent across mainland France (Rapinel et al., 2023) and integrates spectral variables derived from Sentinel-2 time series, bioclimatic variables generated using the Worldwide Bioclimatic Classification System (Perrin et al., 2020), phenological variables derived from NDVI indices (Orusa et al., 2023; Peng et al., 2023), and topo-hydrological metrics such as the Topographic Wetness Index (TWI), Multi-Resolution Topographic Position Index (MRTPI), and Vertical Distance to Channel network (VDC). Geological data from the national geodatabase, as well as surface water dynamics derived from global datasets, are also integrated. Additionally, vegetation height data, produced by Lang et al. (2020) and freely available globally on Google Earth Engine, are used to refine habitat characterization. The study uniquely utilizes underused national archives, including historical biodiversity surveys and local wetland inventories, to calibrate and validate habitat classification models. This is enhanced by new field data collected during the implementation of this study, ensuring recent and accurate inputs for algorithm processing. The methodology involved processing dense Sentinel-2 time series from 2017 to 2022 to create pseudo-annual composites for robust habitat discrimination. A machine-learning-based hierarchical Random Forest model was applied to classify habitats using the EUNIS nomenclature. Post-processing, including segmentation to smooth the classification rasters and expert validation, enhanced the reliability of the results and ensured the accurate identification and inclusion of rare or underrepresented wetland habitat types.
The study delivered a 10 m resolution map of wetland habitats at the national scale of mainland France, offering essential data for evaluating ecosystem functions, guiding sustainable management, prioritizing areas for wetlands preservation and/or restoration and informing policies and strategies related to water and land use planning. It marks a major step forward in national-scale wetland ecosystem assessment. Keywords: Wetland habitats, National mapping, Earth Observation, EUNIS, Sentinel-2, Field data archives.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 1.15/1.16)

Presentation: Mapping Tropical Wetlands Extent and Dynamics over 10 Years by ALOS-2 PALSAR-2

Authors: Ake Rosenqvist, Greg Oakes, Jessica Rosenqvist, Dr Pete Bunting, Dr Andy Hardy, Bruce Forsberg, Pedro Barbosa, Thiago Silva, Kazufumii Kobayashi, Takeo Tadono
Affiliations: solo Earth Observation (soloEO), Japan Aerospace Exploration Agency (JAXA), Aberystwyth University, soloEO, Inst. for Amazonia Research (INPA), Université du Quebec (UQAM), University of Stirling, Remote Sensing Technology Center (RESTEC)
Earth observation provides opportunities to address the information needs of the Ramsar Convention for the monitoring and reporting on key wetland indicators, including progress on the Sustainable Development Goals, where SDG Indicator 6.6.1 – Change in the extent of water-related ecosystems over time – is of particular relevance to the work presented here on mapping and monitoring major inland freshwater (floodplain) wetlands. Floodplain forests are a dominant ecosystem in meandering river basins with moderate topography, where they provide important habitats for aquatic flora and fauna, and critical ecosystem services for riverine communities. Seasonal inundation is a dominant environmental factor affecting floodplain forest ecosystems, and the timing, duration and amplitude of flooding vary spatially on the floodplain as a function of fluctuations in river stage height and topography. Floodplain forests sequester carbon as they grow, but are also significant sources of methane (CH4) and other trace gases essential to climate regulation. In river basins with low topography, floodplain forests can constitute more than 10% of the total basin area, e.g. corresponding to around 600,000 km2 in the Amazon Basin alone [1, 2]. While previous studies have mapped the maximum and minimum extents of inundation in the Amazon Basin with significant detail, what remains lacking is data on the temporal and spatial dynamics of the inundation patterns, both within years and between different years. Recent increases in the intensity and duration of droughts due to climate change are threatening the integrity of entire floodplain ecosystems, as manifested by the historically low water levels in the Amazon Basin in 2023 and 2024, and a synoptic view of how the flooding patterns across the basin are changing is needed. 
L-band SAR has a long, proven track record in mapping and detection of forest inundation, thanks to the capacity of the long (23.5 cm) wavelength signal to penetrate a forest canopy and interact with the ground or a water surface below. As part of JAXA’s systematic acquisition strategies for ALOS PALSAR and ALOS-2 PALSAR-2, L-band SAR data have been acquired across the entire pan-tropical zone in the Americas, Africa and Asia-Pacific on a regular (every 6 weeks) basis since 2007, with additional historical coverage by JERS-1 SAR available from the 1990s. Within the project described here, ALOS-2 PALSAR-2 wide-swath (ScanSAR) data acquired between 2014 and 2024 over the Amazon basin were used, corresponding to about 90 coverages over the 8 million km2 basin. The data were processed by JAXA to CEOS Analysis Ready Data (CEOS-ARD) Normalised Radar Backscatter (NRB) format, which includes full geometric and radiometric terrain corrections, and provided as HH and HV polarisation gamma-0 backscatter as 1 x 1 degree image tiles at 50 m pixel spacing. The image classification method employed, termed “RadWet-L”, was developed by Aberystwyth University. RadWet-L uses the PALSAR-2 tile data together with ancillary datasets, such as hydrological terrain (HAND) metrics, DEMs and land cover maps, to automatically generate training data for open water and inundated vegetation. These data are then used to train an XGBoost machine learning classifier, which is subsequently applied to serial PALSAR-2 tiles across the area of interest. The RadWet-L algorithm and methods for proxy validation are described in [3]. The RadWet-L software is lightweight and transferable, and was run as a Docker image on a 4-core laptop computer. Processing of a single observation cycle over the Amazon Basin, typically comprising some 800-1000 PALSAR-2 image tiles, takes about 5 hours. 
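The automated training-data step can be sketched as follows: ancillary layers such as HAND are thresholded to label only unambiguous pixels, which then train a classifier applied to each tile. This is a minimal, dependency-free illustration of that idea; the thresholds and class codes are assumptions, and the simple rule stands in for the XGBoost classifier described in the abstract.

```python
# Hedged sketch of automated training-label generation for inundation
# classes: only confidently identifiable pixels are labelled (low-lying
# and very dark backscatter -> open water; low-lying and double-bounce
# bright -> inundated vegetation). Threshold values are illustrative.
OPEN_WATER, INUNDATED_VEG = 1, 2

def auto_label(hh_db, hand_m):
    """Label only unambiguous pixels; ambiguous ones stay unlabelled."""
    if hand_m < 5.0 and hh_db < -20.0:   # low-lying, specular: open water
        return OPEN_WATER
    if hand_m < 5.0 and hh_db > -8.0:    # low-lying, double-bounce bright
        return INUNDATED_VEG
    return None                          # excluded from training

# (HH backscatter in dB, HAND in metres) for three example pixels
pixels = [(-22.0, 1.0), (-6.0, 2.0), (-14.0, 40.0)]
labels = [auto_label(hh, hand) for hh, hand in pixels]
print(labels)  # [1, 2, None]
```

Leaving ambiguous pixels unlabelled is what lets the subsequent classifier, rather than the thresholds, draw the actual decision boundary.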
The classified tiles featured three main classes: inundated vegetation, open water, and an “other” class which includes masked areas and anything not classified as either of the first two classes. The tiles were subsequently mosaicked into 90 basin-wide inundation extent maps, one for each PALSAR-2 observation cycle. The nine inundation extent maps for each year were in turn merged into annual inundation duration maps, where the value of each pixel represents the number of times that geographic location was classified as inundated vegetation in the annual time series. The inundation duration maps thus describe both the spatial and the temporal characteristics of inundation and constitute a unique new data source for forested wetlands. Potential applications include, among others, assessment of inundation patterns and change trends, ecosystem stratification and habitat mapping, and input to regional models for CH4 and other trace gas emissions, thus addressing the information needs of international conventions and frameworks such as Ramsar, the CBD GBF and, not least, SDG Indicator 6.6.1. At the time of writing (30 Nov 2024) the Amazon data processing has just been completed and analysis of the extent and duration data is about to begin. Next steps in the project include extension to other significant wetland areas, including the Congo river basin, the Pantanal and Sudd wetlands, and forested wetlands in Southeast Asia, as well as extending the time series through the use of ALOS PALSAR data from 2007-2010. Initial results from these studies will be presented at the session. Acknowledgement: This work has been undertaken within the framework of the JAXA Kyoto & Carbon Initiative. The ALOS-2 PALSAR-2 ScanSAR data have been provided by JAXA EORC. References: [1] Hess L.L., Melack J.M., Affonso A.G., Barbosa C., Gastil-Buhl M. and Novo E.M.L.M. Wetlands of the Lowland Amazon Basin: Extent, Vegetative Cover, and Dual-season Inundated Area as Mapped with JERS-1 Synthetic Aperture Radar. Wetlands (2015) 35:745–756. doi:10.1007/s13157-015-0666-y [2] Rosenqvist J., Rosenqvist A., Jensen K., and McDonald K. Mapping of Maximum and Minimum Inundation Extents in the Amazon Basin 2014–2017 with ALOS-2 PALSAR-2 ScanSAR Time-Series Data. Remote Sens. 2020, 12, 1326. doi.org/10.3390/rs12081326 [3] Oakes, G.; Hardy, A.; Bunting, P.; Rosenqvist, A. RadWet-L: A Novel Approach for Mapping of Inundation Dynamics of Forested Wetlands Using ALOS-2 PALSAR-2 L-Band Radar Imagery. Remote Sens. 2024, 16, 2078. https://doi.org/10.3390/rs16122078
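The merging of per-cycle classifications into an annual inundation duration map reduces, per pixel, to counting occurrences of the inundated-vegetation class across that year's observation cycles. A minimal sketch, with assumed integer class codes (these codes are illustrative, not from the RadWet-L outputs):

```python
# Hedged sketch: per-pixel inundation duration from a year of classified
# maps. Class codes (0 = other, 1 = open water, 2 = inundated vegetation)
# are assumptions for illustration.
INUNDATED_VEG = 2

def inundation_duration(annual_maps):
    """Count, per pixel, how many cycles were classified as inundated veg."""
    rows, cols = len(annual_maps[0]), len(annual_maps[0][0])
    duration = [[0] * cols for _ in range(rows)]
    for cycle_map in annual_maps:
        for r in range(rows):
            for c in range(cols):
                if cycle_map[r][c] == INUNDATED_VEG:
                    duration[r][c] += 1
    return duration

# Toy example: 3 observation cycles over a 2x2 tile.
cycles = [
    [[2, 0], [1, 2]],
    [[2, 2], [0, 2]],
    [[0, 2], [1, 2]],
]
print(inundation_duration(cycles))  # [[2, 2], [0, 3]]
```

With nine cycles per year, the pixel values range from 0 (never inundated) to 9 (inundated in every cycle), which is exactly the temporal signal the duration maps carry.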
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.14)

Session: F.05.05 Copernicus4regions: meet the community of the Copernicus regional and local users and providers

The joint ESA/EC/NEREUS Copernicus4regions initiative has succeeded in establishing a lively community over the years, which meets regularly at events at the European Parliament and at NEREUS regional symposia to showcase the regional experiences, including to high-level political representatives (see https://www.nereus-regions.eu/copernicus4regions/). Currently, NEREUS is working on the selection of new Copernicus User Stories, to refresh and enrich its broad collection of examples, as well as on the organisation of events at the European Parliament and at the Committee of the Regions. This early breakfast provides a chance to meet the community and discover inspiring new user stories that showcase the impact of using Copernicus on citizens and on the work of regional administrations.

Moderators:


  • Alessandra Tassa - ESA
  • Roya Ayazi - NEREUS
  • Margarita Chrysaki - NEREUS

Speakers:


  • Macjek Mysliviek - Space Agency
  • Marcel Simoner - UIV Urban Innovation Vienna GmbH
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.11/0.12)

Session: A.01.05 Ozone and its precursors through the Atmosphere: Advances in understanding and methods

Ozone is a fundamentally important constituent of the atmosphere. In the troposphere it is a greenhouse gas and a pollutant that is detrimental to human health and to crop and ecosystem productivity; tropospheric data are available from ozonesondes, aircraft, and satellites, but high levels of uncertainty and bias remain. In the stratosphere, ozone protects the biosphere from UV radiation, and long-term observations from satellites and the ground have confirmed that the long-term decline of stratospheric ozone was successfully stopped as a result of the Montreal Protocol. Future stratospheric ozone levels depend on many factors, including the latitude domain and interactions with the troposphere, and potentially the mesosphere.

This session is dedicated to the presentation of methods and results for furthering the understanding of the distribution of ozone and its precursors through the atmosphere using remote sensing techniques, with particular emphasis on advanced methods with past and current missions such as OMI and Sentinel-5P, and on preparing for future missions such as ALTIUS and Sentinels 4 & 5 and their synergies with other missions.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.11/0.12)

Presentation: Machine Learning to Construct Daily, Gap-Free, Long-Term Stratospheric Trace Gases Data Sets

Authors: Sandip Dhomse, Professor Martyn Chipperfield
Affiliations: University Of Leeds
Understanding the complex relationship between ozone and the various trace gases influencing its concentrations necessitates continuous and reliable datasets. However, obtaining comprehensive long-term profiles for key trace gases is a significant challenge. Our research addresses this issue by merging data from a Chemical Transport Model (CTM) and satellite instruments (HALOE and ACE-FTS). This integration results in the creation of daily, gap-free datasets for six crucial gases: ozone (O3), methane (CH4), hydrogen fluoride (HF), water vapour (H2O), hydrogen chloride (HCl), and nitrous oxide (N2O) from 1991 to 2021. Chlorofluorocarbons (CFCs) are a critical source of chlorine that controls stratospheric ozone losses. Currently, ACE-FTS is the only instrument providing sparse but daily measurements of these gases. Monitoring changes in these ozone-depleting substances, which are now banned, helps assess the effectiveness of the Montreal Protocol. We have initiated the construction of gap-free stratospheric profile data for CFC-11 as a subsequent step. We use an XGBoost regression model to estimate the relationship between various tracers in a CTM and the differences between the CTM output field and the observations, assuming all errors are due to the CTM setup. Once the regression model is trained on observational collocations, it is used to estimate biases for all the CTM grid points. To enhance accuracy, we employed various regression models and found that XGBoost regression outperforms the other methods. ACE-FTS v5.2 data (2004-present) are used to train (70%) and test (30%) the XGBoost model. Our results demonstrate excellent agreement between the constructed profiles and satellite measurement-based datasets. Biases in the TCOM data sets, when compared to evaluation profiles, are consistently below 10% at mid and high latitudes and 50% at low latitudes, across the stratosphere. 
The constructed daily zonal mean profile datasets, spanning altitudes from 15 to 60 km (or pressure levels from 300 to 0.1 hPa), are publicly accessible through Zenodo repositories. CH4: https://doi.org/10.5281/zenodo.7293740 N2O: https://doi.org/10.5281/zenodo.7386001 HCl: https://doi.org/10.5281/zenodo.7608194 HF: https://doi.org/10.5281/zenodo.7607564 O3: https://doi.org/10.5281/zenodo.7833154 H2O: https://doi.org/10.5281/zenodo.7912904 CFC-11: https://doi.org/10.5281/zenodo.11526073 CFC-12: https://doi.org/10.5281/zenodo.12548528 COF2: https://doi.org/10.5281/zenodo.12551268 In an upcoming iteration, we are enhancing the algorithm (e.g. hyperparameter tuning, feature engineering, neural networks) and adding more species to the current setup. We believe these data sets will provide valuable insights into the dynamics of stratospheric trace gases, furthering our understanding of their behaviour and impact on the ozone layer. References: Dhomse, S. S., et al.: ML-TOMCAT: machine-learning-based satellite-corrected global stratospheric ozone profile data set from a chemical transport model, Earth Syst. Sci. Data, 13, 5711–5729, https://doi.org/10.5194/essd-13-5711-2021, 2021. Dhomse, S. S. and Chipperfield, M. P.: Using machine learning to construct TOMCAT model and occultation measurement-based stratospheric methane (TCOM-CH4) and nitrous oxide (TCOM-N2O) profile data sets, Earth Syst. Sci. Data, 15, 5105–5120, https://doi.org/10.5194/essd-15-5105-2023, 2023.
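The bias-learning workflow described above (train a regressor on model-minus-observation differences at collocations, then apply it everywhere) can be sketched without any ML dependency. In the sketch below, a trivial nearest-neighbour estimator stands in for XGBoost, and all variable names and numbers are illustrative, not the authors' data or code.

```python
# Hedged sketch of the TCOM-style bias correction: learn the mapping
# (CTM tracer value -> CTM-minus-observation difference) at collocations,
# then subtract the predicted bias from the CTM field at every grid point.
# A nearest-neighbour rule stands in for the XGBoost regressor.
import random

def train_test_split(pairs, train_frac=0.7, seed=0):
    """Random 70/30 split, as used for the ACE-FTS collocations."""
    rng = random.Random(seed)
    shuffled = pairs[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

class NearestNeighbourBias:
    """Stand-in regressor: predict the bias of the closest training point."""
    def fit(self, features, biases):
        self.features, self.biases = features, biases
        return self
    def predict(self, x):
        best = min(range(len(self.features)),
                   key=lambda i: abs(self.features[i] - x))
        return self.biases[best]

# Collocations: (CTM ozone value, CTM - observation difference), toy numbers
collocations = [(1.0, 0.10), (2.0, 0.12), (3.0, 0.20), (4.0, 0.25)]
train, test = train_test_split(collocations)
model = NearestNeighbourBias().fit([f for f, _ in train], [b for _, b in train])

# Corrected field = CTM value minus the predicted CTM-observation bias.
ctm_value = 2.1
corrected = ctm_value - model.predict(ctm_value)
```

The key assumption carried over from the abstract is that all disagreement is attributed to the CTM, so the learned function is a pure correction of the model field.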
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.11/0.12)

Presentation: Total Column Ozone Retrieval Using the BTS Array Spectroradiometer and a Custom Double Ratio Technique

Authors: Dr. Luca Egli, Dr. Julian Gröbner, Dr. Eliane Maillard Barras
Affiliations: Physikalisch Meteorologisches Observatorium and World Radiation Center, Davos, Federal Office of Meteorology and Climatology, MeteoSwiss
Over the past five years, PMOD/WRC has developed and extensively validated a groundbreaking system named Koherent for the precise measurement of total column ozone (TCO). This innovative system is built around a compact, cost-efficient, and low-maintenance commercial array spectroradiometer, offering a robust solution for long-term atmospheric monitoring. During its five-year operational period, Koherent demonstrated exceptional reliability, achieving over 99% data acquisition uptime. The system employs a BTS-2048-UV-S-F array spectroradiometer developed by Gigahertz-Optik GmbH. This spectroradiometer is integrated with an optical fiber connected to a lens-based telescope mounted on a sun tracker. This configuration enables the measurement of direct ultraviolet (UV) irradiance in the wavelength range of 305–345 nm, a critical band for ozone studies. A key innovation of Koherent is the implementation of the Custom Double Ratio (CDR) technique, a novel algorithm that utilizes four specifically selected wavelengths from the spectral data to derive TCO. This algorithm is calibrated against ultraviolet reference instruments, achieving accuracy comparable to that of a single-monochromator Brewer spectrophotometer. Koherent’s flexibility is enhanced by its ability to be field-calibrated during campaigns using reference instruments such as Brewer spectrophotometers. Through a two-point calibration method combined with adjustments to the absorption coefficient and extraterrestrial constant, the system demonstrates excellent agreement with existing TCO monitoring networks. Notably, a five-year comparison of TCO measurements between Koherent and Brewer 156 in Davos, Switzerland, revealed an impressive average agreement within 0.05% ± 0.88%. Beyond ozone concentration, the CDR algorithm enables the determination of effective ozone layer temperature with a daily average precision of 3 K, utilizing parameterizations derived from historical balloon soundings. 
This capability underscores Koherent's multifaceted utility. The integration of a state-of-the-art instrument with an advanced retrieval algorithm makes Koherent a promising candidate for the next generation of TCO monitoring systems. Its high reliability, accuracy, and operational efficiency position it as a valuable tool for global atmospheric studies and long-term environmental monitoring.
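The double-ratio retrieval principle behind the CDR technique can be illustrated with a short Beer-Lambert sketch: a zero-sum weighted combination of log irradiances at four wavelengths cancels wavelength-independent attenuation, and TCO follows from a calibrated extraterrestrial constant and an effective ozone absorption coefficient. All numerical values below (wavelengths, weights, R0, ALPHA) are illustrative assumptions, not Koherent's actual calibration.

```python
# Hedged sketch of a Brewer-style double-ratio ozone retrieval, standing in
# for the CDR algorithm: TCO = (R0 - R) / (alpha * mu), where R is a
# zero-sum weighted sum of log irradiances at four wavelengths, R0 the
# extraterrestrial constant and mu the ozone air mass factor.
import math

WEIGHTS = {305.5: 1.0, 311.4: -0.5, 316.8: -2.2, 320.1: 1.7}  # sums to 0
ALPHA = 0.34   # effective differential ozone absorption, assumed value
R0 = 1.80      # extraterrestrial double-ratio constant, assumed calibration

def double_ratio(irradiance):
    """Weighted combination of log irradiances at the four wavelengths."""
    return sum(w * math.log10(irradiance[wl]) for wl, w in WEIGHTS.items())

def tco_du(irradiance, mu):
    """Total column ozone in Dobson units for ozone air mass factor mu."""
    ozone_atm_cm = (R0 - double_ratio(irradiance)) / (ALPHA * mu)
    return ozone_atm_cm * 1000.0  # 1 atm-cm = 1000 DU

flat = {wl: 1.0 for wl in WEIGHTS}   # toy spectrum with unit irradiance
tco = tco_du(flat, 2.0)              # mu = 2, roughly SZA ~ 60 degrees
```

Because the weights sum to zero, any attenuation that is flat across the four wavelengths (e.g. aerosol, calibration drift) drops out of the double ratio, which is the point of the technique.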
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.11/0.12)

Presentation: Ozone Recovery from Merged Observational Data and Model Analysis (OREGANO)

Authors: Mark Weber, Brian Auffarth, Carlo Arosio, Alexei Rozanov, Andreas Richter, Martyn Chipperfield, Sandip Dhomse, Wuhu Feng, Viktoria Sofieva, Monika Szelag, Andreas Chrysanthou, Kleareti Tourpali, Edward Malina
Affiliations: Institute of Environmental Physics (IUP), University of Bremen, School of Earth and Environment, University of Leeds, Finnish Meteorological Institute (FMI), Aristotle University Thessaloniki, ESA ESRIN
Stratospheric ozone (the "ozone layer") protects the biosphere from harmful ultraviolet (UV) radiation. It is expected to recover due to the Montreal Protocol signed in 1987 and its Amendments regulating the phase-out of ozone-depleting substances (ODS). The amount of stratospheric halogen (mainly bromine and chlorine) released by ODSs reached its maximum abundance in the middle of the 1990s. Observations from satellites and the ground confirmed that the long-term decline of stratospheric ozone was successfully stopped. Future stratospheric ozone levels depend not only on changes in ODS but also on changes in greenhouse gases (GHG) and possibly stratospheric aerosols. These modify both the chemistry and dynamics (transport, circulation) of ozone. The rate of ozone recovery thus depends on the geographic region and altitude. According to most chemistry-climate models, ozone in some altitude domains, like the lower tropical stratosphere, will likely continue to decline. At middle latitudes, the current trends in lower stratospheric ozone remain highly uncertain, in part due to larger uncertainties in observational data and larger year-to-year variability in ozone. A clear sign of ozone recovery is evident in the upper stratosphere. The major goal of the OREGANO project is to advance our understanding of ozone recovery using a combination of observations and model analyses. 
The following topics will be highlighted in this presentation:


  • Long-term ozone column and profile trends up to the end of 2024 from models and observations, in support of the upcoming WMO/UNEP Ozone Assessment;
  • Impact of atmospheric dynamics and chemistry on polar and extrapolar ozone;
  • Tropospheric ozone trends in support of the IGAC Tropospheric Ozone Assessment Report Phase 2 (TOAR-2);
  • Role of tropospheric ozone in column ozone trends;
  • Evaluation of the bromine monoxide - chlorine monoxide (BrO-ClO) cycle using nadir BrO and chlorine dioxide (OClO) observations;
  • Impact of aerosol and GHG changes on stratospheric ozone trends.

Recommendations for future satellite missions and programs will be made to maintain continued ozone monitoring.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.11/0.12)

Presentation: Extension of the S5P-TROPOMI CCD tropospheric ozone retrieval to mid-latitudes

Authors: Swathi Maratt Satheesan, Kai-Uwe Eichmann, Mark Weber
Affiliations: University of Bremen
Tropospheric ozone, a key atmospheric pollutant and greenhouse gas, shows significant spatio-temporal variability on seasonal, inter-annual, and decadal scales, posing challenges for satellite observation systems. Traditional methods like the Convective Cloud Differential (CCD) and Cloud Slicing Algorithms (CSA) are effective for Tropospheric Column Ozone (TCO) retrieval but are typically restricted to the tropical region (20°S-20°N). The CCD approach has been successful with satellite sensors like Aura OMI, MetOp GOME-2, and Sentinel-5 Precursor TROPOMI. In this study, we present the first application of the CCD retrieval method outside the tropical region, introducing CHORA-CCD (Cloud Height Ozone Reference Algorithm-CCD) to retrieve TCO from TROPOMI in the mid-latitudes. The approach uses a local cloud reference sector (CLCD, CHORA-CCD Local Cloud Decision) to estimate the stratospheric (above-cloud) column ozone (ACCO), which is then subtracted from the total column under clear-sky scenes to determine TCO. This method minimizes the impact of stratospheric ozone variations. An iterative process automatically selects an optimal local cloud reference sector around each retrieval grid point, varying the radius from 60 to 600 km to estimate the mean TCO. Due to the prevalence of low-level clouds in mid-latitudes, the estimation of TCO is constrained to the column up to a reference altitude of 450 hPa. In cases where cloud-top heights in the local cloud sector are variable, an alternative approach is introduced to directly estimate the ACCO down to 450 hPa using Theil-Sen regression. This method allows for the combination of the CCD approach with the CSA. The algorithm dynamically selects between CCD and the Theil-Sen method for ACCO estimation based on an analysis of cloud characteristics. The CLCD algorithm is further optimised by incorporating a homogeneity criterion for total ozone, addressing potential inhomogeneities in stratospheric ozone. 
Monthly averaged CLCD-TCOs for the time period from 2018 to 2022 were calculated from TROPOMI for the mid-latitudes (60°S–60°N). The accuracy of the CLCD algorithm was assessed by comparing the retrieved TCO with spatially collocated HEGIFTOM-SHADOZ/WOUDC/NDACC ozonesonde data from thirty-two stations. The validation results demonstrate that TCO retrievals at 450 hPa using the CLCD method show good agreement with the ozonesonde measurements at most stations. This study demonstrates the advantages of using a local cloud reference sector in mid-latitudes, providing an important basis for systematic applications in current and future geostationary satellite missions.
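The core arithmetic of the cloud-differential retrieval described above is a subtraction: the above-cloud column ozone (ACCO), estimated from cloudy pixels in a local reference sector, is removed from the clear-sky total column. The sketch below illustrates that idea together with the iterative radius search; the distance formula, thresholds, and field names are simplifying assumptions, not the CHORA-CCD implementation.

```python
# Hedged sketch of the convective-cloud-differential (CCD) idea: ACCO is
# the mean total column over cloudy pixels in a local reference sector,
# found by growing the search radius until enough samples exist; TCO is
# the clear-sky total column minus ACCO. Values are illustrative.
import math

def within(p, q, radius_km):
    # Simple equirectangular distance; adequate for a local-sector sketch.
    dlat = p[0] - q[0]
    dlon = (p[1] - q[1]) * math.cos(math.radians(p[0]))
    return 111.0 * math.hypot(dlat, dlon) <= radius_km

def ccd_tco(grid_point, clear_total, cloudy_obs, radii_km=(60, 300, 600),
            min_samples=3):
    """TCO at grid_point using the smallest radius with enough cloud pixels."""
    for radius in radii_km:
        sector = [o["total_o3"] for o in cloudy_obs
                  if within(grid_point, o["pos"], radius)]
        if len(sector) >= min_samples:
            acco = sum(sector) / len(sector)   # above-cloud column ozone
            return clear_total - acco          # tropospheric column ozone
    return None  # no suitable cloud reference sector found

# Toy data: five cloudy pixels east of the grid point, in Dobson units.
cloudy = [{"pos": (45.0, 10.0 + 0.5 * i), "total_o3": 300.0 + i}
          for i in range(5)]
print(ccd_tco((45.0, 10.0), clear_total=330.0, cloudy_obs=cloudy))  # 28.0
```

The abstract's Theil-Sen alternative replaces the simple mean here with a robust regression of column ozone against cloud-top height when cloud heights in the sector vary; that refinement is omitted from this sketch.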
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.11/0.12)

Presentation: Synergistic Use of Limb and Nadir Observations for Studying Stratospheric Ozone Intrusions in the Himalayan Region.

Authors: Dr Liliana Guidetti, Dr Erika Brattich, Dr Simone Ceccherini, Dr Michaela Hegglin, Patrick Joeckel, Dr Xiaodan Ma, Piera Raspollini, Dr Cecilia Tirelli, Ing Nicola Zoppetti, Ugo Cortesi
Affiliations: IFAC-CNR, Università di Bologna, Forschungszentrum Jülich GmbH ICE-4, Institut fuer Physik der Atmosphaere - DLR
Stratospheric intrusions play a crucial role in the exchange of ozone between the stratosphere and troposphere, with significant implications for surface air quality, radiative forcing, and climate dynamics. These events are particularly relevant in regions like the Himalayas, recognized as one of the main hotspots because of its unique topography and complex meteorological conditions. Despite their importance, our comprehension of stratospheric ozone intrusion processes and their impacts on tropospheric ozone variability is still limited, especially due to deficiencies and limitations in our observational networks and modeling techniques. Within this framework, satellite remote sensing observations can help address these limitations by providing extensive spatial and temporal coverage, filling the gaps left by ozonesondes, which, while offering high vertical resolution, suffer from sparse coverage. This study investigates the potential of combining limb-viewing measurements from the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) with nadir-viewing observations from the Infrared Atmospheric Sounding Interferometer (IASI) for the detection and characterization of stratospheric intrusion events. The limb observation geometry of MIPAS provides high vertical resolution, which is particularly critical for capturing the fine-scale ozone gradients at the tropopause. On the other hand, the nadir-viewing capabilities of IASI ensure broad horizontal coverage, which complements the spatial limitations of MIPAS. By integrating these two distinct datasets, we aim to harness their complementary strengths, enabling a more comprehensive analysis of stratospheric ozone intrusions. These datasets are fused using the Complete Data Fusion (CDF) method, an algebraic approach rooted in optimal estimation theory. This method harmonizes the individual retrievals from MIPAS and IASI into a unified dataset. 
Our study begins by identifying a specific intrusion event that occurred within the overlapping operational period of the two instruments (2008–2012). The fused dataset is then validated with two methodologies: comparisons with model reanalysis profiles, and independent ozone measurements from radiosonde profiles, providing insights into its accuracy and reliability. We finally exploit the newly fused dataset in combination with meteorological and composition variables from different models, including the ERA5 and CAMS reanalyses and EMAC model simulations. These comparisons highlight the potential of the fused dataset to bridge observational and model-based approaches, offering a more robust understanding of these phenomena.
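The weighting at the heart of optimal-estimation fusion can be shown in a few lines. The sketch below uses diagonal error covariances and omits the averaging-kernel terms that the full Complete Data Fusion method includes, so it is a simplified illustration of the principle, not the CDF algorithm itself; the profile values are toy numbers.

```python
# Hedged sketch of inverse-covariance-weighted fusion of two retrievals,
# applied level by level with diagonal covariances:
#   x_f = (S1^-1 + S2^-1)^-1 (S1^-1 x1 + S2^-1 x2)
# The full CDF method additionally accounts for averaging kernels.
def fuse_profiles(x1, var1, x2, var2):
    fused, fused_var = [], []
    for a, va, b, vb in zip(x1, var1, x2, var2):
        w = 1.0 / va + 1.0 / vb
        fused.append((a / va + b / vb) / w)
        fused_var.append(1.0 / w)
    return fused, fused_var

# Toy two-level ozone profiles (ppmv): a precise limb-like retrieval and a
# less precise nadir-like one.
limb, limb_var = [5.0, 6.0], [0.25, 0.25]
nadir, nadir_var = [5.4, 6.4], [1.0, 1.0]
print(fuse_profiles(limb, limb_var, nadir, nadir_var))
```

Note how the fused value sits closer to the lower-variance (limb-like) input, and the fused variance is smaller than either input variance, which is the benefit the abstract seeks from combining MIPAS and IASI.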
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Room 0.11/0.12)

Presentation: Geostationary Satellites Total Ozone Observations: First Results and Ground-based Validation Efforts for TEMPO and GEMS

Authors: Chris McLinden, Xiaoyi Zhao, Debora Griffin, Vitali Fioletov, Xiong Liu, Junsung Park, Irina Petropavlovskikh, Tom Hanisco, James Szykman, Lukas Valin, Alexander Cede, Martin Tiefengraber, Manuel Gebetsberger, Itaru Uesato, Xiangdong Zheng, Soi Ahn, Limseok Chang, Won-Jin Lee, Jae Hwan Kim, Kanghyun Baek, Alberto Redondas, Masatomo Fujiwara, Ting Wang, Sum Chi Lee
Affiliations: Environment and Climate Change Canada, Harvard & Smithsonian Astrophysical Observatory, NOAA, NASA, US-EPA, LuftBlick, Japan Meteorological Agency, Chinese Meteorological Agency, National Institute of Environmental Research, Pusan National University, State Meteorological Agency, Hokkaido University, Institute of Atmospheric Physics Chinese Academy of Sciences
The Tropospheric Emissions: Monitoring of Pollution (TEMPO) satellite instrument, launched in April 2023, is the first geostationary atmospheric monitoring instrument over North America. It forms part of a global geostationary constellation with Asia’s Geostationary Environment Monitoring Spectrometer (GEMS) launched in 2020 and Europe’s upcoming Sentinel-4. TEMPO and GEMS offer hourly, high-resolution air pollution and ozone monitoring from space, improving on the once-daily observations of instruments like the TROPOspheric Monitoring Instrument (TROPOMI). This study presents the analysis of TEMPO’s total ozone data, demonstrating TEMPO’s ability to observe sudden changes in ozone (and thus UV index). Further, the first validation of TEMPO and GEMS ozone is presented using ground-based networks (Brewer, Dobson, and Pandora). Results show good correlations between the geostationary datasets and ground observations but also highlight latitude-dependent discrepancies (-2% to 2% for TEMPO, -1% to -3% for GEMS) and solar zenith angle (SZA) dependency issues. Both TEMPO and GEMS data require SZA corrections, though the magnitude of these corrections differs. After applying SZA corrections, both instruments show good agreement with ground-based measurements in capturing diurnal variations. For latitude dependency, TEMPO data can be effectively corrected using a viewing zenith angle (VZA) empirical approach, as it lacks pronounced seasonal variation. In contrast, GEMS exhibits latitude dependency with a seasonal component, necessitating more advanced correction methods in future work. Findings are further validated using TROPOMI and reanalysis data sets (ECMWF’s ERA5 and NASA GMAO’s MERRA-2). Overall, the data quality (accuracy and precision) from these geostationary satellite instruments appears to be good, indicating their potential for reliable ozone and UV index monitoring. 
While these new satellite instruments can observe ozone well, some issues affecting data quality can be identified, especially when they are compared to the more established polar-orbiting satellite instruments.
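An empirical angle-dependent correction of the kind described for TEMPO can be sketched simply: fit the satellite-minus-ground percentage difference as a function of VZA over the matchups, then remove the fitted bias from the satellite columns. A linear fit is an illustrative choice here, not the authors' exact functional form, and the matchup numbers are toy values.

```python
# Hedged sketch of an empirical VZA bias correction: fit a straight line
# to (VZA, satellite-minus-ground % difference) matchups, then divide the
# fitted bias out of the satellite total ozone columns.
def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def vza_corrected(o3_sat, vza, slope, intercept):
    """Remove the fitted percentage bias at this viewing zenith angle."""
    bias_percent = slope * vza + intercept
    return o3_sat / (1.0 + bias_percent / 100.0)

# Toy matchups: (VZA in degrees, satellite-ground difference in percent)
vza_obs = [20.0, 35.0, 50.0, 65.0]
diff_pct = [-1.0, 0.0, 1.0, 2.0]
m, b = fit_line(vza_obs, diff_pct)
print(round(vza_corrected(300.0, 50.0, m, b), 2))  # 297.03
```

Such a static correction works for TEMPO precisely because, per the abstract, its latitude dependency lacks a seasonal component; for GEMS the fit coefficients would themselves have to vary with season.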
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall K2)

Session: B.04.01 Satellite based terrain motion mapping for better understanding geohazards. - PART 1

Better understanding geohazards (such as landslides, earthquakes, volcanic unrest and eruptions, coastal lowland hazards and inactive mine hazards) requires measuring terrain motion in space and time, including at high resolution, with multi-year historical analysis and continuous monitoring. Several EO techniques can contribute depending on the context and the type of deformation phenomena considered, and some can provide wide-area mapping (e.g. thanks to Sentinel-1). Advanced InSAR or pixel offset tracking using radar imagery, including newly available missions with different sensing frequencies (e.g. L-band), can help provide relevant geoinformation. This is also the case with optical stereo-viewing and optical correlation techniques, including for wide-area mapping. There is a need to assess new EO techniques for retrieving such geoinformation both locally and over wide areas, and to characterise their limitations. New processing environments able to access and process large data stacks have increased user awareness, acceptance and adoption of EO, and have created opportunities for collaboration, including co-development and increased combination of data sources and processing chains. With this in mind, it is necessary to understand the agenda of geohazard user communities and the barriers to reaching their goals.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall K2)

Presentation: InSAR-based regional land subsidence risk assessment in the Emilia Romagna (Italy)

Authors: Roberta Bonì, Andrea Taramelli, Leila Goliraeisi, Dr. Francesca Cigna, Prof Pietro Teatini, Dr Roberta Paranunzio, Dr Claudia Zoccarato
Affiliations: Department of Science, Technology and Society (STS), University School for Advanced Studies (IUSS), Institute of Atmospheric Sciences and Climate (ISAC), National Research Council (CNR), Department of Civil, Environmental and Architectural Engineering (ICEA), University of Padua (UNIPD)
According to the Sendai Framework for Disaster Risk Reduction, risk is a function of the combined effects of hazards, the assets or individuals exposed to these hazards, and the vulnerability of those exposed elements. Priority 1 of this framework focuses on "Understanding Disaster Risk". The Sendai Framework for Disaster Risk Reduction 2015-2030 report highlights the growing exposure of people and assets across all countries, which is increasing at a rate that outpaces the reduction of vulnerability, leading to new risks and a consistent rise in socio-economic and environmental losses. Urban areas constructed in regions affected by groundwater pumping-induced subsidence may experience damage if they cannot support differential settlements beneath their foundations. Therefore, assessing the level of risk to buildings in these areas is crucial for enhancing current awareness and informing future urban planning. In this study, we present a new methodology for assessing land subsidence risk at the regional scale. This approach has been developed and tested in the Emilia Romagna region, located in the Po Plain, a sedimentary basin characterized by significant ground deformation, exhibiting high spatial and temporal variations due to both natural and anthropogenic factors. We utilize measurements of vertical and horizontal ground deformation obtained from Interferometric Synthetic Aperture Radar (InSAR) data, collected between 2018 and 2022 via Copernicus’ European Ground Motion Service, to calculate the hazards associated with differential subsidence. Additionally, for the exposure-vulnerability, global datasets, such as the Global Human Settlement Layer and the World Settlement Footprint (WSF) Evolution, along with a regional dataset provided by the Emilia Romagna Region, are employed to determine building types (i.e., residential versus non-residential), periods of construction, and building heights. 
The hazard map generated from land subsidence is then combined with the exposure-vulnerability map using a risk matrix to evaluate four risk levels, ranging from very low to very high (R1 to R4). The results of the proposed approach provide a basis for evaluating land subsidence risks in other urbanized areas vulnerable to this phenomenon, facilitating geohazard assessments and enhancing understanding of the associated risks. This work is funded by the European Union – Next Generation EU, component M4C2, in the framework of the Research Projects of Significant National Interest (PRIN) 2022 National Recovery and Resilience Plan (PNRR) Call, project SubRISK+ (grant id. P20222NW3E), 2023-2025 (CUP B53D23033400001).
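The risk-matrix combination described in this abstract can be sketched in a few lines. This is an illustrative example only: the 4x4 matrix entries and the 1-4 class indices below are assumptions for demonstration, not the classification actually used in the SubRISK+ study.

```python
# Illustrative sketch of combining a hazard class with an
# exposure-vulnerability class through a risk matrix.
# The matrix entries are assumed values, not the study's.

# Rows = hazard class (1-4), columns = exposure-vulnerability class (1-4);
# entries are risk levels R1 (very low) to R4 (very high).
RISK_MATRIX = [
    ["R1", "R1", "R2", "R2"],
    ["R1", "R2", "R2", "R3"],
    ["R2", "R2", "R3", "R4"],
    ["R2", "R3", "R4", "R4"],
]

def risk_level(hazard_class: int, ev_class: int) -> str:
    """Look up the risk level for 1-based hazard and exposure classes."""
    return RISK_MATRIX[hazard_class - 1][ev_class - 1]

print(risk_level(1, 1))  # lowest combination  -> R1
print(risk_level(4, 4))  # highest combination -> R4
```

Swapping in the study's actual class boundaries and matrix entries would change only the table, not the lookup logic applied per building or pixel.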
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall K2)

Presentation: Estimating Lava Extent and Quantifying Terrain Changes Using Daily Ground Track Repeat SAR Time Series: Fagradalsfjall Volcano, Reykjanes Peninsula, Iceland

Authors: Dr Melanie Rankl, Valentyn Tolpekin, Michael Wollersheim, Qiaoping Zhang, Angel Johnsy, Vincent Drouin
Affiliations: ICEYE OY, Icelandic Met Office
SAR Interferometry (InSAR) technology has been widely used in volcanic activity monitoring, ranging from eruption-risk assessment and location prediction using deformation InSAR, to monitoring eruptions and assessing the impact of lava flows using coherent change detection, to estimating the volume of hardened lava using topographic InSAR [1-2]. In this presentation we demonstrate the unique advantage of high-revisit SAR time series collected by ICEYE’s Daily Ground Track Repeat (DGTR) configuration for lava progression mapping during the major eruption of Fagradalsfjall Volcano, Reykjanes Peninsula, Iceland, in 2021. On March 19th, 2021, a volcanic eruption began in Geldingadalur, close to Mount Fagradalsfjall, ending an 800-year pause in eruptive activity on the Reykjanes Peninsula. Prior to the eruption, increased seismicity had been recorded on the Peninsula since mid-December 2019. The series of earthquakes culminated in a magnitude MW 5.64 event on February 24, 2021, followed by a high rate of deformation due to inflow of magma into a vertical dyke. However, deformation and seismicity gradually decreased prior to the onset of the eruption on March 19th, 2021 [3]. The eruption lasted until September 2021. It was relatively small compared to other eruptions in Iceland. However, it had a bigger impact and proved more challenging for local civil protection agencies than other eruptions of its size, as it was easily accessible and attracted 356,000 tourists [4]. In this presentation we focus on ICEYE’s DGTR SAR time series covering the period March 2021 until March 2022, with a total of 296 Spotlight acquisitions (1 m ground resolution). The SAR images collected by DGTR have matching geometry, radiance and phase and offer a unique advantage for frequent and persistent monitoring of natural catastrophes. With this time series we were able to monitor the full event in 2021 with a coherent acquisition almost every day. 
We show lava-extent changes derived from amplitude and coherent change detection, and changes in the volume of deposited lava at different phases of the eruption using InSAR-derived Digital Elevation Models (DEMs). The latter have been derived from multiple image pairs with a 1-day temporal baseline and varying interferometric baselines, and thus varying altitude of ambiguity. The most suitable interferometric baselines are presented and discussed. ICEYE DEM results are evaluated against aerial surveys performed multiple times in the same period. In addition, we present methods to mitigate the atmospheric phase delay on the phase signal. Without mitigating the atmospheric delay, significant errors can affect InSAR-derived measurements of height and velocity [5,6]. Thanks to the long DGTR SAR time series, we use Persistent Scatterer Interferometry and Distributed Scatterer Interferometry to extract the Atmospheric Phase Screen by modeling it over time [7,8]. We also show results from averaging multiple DEMs derived from many interferograms in order to correct for the atmospheric noise. [1] Zhang, Q., Tolpekin, V., Wollersheim, M., Angeluccetti, I., Ferner, D., Fischer, P. 2022. Daily Repeat Pass Spaceborne SAR Interferometry for La Palma Volcano Monitoring [Conference presentation abstract]. 10th International Conference on Agro-Geoinformatics and 43rd Canadian Symposium on Remote Sensing, July 11-14, 2022, Quebec City, Canada [2] Drouin, V., Tolpekin, V., Parks, M., Sigmundsson, F., Leeb, D., Strong, S., Hjartardóttir, Á, Geirsson, H., Einarsson, P., Ófeigsson, B., 2022. Conduits feeding new eruptive vents at Fagradajsfjall, Iceland, mapped by high-resolution ICEYE SAR satellite in a daily repeat orbit. EGU22 General Assembly, May 23-27, 2022, Vienna, Austria. [3] Sigmundsson, F., Parks, M., Hooper, A., Halldór Geirsson, Kristín S. 
Vogfjörd, Drouin, V., Ófeigsson, B., Hreinsdóttir, S., Hjaltadóttir, S., Jónsdóttir, K., Einarsson, P., Barsotti, S., Horálek, J., Ágústsdóttir, T., 2022. Deformation and seismicity decline before the 2021 Fagradalsfjall eruption. Nature 609, 523–528. https://doi.org/10.1038/s41586-022-05083-4. [4] Barsotti, S., Parks, M.M., Pfeffer, M.A., Óladóttir, B.A., Barnie, T., Titos, M., Jónsdóttir, K., Pedersen, G., Hjartardóttir, Á. R., Stefansdóttir, G., Johannsson, T., Arason, Þ., Gudmundsson, M. T., Oddsson, B., Þrastarson, R. H., Ófeigsson, B. G., Vogfjörd, K., Geirsson, H., Hjörvar, T., von Löwis, S., Petersen, G. N., Sigurðsson, E. M., 2023. The eruption in Fagradalsfjall (2021, Iceland): how the operational monitoring and the volcanic hazard assessment contributed to its safe access. Nat Hazards 116, 3063–3092. https://doi.org/10.1007/s11069-022-05798-7. [5] Zhiwei Li, Meng Duan, Yunmeng Cao, Minzheng Mu, Xin He, Jianchao Wei, 2022. Mitigation of time-series InSAR turbulent atmospheric phase noise: A review, Geodesy and Geodynamics, Volume 13, Issue 2, Pages 93-103. [6] Zhiwei Li, Yunmeng Cao, Jianchao Wei, Meng Duan, Lixin Wu, Jingxing Hou, Jianjun Zhu, 2019. Time-series InSAR ground deformation monitoring: Atmospheric delay modeling and estimating, Earth-Science Reviews, Volume 192, Pages 258-284. [7] Hanssen, R.F., 2001. Radar interferometry: data interpretation and error analysis (Vol. 2). Springer Science & Business Media. [8] Liu, S., Hanssen, R.F., Samiei-Esfahany, S., Hooper, A. and Van Leijen, F.J., 2011. Separating non-linear deformation and atmospheric phase screen (APS) for InSAR time series analysis using least-squares collocation. In Proceedings of the Advances in the Science and Applications of SAR Interferometry, ESA Fringe 2009, Workshop ESA.
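The "altitude of ambiguity" mentioned above, which controls how sensitive an interferogram is to topography, follows the standard repeat-pass relation h_a = λ R sin θ / (2 B⊥). A minimal sketch, with illustrative geometry values (assumed for demonstration, not ICEYE mission parameters):

```python
import math

def altitude_of_ambiguity(wavelength_m, slant_range_m, incidence_deg, perp_baseline_m):
    """Repeat-pass InSAR altitude of ambiguity: h_a = lambda * R * sin(theta) / (2 * B_perp)."""
    return (wavelength_m * slant_range_m * math.sin(math.radians(incidence_deg))
            / (2.0 * perp_baseline_m))

# Illustrative X-band geometry (assumed values):
# 3.1 cm wavelength, 600 km slant range, 30 deg incidence, 100 m baseline.
h_a = altitude_of_ambiguity(0.031, 600e3, 30.0, 100.0)
print(round(h_a, 1))  # height change (m) corresponding to one fringe -> 46.5
```

The trade-off discussed in the abstract is visible here: a larger perpendicular baseline shrinks h_a, making the DEM more sensitive to topography but also more prone to phase-unwrapping errors.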
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall K2)

Presentation: Studying the dike intrusion in the Fentale volcano (Ethiopia) via DInSAR and seismic data

Authors: Fernando Monterroso, Derek Keir, Alessandro La Rosa, Carolina Pagli, Hua Wang, Atalay Ayele, Elias Lewi, Martina Raggiunti, Manuela Bonano, Claudio De Luca, Pasquale Striano, Michele Manunta, Francesco Casu
Affiliations: IREA CNR, School of Ocean and Earth Science, University of Southampton, Department of Earth Sciences, University of Florence, Department of Earth Sciences, University of Pisa, College of Natural Resources and Environment, South China Agricultural University, Institute of Geophysics, Space Science and Astronomy (IGSSA), Addis Ababa University, National Institute for Geophysics and Volcanology (INGV), IREA CNR
In this study, we measured and modeled the ground displacements of the Earth's surface during an intense phase of magmatic and seismic unrest that occurred in September-November 2024 at the Fentale volcano in the northern Main Ethiopian Rift (MER), a young continental rift extending at 5 mm/yr. We used DInSAR displacement maps and seismic records during the Fentale intrusion in the MER to analyze the behavior of magma-assisted rifting at slow extension rates. The data indicate that a ~10 km-long dyke of magma was injected into the rift, causing it to widen by 2 meters in three and a half weeks. The upper-crustal diking began propagating northward along the rift from mid-September 2024 and was accompanied by seismic activity. Prior to the intrusion, from January 2021 to June 2024, the Fentale volcanic complex had experienced uplift of up to 6 cm. We used descending Sentinel-1 acquisitions (Track 79) from the European Copernicus Program and ascending data from the Italian COSMO-SkyMed constellation to measure the line-of-sight (LOS) surface displacement in the Fentale region. The interferograms were then inverted to quantify the spatio-temporal pattern of the dike intrusion and fault kinematics. DInSAR source modeling revealed an initial 3 km-long intrusion along the dike, starting 10 km northeast of the Fentale volcano. This intrusion was accompanied by low-intensity seismic activity. Between September 24th and October 18th, 2024, the deformation progressively expanded northward, becoming more complex. Our models suggest that the dike's opening increased to 2 meters and extended 8 km northward, accompanied by faulting above the dike and beyond its northern end. We complemented our models with analysis of global and local seismic recordings, which suggests that dike propagation and opening accelerated from late September through the first week of October. DInSAR models indicate that the dike opening accounts for over 90% of the total geodetic moment release. 
However, the models also require the presence of normal faults to fully explain the observed deformation field. This evidence suggests that rapid magma movements, occurring approximately every hundred years, play a significant role in continental separation even in young rifts extending at slow rates. Further developments will be presented at the meeting.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall K2)

Presentation: Geodetic Imaging as Monitoring Component of Santorini Volcano Observatory

Authors: Michael Foumelis, Dr Elena Papageorgiou, Costas Papazachos, Georgios E. Vougioukalakis, Christos Pikridas, Stylianos Bitharis, Jose Manuel Delgado Blasco, Giorgos Siavalas, Fabrice Brito, Fabrizio Pacini
Affiliations: Aristotle University Of Thessaloniki (AUTh), Hellenic Survey of Geology & Mineral Exploration (HSGME), Terradue s.r.l.
We are advancing efforts to complement Santorini Volcano's seismological, GNSS, and in-situ monitoring networks operated by ISMOSAV. The goal is to strengthen our near-real-time capabilities in detecting unrest signals and documenting events previously undetectable without proper instrumentation or systematically acquired Earth Observation data. We address this requirement using Interferometric SAR measurements through both platform-based and in-house automated processing schemes. Presently, the volcano appears to have been in post-unrest deflation since the 2011 unrest event. The ongoing development of a multi-parametric monitoring system, incorporating satellite, seismological, and in-situ observations, aims to robustly characterize the volcano's state. Our objective is to deepen our understanding of the volcano’s dynamics and enhance alert capabilities for potential unrest. The system incorporates a user-friendly web interface for result visualization and dissemination, tailored to individual information needs. The Institute for the Study and Monitoring of the Santorini Volcano (ISMOSAV) is a non-profit organization established in the summer of 1995. Its primary objective is to continue the operation of the Volcanological Observatory and volcano monitoring networks. The main goal of ISMOSAV is to advance volcanic research on the island, specifically focusing on achieving the most accurate assessment of volcanic phenomena and increasing the likelihood of precise prediction of any future volcanic eruption. ISMOSAV operates a comprehensive monitoring system, crucial for the timely prediction of a potential volcanic eruption. The permanent monitoring system (Fig. 
1) incorporates local seismic and GNSS networks, as well as in-situ instruments measuring CO2 emissions and temperature at various depths, established and maintained mainly by the Aristotle University of Thessaloniki (AUTh) and the Hellenic Survey of Geology & Mineral Exploration (HSGME) with the support of local authorities. As part of its activities, ISMOSAV undertakes various communication initiatives to improve the local community's comprehension of the volcano and to raise awareness of its behavior. To further enhance the monitoring capabilities of the observatory, a geodetic imaging component based on InSAR was added to the existing ISMOSAV monitoring system. The InSAR monitoring component comprises distinct solutions addressing both long-term monitoring and rapid response. For the long-term monitoring of Santorini volcano, the SNAPPING service of AUTh, integrated on the GEP platform, was utilized. The operational SNAPPING services generate average Line-of-Sight (LoS) motion rate maps and displacement time series based on the Persistent Scatterers Interferometry (PSI) technique at both reduced spatial (PSI Med; at approx. 100 m) and full sensor resolutions (PSI Full). A well-defined set of processing parameters optimized for the specific environment is defined, and a dedicated application on GEP is designed to execute SNAPPING PSI Med in a monitoring framework based on a user-defined temporal step. The rapid-response solution is triggered when InSAR observations, real-time GNSS and in-situ networks provide relevant indications. During unrest events, near-real-time PSI solutions can be generated whenever a new Sentinel-1 acquisition is available, based on both the online hosted SNAPPING service and in-house multi-temporal interferometric processing. 
Additionally, conventional differential interferograms are generated to provide access to basic interferometric products, such as wrapped interferograms, alongside the advanced measurements for which specific processing assumptions are considered. In this process, products are created and published at various spatial resolutions, with access levels tailored to the characteristics of the user. Following the testing of the system and the assessment of its operational performance, our next steps focus on developing a web graphical interface for visualization and dissemination of the results. This will include user-friendly tools for basic exploitation of the measurements, co-visualization together with other data and, finally, dissemination options. The interface should address aspects of hierarchical access to measurements in a customized way, depending on the level of information required by each user.
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall K2)

Presentation: On-demand Sentinel-1 Interferogram Generation Service for Monitoring of Volcano Deformation

Authors: Raphael Grandin, Marie Boichu, Théo Mathurin, Nicolas Pascal, Roland Akiki, Jérémy Anger, Carlo de Franchis
Affiliations: Institut de physique du globe de Paris, Univ. Paris Cité, UMR 7154, CNRS, Laboratoire d'Optique Atmosphérique, UMR 8518, ICARE Data and Services Center, CNRS, CNES UMS 2877, Kayrros SAS
Sentinel-1 interferograms now represent standard products for assessing the extent and magnitude of ground deformation in volcanic areas. These measurements provide quantitative constraints on the volume of material accumulating at depth in the magma reservoir, which is essential to anticipate the magnitude of an impending eruption [1]. However, the typical area of interest (AOI) for volcano analysis is much smaller than the 250 km x 200 km Interferometric Wide (IW) Sentinel-1 SAFE products, the latter spanning three adjacent subswaths and around 30 bursts. Existing InSAR services, such as the SNAPPING service on the Geohazards Exploitation Platform (GEP) [2] or CNES-FormaTerre Flatsim [3], can process entire Sentinel-1 products but do not allow the processing to be narrowed down to a small AOI within a single or a few consecutive bursts. To complement these existing general-purpose services, there is a need for an efficient and flexible tool capable of responding to the specific needs of volcano monitoring strategies. Institut de physique du globe de Paris (IPGP) and Université de Lille are developing an online open-access service for on-demand computation of Sentinel-1 interferograms over small AOIs centered on volcanic areas, accessible through a web application. The service back-end is deployed redundantly on the computing cluster S-CAPAD (IPGP) and in the AERIS/ICARE facility (Université de Lille). It relies on the EOS-SAR Python library developed at Kayrros. EOS-SAR implements an accurate Sentinel-1 geometric model [4] accounting for fine timing corrections, which enables native co-registration and stitching of bursts, resulting in a time series of well-aligned, geometrically consistent Sentinel-1 burst mosaics. The processing can be restricted to arbitrarily small AOIs, within a single or a few consecutive bursts and adjacent sub-swaths, which saves time, computing resources and storage space. 
The service leverages the Copernicus Data Space Ecosystem (CDSE) S3 object storage service for efficient data access. A Sentinel-1 image crop, located within a burst, can be read from a sub-swath measurement TIFF file, stored on S3, through a single HTTP range request. The service front-end lets users select a volcano of interest, a Sentinel-1 ground track, and the list of dates to process. Existing ground tracks and dates for the selected volcano are retrieved from CDSE catalog APIs. Once the selection is made, a configuration file with the input parameters is sent to the back-end and triggers the processing. After processing completion, results are returned to the user via the interactive web interface, and products (interferograms, coherence maps, orbital fringes, topographic fringes, amplitude maps, etc.) can be downloaded. Planned developments include the optional correction of the atmospheric phase delay from the ERA-5 atmospheric model [5], retrieved via the Copernicus Climate Data Store API. The Sentinel-1 interferogram generation service for volcanic areas is developed as part of the “Volcano Space Observatory” platform, funded in the framework of the Horizon Europe EOSC FAIR-EASE project [6], led by the French Research Infrastructure “Data Terra”. The service aims to offer a practical and efficient solution for the on-demand processing of InSAR products on volcanic targets. Anticipated end-users of the service include volcano observatory teams, scientists and researchers from academia, and students training in the field of volcanology and remote sensing. - - - - - - - - - - - - - - - References [1] Shreve, T., Grandin, R., Boichu, M., Garaebiti, E., Moussallam, Y., Ballu, V., ... & Pelletier, B. (2019). From prodigious volcanic degassing to caldera subsidence and quiescence at Ambrym (Vanuatu): The influence of regional tectonics. Scientific Reports, 9(1), 18868. [2] Foumelis, M., Delgado Blasco, J. 
M., Brito, F., Pacini, F., Papageorgiou, E., Pishehvar, P., & Bally, P. (2022). SNAPPING Services on the Geohazards Exploitation Platform for Copernicus Sentinel-1 Surface Motion Mapping. Remote Sensing, 14(23), 6075. [3] Thollard, F., Clesse, D., Doin, M. P., Donadieu, J., Durand, P., Grandin, R., ... & Specht, B. (2021). Flatsim: The form@ ter large-scale multi-temporal sentinel-1 interferometry service. Remote Sensing, 13(18), 3734. [4] Akiki, R., Anger, J., de Franchis, C., Facciolo, G., Morel, J. M., & Grandin, R. (2022, July). Improved Sentinel-1 IW Burst Stitching through Geolocation Error Correction Considerations. In IGARSS 2022-2022 IEEE International Geoscience and Remote Sensing Symposium (pp. 3404-3407). IEEE. [5] Jolivet, R., Grandin, R., Lasserre, C., Doin, M. P., & Peltzer, G. (2011). Systematic InSAR tropospheric phase delay corrections from global meteorological reanalysis data. Geophysical Research Letters, 38(17). - - - - - - - - - - - - - - - Acknowledgements Support from AERIS/ICARE Data and Services Centre, for the codevelopment of the « Volcano Space Observatory » platform, and Horizon Europe FAIR-EASE Project (Grant 101058785) are acknowledged. - - - - - - - - - - - - - - -
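The single-HTTP-range-request access pattern described in this abstract can be illustrated with the Python standard library alone. The URL below is a placeholder, and real CDSE S3 access additionally requires authentication; this sketch only shows how a byte-range window of a remote file would be requested without downloading the whole product:

```python
import urllib.request

# Illustrative sketch of a byte-range request, the mechanism the service
# uses to read a Sentinel-1 sub-swath crop from object storage.
# The URL is hypothetical; no request is actually sent here.
url = "https://example.com/s1-subswath-measurement.tiff"
req = urllib.request.Request(url, headers={"Range": "bytes=1048576-2097151"})

# A server honouring the Range header would return only this 1 MiB window
# (HTTP 206 Partial Content) instead of the full multi-GB measurement file.
print(req.get_header("Range"))
```

Sending the request with `urllib.request.urlopen(req)` against a range-capable server would stream back just the requested window, which is what makes per-burst, per-AOI processing cheap in both time and storage.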
Add to Google Calendar

Tuesday 24 June 08:30 - 10:00 (Hall K2)

Presentation: When radar observation is needed: Unravelling long-term spatiotemporal deformation and hydrological triggers of slow-moving reservoir landslides

Authors: Fengnian Chang, Dr. Shaochun Dong, Dr. Hongwei Yin
Affiliations: School of Earth Sciences and Engineering, Nanjing University, COMET, School of Earth and Environment, University of Leeds
Active landslides pose significant risks worldwide, underscoring the need for precise displacement monitoring for effective geohazard management and early warning. In China’s Three Gorges Reservoir Area (TGRA)—a pivotal section of the world's largest water conservancy project—unique hydrogeological conditions and reservoir operations have triggered thousands of landslides [1]. Many of these landslides exhibit a north-south orientation and are covered by seasonal vegetation, posing challenges to conventional remote sensing-based displacement monitoring, especially in estimating three-dimensional (3D) deformation and long-term displacement time series. To address these challenges, we propose a framework that integrates interferometric synthetic aperture radar (InSAR), pixel offset tracking (POT), stacking, and a topography-constrained model. This approach leverages phase and amplitude information from multi-platform, multi-band SAR datasets (i.e., L-band ALOS-1, C-band Sentinel-1, and X-band TerraSAR-X). Using this framework, we investigated the long-term spatiotemporal deformation and evolution mechanisms of two slow-moving, north-south-oriented reservoir landslides in the TGRA. First, we applied the SBAS InSAR method to ascending ALOS-1 (2007–2010) and Sentinel-1 (2015–2021) data to reconstruct LOS displacement velocities and time series [2]. POT and stacking methods were used with TerraSAR-X data (2019–2021) to derive azimuth velocities [3]. Due to the exclusive availability of ascending orbit data in the study area, we combined LOS and azimuth velocities during the overlapping period under the surface parallel motion (SPM) assumption to reconstruct average 3D velocity fields of the landslides [4]. From these fields, we determined the average sliding direction for each landslide pixel over the monitoring period, which served as a reference for projecting LOS displacement time series. 
This approach enabled the reconstruction of the actual landslide deformation evolution along the average sliding direction. By introducing temporal constraints, we bridged a five-year observation gap between ALOS-1 and Sentinel-1, reconstructing—for the first time—the 15-year displacement evolution of the landslides pre- and post-reservoir impoundment. Our findings reveal spatiotemporal heterogeneity in landslide deformation driven by hydrologic triggers. The reservoir impoundment in September 2008 induced transient acceleration in both landslides, followed by a relatively stable, step-like deformation pattern influenced by rainfall and reservoir water level (RWL) fluctuations. Finally, using Singular Spectrum Analysis (SSA) [5] and cross-correlation analysis, we quantitatively assessed the response of landslide deformation to hydrologic triggers. Rainfall, with a lag of approximately 20 days, predominantly influenced both landslides, while RWL fluctuations primarily affected deformation at landslide toes. Notably, the impact of RWL diminished with increasing distance from the reservoir, with lag times ranging from 8 to approximately 40 days. This quantitative characterization of landslide responses to hydrologic triggers represents a crucial step towards improved hazard mitigation capabilities. References: [1] Tang, H.; Wasowski, J.; Juang, C.H. Geohazards in the three Gorges Reservoir Area, China–Lessons learned from decades of research. Eng. Geol. 2019, 261, 105267. [2] Berardino, P.; Fornaro, G.; Lanari, R.; Sansosti, E. A new algorithm for surface deformation monitoring based on small baseline differential SAR interferograms. IEEE Trans. Geosci. Remote Sensing 2002, 40, 2375-2383. [3] Chang, F.; Dong, S.; Yin, H.; Ye, X.; Zhang, W.; Zhu, H.-h. Temporal stacking of sub-pixel offset tracking for monitoring slow-moving landslides in vegetated terrain. Landslides 2024, 1-17. [4] Samsonov, S.; Dille, A.; Dewitte, O.; Kervyn, F.; d'Oreye, N. 
Satellite interferometry for mapping surface deformation time series in one, two and three dimensions: A new method illustrated on a slow-moving landslide. Eng. Geol. 2020, 266, 105471. [5] Vautard, R.; Ghil, M. Singular spectrum analysis in nonlinear dynamics, with applications to paleoclimatic time series. Physica D 1989, 35, 395-424.
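The cross-correlation lag analysis used above to quantify how landslide deformation responds to hydrologic triggers can be sketched as follows. This is a toy illustration on synthetic series, not the SSA-based workflow of the paper, and the helper name `best_lag` is hypothetical:

```python
# Illustrative sketch: find the delay (in samples) that maximises the
# cross-correlation between a trigger series (e.g. rainfall) and a
# displacement-rate series. Synthetic data only.

def best_lag(trigger, displacement, max_lag):
    """Return the lag (displacement delayed w.r.t. trigger) with the highest correlation."""
    def corr(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        num = sum((a - mx) * (b - my) for a, b in zip(x, y))
        den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
        return num / den if den else 0.0
    scores = {}
    for lag in range(max_lag + 1):
        # displacement at time t is compared with the trigger at time t - lag
        scores[lag] = corr(trigger[:len(trigger) - lag], displacement[lag:])
    return max(scores, key=scores.get)

# Synthetic example: displacement is the rainfall series delayed by 3 samples.
rain = [0, 0, 5, 9, 2, 0, 0, 7, 3, 0, 0, 1]
disp = [0, 0, 0] + rain[:-3]
print(best_lag(rain, disp, max_lag=5))  # -> 3
```

With real time series, the recovered lag (here 3 samples) plays the role of the ~20-day rainfall lag and the 8-40-day reservoir-water-level lags reported in the abstract.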
Add to Google Calendar

Tuesday 24 June 09:00 - 10:00 (ESA Agora)

Session: F.02.19 Austrian Space Cooperation Day - Connectivity & Secure Communications, Navigation, Space Safety

The Austrian space community and international testimonials take a kaleidoscopic look at products and services “made in Austria”, highlighting existing cooperation and inviting future cooperation within international partner networks. With a view to the ESA Ministerial Conference in 2025, the great importance of ESA programmes for maintaining and improving Austria's excellence in space will be illustrated using technological and commercial success stories. In the FFG/AUSTROSPACE exhibition, Earth observation space hardware and software products manufactured in Austria are presented (next to the Agora area and the ESA booth in the Main Entrance Hall).

Chairs:


  • Dieter Grebner - Peak Technology
  • Hans Martin Steiner - Terma Technologies
  • Georg Grabmayr - Beyond Gravity



Add to Google Calendar

Tuesday 24 June 09:00 - 09:20 (EO Arena)

Demo: C.01.27 DEMO - Sen2Like Tool & data harmonization workflow

The Sen2Like demonstration processor has been developed by ESA in the framework of the EU Copernicus programme (https://www.copernicus.eu/).
The main goal of Sen2Like is to generate Sentinel-2-like harmonised/fused surface reflectance products with a higher periodicity, thanks to the integration of additional Sentinel-2-compatible optical mission sensors.
The Sen2Like software meets community expectations regarding the production of fit-for-purpose multi-source spatiotemporal datasets, so-called Analysis Ready Data (ARD).
To this end, the Sen2Like software performs standardized pre-processing steps derived from Calibration/Validation algorithms. With this approach, the user is relieved of the complexity of algorithm and software development/implementation and can focus confidently on their own thematic analysis.
The Sen2Like software delivers Copernicus Sentinel-2 L2H/L2F products (https://sentinels.copernicus.eu/sentinel-data-access/sentinel-products/copernicus-sentinel-2-msi-level-2h-and-level-2f-1). Products are generated for a given temporal period and geographic location as specified by the user.
The Sen2Like software has been designed as a processing framework, so the user is able to configure the processing workflow: processing algorithms can be selected, removed and, for some of them, tuned. The processing algorithms address many Cal/Val topics, for instance geometric correction, radiometric calibration, spectral correction, BRDF correction, slope correction and data fusion.
One major objective of Sen2Like ARD is to ease the analysis of temporal changes. The Sen2Like processing enables pixel-based analysis even when the data stream comes from different missions. Moreover, the Sen2Like approach enables users to perform multi-year analysis. Finally, harmonization of the data reduces temporal noise and thereby enables the detection of short-term changes.
The scope of this training is to demonstrate the added value of the Sen2Like tool in the context of multi-temporal analysis. Use cases are defined in such a way that, for a given location and temporal period, results obtained with different workflows are computed. We will demonstrate that harmonization of data becomes important for certain application types.
The breakdown of the training is as follows:
• General introduction to the sen2like tool
• Definition of the processing workflow as part of the configuration
• Selected test data sets (Glacier Area, Amazonia, Maricopa Fields …)
• Region of interest definition and use case definition
• Inspect and discuss time series from the use case results

Speaker:


  • Sébastien Saunier
Add to Google Calendar

Tuesday 24 June 09:22 - 09:42 (EO Arena)

Demo: C.04.03 DEMO - Handling observations in BUFR format

BUFR (Binary Universal Form for data Representation) is a data format maintained by the WMO. It is self-describing and uses tables to encode a wide variety of meteorological data: land and ship observations; aircraft observations; wind profiler observations; radar data; climatological data. It is therefore used as the primary data format for operational real-time global exchange of weather and satellite observations.

This tutorial is designed to enhance participants' understanding and practical skills in the encoding and decoding of meteorological data in BUFR. In order to handle BUFR data efficiently, participants will learn how to use the ecCodes library developed by the European Centre for Medium-Range Weather Forecasts (ECMWF) and its Python API.

The tutorial begins with a comprehensive introduction to the BUFR format, including its structure and the definition of its descriptors and templates. Participants will learn how to use ecCodes tools for command-line operations. Through practical exercises, they will learn to decode BUFR messages and extract relevant data by developing Python software for automated data processing.
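As a taste of why BUFR is called self-describing, its fixed-layout indicator section (Section 0: the literal bytes "BUFR", a 3-byte total message length, and a 1-byte edition number) can be parsed with the standard library alone; everything beyond Section 0 requires the descriptor tables and templates that ecCodes handles. The helper below and its synthetic message are illustrative only:

```python
# Illustrative sketch: parsing BUFR Section 0 (indicator section) by hand.
# Real decoding of descriptors and data values should use ecCodes.

def read_section0(message: bytes):
    """Parse Section 0: 'BUFR' magic, 3-byte big-endian total length, 1-byte edition."""
    if message[:4] != b"BUFR":
        raise ValueError("not a BUFR message")
    total_length = int.from_bytes(message[4:7], "big")
    edition = message[7]
    return total_length, edition

# Minimal synthetic Section 0 for a hypothetical 120-byte edition-4 message:
header = b"BUFR" + (120).to_bytes(3, "big") + bytes([4])
print(read_section0(header))  # -> (120, 4)
```

The total length in Section 0 is what lets software skip from one BUFR message to the next in a concatenated file, a pattern the ecCodes command-line tools rely on as well.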

Additionally, participants will also explore best practices for encoding meteorological datasets in BUFR by applying WMO observations data governance.

By the end of the tutorial, attendees will be equipped with a technical understanding of BUFR and ecCodes, allowing them to use this knowledge efficiently for data processing.

Speaker:


  • Marijana Crepulja - ECMWF
Add to Google Calendar

Tuesday 24 June 09:45 - 10:05 (EO Arena)

Demo: C.01.25 DEMO - DGGS: Scalable Geospatial Data Processing for Earth Observation

Objective:
This demonstration will introduce the DGGS (Discrete Global Grid System) framework, highlighting its ability to process and analyze large Earth Observation (EO) datasets efficiently. The demo will focus on DGGS’ scalability, data accessibility, and potential to improve EO workflows by leveraging hierarchical grid structures and efficient data formats like Zarr.

Demonstration Overview:
Introduction to DGGS:
Brief overview of the DGGS framework and its hierarchical grid system designed to handle large-scale geospatial data efficiently.
Application to Earth Observation Data:
Demonstrating DGGS' ability to transform and process EO datasets, with an emphasis on its potential for improved data storage and access.
Visualization and Analytics:
Showcasing basic visualization and analytic capabilities within the DGGS framework, demonstrating its ease of use for EO data exploration.
Future Potential:
Explaining and discussing how DGGS could enhance future EO workflows, particularly for climate monitoring and large-scale environmental data analysis.
Format:
The presenter will guide the audience through the demonstration, highlighting DGGS' features and potential for real-world applications.
A short Q&A session will allow for audience interaction.
Duration:
20-minute slot.
This demonstration will showcase DGGS as a promising tool for scalable and efficient Earth Observation data processing, offering a glimpse into its potential applications and future benefits.
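The hierarchical-grid idea at the core of a DGGS can be illustrated with a toy quadtree index. This is a generic sketch for intuition, not the specific DGGS framework being demonstrated: each refinement level splits a cell into four, and a parent cell id is a prefix of all of its children, which is what makes hierarchical aggregation of EO data cheap.

```python
# Illustrative quadtree-style cell index on a lat/lon rectangle.
# Real DGGS implementations use equal-area cells on the sphere; this toy
# version only demonstrates the hierarchical (prefix) property.

def cell_id(lat, lon, level):
    """Return a quadkey-like cell id for a lat/lon at a given refinement level."""
    lat0, lat1, lon0, lon1 = -90.0, 90.0, -180.0, 180.0
    digits = []
    for _ in range(level):
        mid_lat, mid_lon = (lat0 + lat1) / 2, (lon0 + lon1) / 2
        quad = (2 if lat >= mid_lat else 0) + (1 if lon >= mid_lon else 0)
        digits.append(str(quad))
        lat0, lat1 = (mid_lat, lat1) if lat >= mid_lat else (lat0, mid_lat)
        lon0, lon1 = (mid_lon, lon1) if lon >= mid_lon else (lon0, mid_lon)
    return "".join(digits)

vienna = cell_id(48.2, 16.4, 8)
# The level-4 ancestor cell is simply the first four digits of the level-8 id:
print(vienna, vienna[:4])
```

Because coarser cells are prefixes of finer ones, data stored per cell (e.g. in Zarr chunks keyed by cell id) can be aggregated to any coarser level by grouping on a string prefix, with no geometric computation.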
Add to Google Calendar

Tuesday 24 June 10:00 - 11:30 (Plenary)

Session: Breaking Barriers by Working Together in Earth Science

This plenary session will convene international partners who already have good connections to ESA/EOP, bringing them together for a panel discussion on international cooperation. The session will focus on what barriers, if any, stand in the way of creating and maintaining effective partnerships, whether bilateral or multilateral. Examples of successful collaboration are expected to be reported and celebrated during the session, but those that haven’t worked are also welcome as part of the narrative. Where issues or barriers have arisen, the practical measures put in place can be tabled so that others can learn from these experiences and mitigate similar situations in the future. Looking ahead, panellists will be asked to identify any problem areas on the horizon that are of concern, and how the EO community can collectively mitigate them and support more effective collaboration on EO going forward.


Panel Members


  • Karen St Germain - NASA, Earth Science Division Director
  • Hironori Maejima - JAXA, Senior Chief Officer of EO missions
  • Meshack Ndiritu - Africa Space Council, Capacity Coordinator
  • Lorant Czaran - UNOOSA, Scientific Affairs Officer
  • Paul Bate - UKSA, DG - CEOS Chair
  • Ariel Bianco - PhilSA, Director Space Information Infrastructure Bureau
  • Christian Feichtinger - IAF, Executive Director
  • Pakorn Apaphant - GISTDA, Executive Director
Add to Google Calendar

Tuesday 24 June 10:00 - 11:30 (ESA Agora)

Session: E.03.02 New approaches to support commercialisation

In an effort to address the evolving needs of public institutions, leveraging Earth Observation data has become increasingly vital. In today's competitive landscape, where rapid development and deployment are essential, the public-private partnership framework emerges as a promising new approach. This model not only facilitates the efficient use of resources but also fosters innovation through collaboration between public entities and private sector players.

The upcoming session will feature a dynamic panel discussion, bringing together a diverse group of stakeholders from both industry and institutional backgrounds. This gathering aims to explore the applications of the public-private partnership model, delving into its potential benefits and associated risks. By fostering an open dialogue, the session seeks to uncover how these partnerships can drive advancements in Earth Observation technologies and their applications.

The discussion will also address the challenges that may arise, such as aligning the goals of public and private entities, managing intellectual property, and ensuring equitable access to data and technology.

The panel will also tackle potential negative aspects of this model. By examining both the strengths and limitations of the model, the session aims to provide a comprehensive understanding of its role in leveraging Earth Observation data to meet the needs of public institutions, paving the way for innovative solutions in an ever-evolving landscape.
Add to Google Calendar

Tuesday 24 June 10:00 - 10:45 (Nexus Agora)

Session: F.05.07 Women Trailblazers Round Tables - Session 1

“Women trailblazers: the present and the future of ...” is a series of round tables focused on specific disciplines of Earth observation: remote sensing, science, engineering, policy making, entrepreneurship and more. The series recognizes women’s significant contributions, from senior leaders to promising young professionals, while promoting female role models in our community.

The proposal for the 2025 Living Planet Symposium is to host six round tables in the Agora (one or two per day), each dedicated to one of the themes of the Living Planet Symposium:
• Earth Science Frontiers
• Climate Action and Sustainability Challenges
• Earth Observation Missions
• Digital Innovation and Green Solutions
• Partnership with Industry for New Applications
• Global Cooperation and Policy Support

Moderators:


  • Luisella Giulicchi - ESA

Speakers:


  • Aarti Holla-Maini - UNOOSA - Director
  • Fani Kallianou de Jong - European Bank for Reconstruction and Development (EBRD) - Officer
  • Rakiya Babamaaji - NASRDA Nigeria - Director
  • Susanne Mecklenburg - ESA - Head of Climate & Long-Term Action Division
  • Dr. Karen M. St. Germain (TBC) - NASA - Earth Science Division Director
Add to Google Calendar

Tuesday 24 June 10:00 - 10:45 (Frontiers Agora)

Session: E.01.09 Space for Energy Sector Transformation, Sustainability, and Resilience

Transformation of the energy sector is crucial for a sustainable and green future, relying on a low-carbon energy mix and enabling sustainable development, economic growth, and resilience.

This session explores the role of space technology in driving the transformation of the energy sector, underpinning integrated solutions to support decision-making and operational processes for the energy transition. Through expert insights from different stakeholder groups in the energy sector, the session will shed light on opportunities for the adoption and scaling of space solutions and identify barriers which must be overcome. The scope is broad and will include societal, technical, business, and regulatory challenges.

Discussions will address how innovative space technologies, digitalisation and artificial intelligence are impacting the energy sector and how to fully leverage their potential. The session will also discuss collaboration opportunities between the space and the energy sector, laying the ground for further networking among diverse energy actors from both the supply and demand sides.

Chairs:


  • Richard Eyers - Richard Eyers Geoscience & Photography
  • Zaynab Guerraou - ESA

Speakers:


  • Maziar Golestani - Head of Metocean & Site and System Design Project Management, Vattenfall
  • Itziar Irakulis Loitxate - IMEO Scientist, UNEP
  • Julien Fiore - Remote Sensing Team Lead, TotalEnergies France
  • Werner Hoffman - Head of Institute for Strategic Management, WU Wien.
Add to Google Calendar

Tuesday 24 June 10:07 - 10:27 (EO Arena)

Demo: D.04.14 DEMO - ESA WorldCereal: Effortless Crop Mapping from Local to Global Scales

Join the ESA WorldCereal team for a dynamic demonstration showcasing the system's capabilities in generating precise crop maps by integrating public and private reference data. Designed for researchers, policymakers, and stakeholders, this session will illustrate how to efficiently create tailored cropland and crop type maps using WorldCereal’s user-friendly tools.
We will begin with an introduction to the cloud-based WorldCereal processing system, an open platform for training and applying cropland and crop type detection models using open Earth Observation and complementary datasets. Attendees will learn how to access and integrate public and private reference datasets from the WorldCereal Reference Data Module to train their own models.
The demonstration will include a step-by-step walkthrough of the WorldCereal Processing Hub, a web interface that simplifies the launch and monitoring of cloud-based processing jobs. Participants will observe how to initiate crop mapping tasks directly from the hub, streamlining workflows and boosting productivity. For users preferring a Python environment, we will also showcase how Jupyter Notebooks support flexible and customized model training and processing.
Throughout the session, we will highlight the system’s support for diverse crop types, its adaptability to various geographic regions, and its capability to produce high-resolution, seasonally updated crop maps at a 10-meter spatial resolution. These features are invaluable for agricultural monitoring, food security assessments, environmental research, and policymaking.
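
The WorldCereal system trains machine-learning classifiers on reference data and EO time series; the numpy toy below is not the WorldCereal algorithm, only an illustration of the underlying idea that cropland reveals itself through seasonal dynamics in a per-pixel vegetation time series. All values and the 0.3 amplitude threshold are invented for the example.

```python
import numpy as np

# Toy NDVI time series for a 2x2-pixel tile over 12 time steps:
# cropland shows a pronounced green-up/senescence cycle, bare soil stays flat.
t = np.linspace(0.0, np.pi, 12)
crop = 0.2 + 0.6 * np.sin(t)   # strong seasonal cycle
bare = np.full(12, 0.15)       # little vegetation signal

ndvi = np.empty((2, 2, 12))    # (y, x, time)
ndvi[0, 0] = crop
ndvi[0, 1] = bare
ndvi[1, 0] = bare
ndvi[1, 1] = crop

# Seasonal amplitude as a crude cropland indicator.
amplitude = ndvi.max(axis=-1) - ndvi.min(axis=-1)
mask = amplitude > 0.3         # True where dynamics suggest cropping
print(mask.tolist())
```

A real classifier replaces the single amplitude feature with many spectral-temporal features and learns the decision boundary from labelled reference parcels, which is what the Reference Data Module supplies.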

Speakers:


  • Kristof Van Tricht - VITO
  • Jeroen Degerickx - VITO
Add to Google Calendar

Tuesday 24 June 10:30 - 10:50 (EO Arena)

Demo: A.08.18 DEMO - OVL-NG portals: online web portals for EO data discovery

Numerous new satellites and sensors have been launched over the past decade. The satellite constellation has never been richer, providing a wide range of views of the ocean surface, from the coast to the open ocean, at various scales, and from physical to biological processes. A good example is the Sentinel-1/-2/-3 programme, which covers sensors such as SAR, optical, infrared and altimeter instruments with a repeat subcycle of only a few days.

OVL-NG portals are publicly available portals allowing anyone to visually explore a large amount and variety of EO data, without the difficulty of handling huge and heterogeneous files.
OVL-NG also offers some drawing and annotation capabilities, as well as the ability to create web links that users can share to communicate about beautiful oceanic structures or use as support for discussing interesting cases with other scientists.
There is also the capability to easily share analyses and interesting test cases using a short link or the SEAShot tool (https://seashot.odl.bzh).

During this demo, we will showcase how you can navigate in time and space to explore the synergy between the different Sentinel sensors (e.g. https://odl.bzh/Y_d9phB9), compare different sources of surface current derived from models, in-situ platforms and satellites (e.g. https://odl.bzh/uWiicyJO) using the drawing capabilities, and share your analyses using the SEAShot tool.

Discussions and feedback are more than welcome and will drive the future evolution of these tools, so don't hesitate to come to the ESA booth and talk with us!

Add to Google Calendar

Tuesday 24 June 10:45 - 11:30 (Frontiers Agora)

Session: D.03.08 Open Science in the Making

Open Science is made live at LPS, at the “Open Science in the Making” booth! There, scientists and open-source software developers will meet to discuss best practices in Open Science, collaborate on user-driven extensions to open-source tools and algorithms, integrate and share their applications and data, and discuss the most popular open-source projects helping scientific research and digital innovation advance.

This Agora will present the tools and projects the “Open Science in the Making” booth will focus on, its organization, and the opportunities for engaging with strategic initiatives like EarthCODE, APEx or EOEPCA by joining in. Participating in the “Open Science in the Making” activities will be an excellent opportunity to collaborate, learn about the potential of Open Science and Free and Open Source Software (FOSS) to support your own activities, and, why not, sharpen your coding skills!

A variety of ways to contribute during “Open Science in the Making” will be showcased in this Agora, such as testing code, filing and fixing bugs, proposing and adding new features, improving documentation, or simply asking the developers for more information about a FOSS or Open Science project and its tools, their inner workings and how they can fit your use case.

At this Agora, you will also be able to discuss the “Open Science in the Making” booth agenda, which will include experts from different projects and activities, such as the EarthCODE initiative, the APEx platform, the EOEPCA Building Blocks, popular OSGeo software packages and open standards. Come to this Agora or pass by the “Open Science in the Making” booth to learn more!

Speakers:


  • Salvatore Pinto - ESA
  • Anca Anghelea - ESA
Add to Google Calendar

Tuesday 24 June 10:52 - 11:12 (EO Arena)

Demo: D.02.25 DEMO - Freedom to apply complex calculations and ML models on EO data

Machine Learning (ML) is revolutionizing Earth Observation (EO), but deploying models efficiently and at scale can be challenging. How can we simplify this for researchers and developers?
To make ML more accessible for EO practitioners, openEO supports the concept of user-defined functions (UDFs). At the same time, in order to remain lightweight, an openEO backend does not bundle the dependencies needed to run arbitrary models; portable model formats such as ONNX bridge this gap.
This session will demonstrate how to bring ML into your EO processing chain using openEO's standardized interface. No ML expertise is required—just an interest in leveraging scalable AI solutions for geospatial analysis. If you’re curious about scalable, efficient, and portable AI for geospatial applications, this session is for you.

Why Attend?
• Unlock Scalable AI for EO: Learn how to apply advanced ML models to EO data without needing heavy infrastructure or expert-level ML knowledge.
• Run Anywhere with ONNX: Discover how openEO leverages the ONNX format to deploy models flexibly across backends.
• Customize with UDFs: See how user-defined functions enable powerful, tailored processing within the openEO ecosystem.
• Simplify Deployment: Avoid complex setup—process your models server-side without worrying about dependencies.

Join us to see how openEO + ONNX + UDFs can make your geospatial ML workflows smoother, faster, and more scalable than ever!
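
The shape of a UDF is simple: the back-end calls your function on each chunk of the data cube. In the openEO Python convention this is a function named `apply_datacube` receiving an `XarrayDataCube` and a context dict; the sketch below simplifies it to plain numpy and substitutes a fixed linear model where real code would load an ONNX model once (e.g. via onnxruntime) and run inference, so treat the details as illustrative assumptions rather than the demo's actual code.

```python
import numpy as np

def apply_datacube(cube: np.ndarray, context: dict) -> np.ndarray:
    """Sketch of an openEO-style UDF, simplified to plain numpy.

    `cube` holds one spatial chunk with shape (bands, y, x). A real
    UDF would run an ONNX inference session here; a fixed linear
    model per pixel stands in for it.
    """
    weights = np.asarray(context["weights"])  # one weight per band
    bands, y, x = cube.shape
    flat = cube.reshape(bands, -1)            # (bands, y*x)
    scores = weights @ flat                   # per-pixel score
    return scores.reshape(1, y, x)            # single output band

chunk = np.random.rand(3, 4, 4)               # 3 bands, 4x4 pixels
out = apply_datacube(chunk, {"weights": [0.5, 0.3, 0.2]})
print(out.shape)  # (1, 4, 4)
```

Because the function only sees one chunk at a time, the back-end can parallelise it across tiles transparently, which is what makes the approach scale without the user managing infrastructure.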

Speakers:


  • Hans Vanrompay - VITO
Add to Google Calendar

Tuesday 24 June 11:15 - 11:35 (EO Arena)

Demo: D.04.19 DEMO - Visualizing Sentinel satellite imagery and data products in desktop GIS with the Copernicus Data Space Ecosystem QGIS Plugin

QGIS (formerly Quantum GIS) is a widely used open-source desktop geographic information system (GIS) software package. The Copernicus Programme offers open and free satellite imagery. These two resources can be connected with the Copernicus Data Space Ecosystem QGIS Plugin. This tool is powered by the Sentinel Hub API family and OGC standards. The plugin enables users to view and download Sentinel imagery, filtering by date and cloud cover. It is particularly suited for visual interpretation of satellite imagery and raster-vector integration, is available directly in the QGIS plugin repository, and connects with the user's Copernicus Data Space Ecosystem account.

Speakers:


  • András Zlinszky - Community Evangelist, Sinergise Solutions GmbH
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall K1)

Session: D.01.03 Synergies between ESA DTE Programme and DestinE Ecosystem

This session shows the potential of dynamic collaboration between ESA DTE Programme activities and the opportunities provided by the DestinE Platform. The session includes presentations on the capabilities available in the DestinE Platform and on the framework defined to grow the ecosystem of services through onboarding opportunities for ESA and non-ESA activities. It also includes presentations on the pre-operational innovative services and applications developed under ESA DTE activities (such as the Digital Twin Components) and their synergies with the DestinE Platform.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall K1)

Presentation: Empowering Climate Insights: Integrating Digital Twin Earth and DestinE Services through DEA's API-Driven Storytelling

Authors: Cristina Arcari, Arturo Montieri, Monica Rossetti
Affiliations: Alia Space Systems
Destination Earth (DestinE) Platform makes available innovative services aimed at exploiting the potential of the Destination Earth initiative. The goal of the initiative is to build a near-real digital twin of our planet so that it is possible to simulate environmental changes at an unparalleled level of detail. The European Space Agency (ESA) Digital Twin Earth programme aims to underpin this ecosystem, making the capabilities offered by the latest Earth Observation (EO) satellite missions freely available to all users. The integration between the services provided by the ESA Digital Twin Components (DTC) and the services offered by the DestinE Platform is crucial for maximizing the effort to make citizens and policymakers aware of the consequences related to climate change and give insights into planning effective mitigation strategies. In this context, DEA, the DestinE web-based storytelling service, was developed with the ambition to make data understandable for users, allowing them to craft engaging stories as interactive presentations by combining datasets provided by the tool with their own assets. Although DEA has a graphical interface, it is designed with an API-driven approach. This feature facilitates the programmatic creation of stories using the exposed APIs. Moreover, DEA also provides dedicated endpoints for generating standard plots and qualitative graphs such as climate spirals, climate stripes, and anomaly bars on both global and local scales. In this way, users can easily integrate these visualizations into their stories or utilize them in various contexts and applications to emphasize topics related to climate change. Similarly, most of the DTC and the DestinE services provide APIs that can be exploited by DEA to generate new content or to collect data ready to be shown in a story. 
This capability fosters the service chaining by integrating Digital Twin Components and DestinE services and encourages the development of applications based on Artificial Intelligence (AI) and agents based on Large Language Models (LLMs). We propose to demonstrate how to create seamless workflows combining features offered by the Digital Twin Earth and DestinE services with the APIs provided by DEA to generate visual insights.
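
To make the API-driven workflow concrete, the snippet below builds a request URL for one of the plot endpoints the abstract mentions (climate stripes on a local scale). The base URL and every parameter name here are hypothetical placeholders for illustration; the real DEA endpoints are documented on the DestinE Platform.

```python
from urllib.parse import urlencode

# Hypothetical endpoint and parameter names, for illustration only --
# consult the DEA documentation on the DestinE Platform for the real API.
BASE = "https://dea.example.destine.eu/api/v1/plots/climate-stripes"

def stripes_request_url(lat: float, lon: float, start: int, end: int) -> str:
    """Build a request URL for a local-scale climate-stripes plot."""
    query = urlencode({
        "lat": lat,
        "lon": lon,
        "start_year": start,
        "end_year": end,
    })
    return f"{BASE}?{query}"

print(stripes_request_url(48.14, 11.57, 1950, 2024))
```

Fetching the returned image and embedding it in a story is then a single HTTP GET, which is what makes programmatic story generation, including by LLM-based agents, straightforward.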
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall K1)

Presentation: Earth’s Digital Future: Insights into Destination Earth and ESA DTE

Authors: Franka Kunz
Affiliations: ESA
Destination Earth (DestinE) and the ESA Digital Twin Earth (DTE) Programme are transforming the landscape of Earth observation and simulation, offering unprecedented tools to address global environmental challenges. These initiatives mark a significant leap forward in leveraging digital technologies to understand, predict, and respond to complex planetary phenomena. The Destination Earth initiative, led by the European Commission, aims to create a high-precision digital replica of the planet. DestinE is designed to empower decision-makers with actionable insights by providing advanced simulations and predictive capabilities for diverse applications, including climate change mitigation, disaster risk reduction, and sustainable development. With its cutting-edge platform and technical framework, DestinE establishes a robust foundation for understanding and managing Earth's systems. The ESA Digital Twin Earth (DTE) Programme, part of the ESA Earth Watch Programme, complements and strengthens DestinE by advancing the use of Earth Observation (EO) data and technologies to create pre-operational digital twins that monitor and predict environmental changes. ESA DTE demonstrates their value for applications including agricultural management, urban development, and environmental management. By leveraging data from ESA’s Earth Explorer Missions and integrating it into the DestinE Platform, the program ensures high-quality EO data serves as the foundation for advanced digital twin development. The collaboration between DestinE and the ESA DTE Programme is crucial for advancing their objectives. DestinE serves as a comprehensive platform for integrating and operationalising digital twin applications, while the DTE Programme contributes by developing pre-operational digital twins and providing high-quality Earth Observation data. 
These collaborations enhance the overall digital twin ecosystem, enabling the integration of advanced Earth observation data, AI-driven analytics, and simulation technologies. This synergy fosters innovation and ensures the ecosystem supports diverse applications, from real-time decision-making to long-term strategic planning. This overview of both programs highlights their complementary roles in advancing Earth system science and fostering a sustainable future. By uniting cutting-edge technologies, high-quality data, and innovative frameworks, DestinE and ESA DTE are paving the way for a new era in Earth observation, simulation, and decision-making.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall K1)

Presentation: Synergies between ESA DTE Programme and DestinE Ecosystem: The Role of the HIGHWAY Project

Authors: Luca Girardo, Henry de Waziers
Affiliations: Esa Esrin, adwäisEO
The European Space Agency’s (ESA) Digital Twin Earth (DTE) Programme serves as an innovative research companion to the operational DestinE platform, fostering dynamic collaborations that enhance Earth observation capabilities. At the forefront of this collaboration is the HIGHWAY project, which plays a crucial role in transforming and harmonizing Earth Explorer datasets into Digital Twin Analysis Ready Cloud Optimized (DT-ARCO) formats. This process ensures the seamless integration of diverse datasets into the DestinE platform, enabling advanced digital twin simulations and the development of pre-operational services. HIGHWAY contributes to both ESA and non-ESA activities by aligning its data transformation and harmonization efforts with evolving cloud-optimized formats for Copernicus and Earth Explorer missions. Through this harmonization, datasets from missions such as SMOS, Proba-V, Aeolus, CryoSat-2, and EarthCARE are made accessible to the DestinE platform via OpenSearch, WMS, and WCS interfaces. These efforts support the expansion of DestinE’s ecosystem, facilitating the onboarding of additional services and fostering interoperability across the platform. Additionally, HIGHWAY is exploring the integration of High-Performance Computing (HPC) systems, particularly in the context of harmonizing workflows between the HPC MeluXina system and the DestinE Core Service Platform (DESP). This integration is key to enabling the large-scale data processing and real-time simulations required for sophisticated digital twin applications. By bridging HPC and cloud-optimized data infrastructure, HIGHWAY is setting the stage for innovative pre-operational services that align with DestinE’s mission. In this session, we will explore how HIGHWAY’s role in transforming and harmonizing datasets, along with its contributions to HPC integration, strengthens the synergy between ESA DTE activities and the DestinE platform. 
Attendees will gain insights into how these efforts are unlocking new opportunities for service growth, collaboration, and the future of Earth observation services within the DestinE ecosystem.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall K1)

Presentation: DestinEStreamer - a new paradigm for storing, disseminating and accessing big data in Earth Observation and Climate Science domains

Authors: Andreas Walli, Dr. Wolfgang Kapferer
Affiliations: Geoville
PROBLEM
The growing volume of Earth Observation and climate data presents significant challenges in storing, managing, and processing this information. As satellite and sensor technologies advance, and as more missions are launched, the amount of data generated in the field of Earth Observation is increasing exponentially, leading to unprecedented storage demands. A similar trend is evident in Climate Sciences, where advancements in supercomputing facilities enable higher-resolution simulations in both space and time, generating vast volumes of data. This surge is accompanied by rising costs for data storage and dissemination, and by increasing complexity in accessing and distributing the information. Ensuring timely and equitable access to this vital data across sectors such as research, industry, and decision-making is becoming increasingly difficult as access challenges and costs continue to rise. Traditional storage solutions, such as single-artifact object stores holding millions of individual files, are insufficient to meet the needs of modern machine learning and artificial intelligence applications. There is therefore an urgent need for innovative solutions that streamline data handling, reduce complexity and costs, and enable more efficient and effective use of this valuable data.
LEARNING FROM OTHER INDUSTRIES
When it comes to data dissemination over the internet, one industry dominates: video streaming. Video streaming services account for nearly half of global internet traffic (Sandvine 2023).
Cross-Industry Innovation
The streaming revolution was driven by several major advances:
• next-generation compression algorithms that reduce the amount of data while preserving the quality of the content,
• suitable video container formats, which determine how different types of media (video, audio, subtitles, etc.) are packaged together into one object, and
• broad accessibility to the streams via standardization such as HTML5 Video APIs or standard libraries (e.g. OpenCV).
The streaming industry has achieved remarkable scalability, serving hundreds of millions of users globally across diverse devices. Could the key success factors of this industry be harnessed to revolutionize scientific domains such as Earth Observation and Climate Sciences? The answer is yes. By adopting similar technologies and software architectures, Earth Observation and climate data can be compressed, packaged, and disseminated with unprecedented efficiency. This requires innovative preprocessing techniques to optimize data for next-generation compression algorithms, robust metadata management, and an adaptable access layer. Such a layer, built with libraries and web applications, enables seamless transformation of data into target formats like GeoTIFFs or in-memory representations such as arrays. This is precisely the work carried out by the DestinEStreamer service, demonstrating how these advancements can drastically reduce archive sizes, accelerate data access and downloads, and unlock new edge-device capabilities. Ultimately, this approach paves the way for groundbreaking services in the Earth Observation and climate domains, yet to be imagined.
ENHANCING THE DESTINE PLATFORM WITH THE DESTINESTREAMER SERVICE
The DestinEStreamer service significantly enhances the capabilities of the DestinE platform by achieving compression ratios of 1:14 to 1:27 relative to the original, already-compressed datasets. This enables the platform to store and deliver a far greater volume of online datasets. These results are achieved through meticulous preprocessing of the data and alignment of their temporal structure to leverage state-of-the-art video compression codecs. The core approach involves a block-based transformation, organizing data into three frame types (I-frames, B-frames, and P-frames) within a repeating Group of Pictures (GOP) sequence. By removing temporal redundancies, especially for Earth Observation and climate data that naturally align as temporal stacks over the same spatial region, this technique enables exceptionally high compression ratios. While the compression is lossy, it allows adjustable quality settings. In the DestinEStreamer service, the default configuration ensures a Structural Similarity Index (SSIM) above 0.99 and less than 0.01% data difference, delivering high fidelity. With these settings, storage capacities increase more than 30-fold, and the same network bandwidth can carry 30 times more data. Decompression and data access are as seamless as streaming a movie (e.g., 24 frames per second).
HOW TO USE THE SERVICE
The DestinEStreamer service provides access to data streams on the DestinE Platform through multiple vectors:
• Web Application: This interface enables users to explore dataset variables (e.g., ERA5 and the Climate Digital Twins) as visual streams, facilitating fast temporal scans and interactive map capabilities. Within the application, users can follow links to the Jupyter Hub and the Insula service for deeper analysis. A dedicated Python module, along with example scripts, is provided to support data conversion. These scripts illustrate how to extract specific timesteps or time series, georeference the data, and interact with it via collections such as xarray.
• API: For each dataset variable, an API provides comprehensive metadata and quality metrics, ensuring transparency and facilitating automated workflows.
As part of ESA’s initiative DUNIA, the technology supports continental-scale data delivery for Africa, including Sentinel-1 and Sentinel-2 datasets. This access vector is designed for low-bandwidth environments, enabling efficient data streaming to clients, even on mobile devices. The service also includes features to mitigate the effects of unstable network connections, allowing users to download and convert Sentinel data streams with minimal bandwidth requirements.
SUMMARY AND OUTLOOK
The DestinEStreamer service, with its advanced access methods, delivers a groundbreaking solution for storing and disseminating massive volumes of georeferenced raster and climate simulation data. By leveraging cutting-edge technologies from the over-the-top video streaming industry, it introduces a transformative approach to Earth Observation and climate data management. This innovative technological foundation opens the door to entirely new, previously unimaginable services. Combined with the capabilities of edge computing devices within distributed infrastructures, DestinEStreamer paves the way for the applications and services of the future, enabling a fully connected and integrated Earth Observation and climate data ecosystem.
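
The GOP idea the abstract describes, encoding a temporal stack as one full frame plus small frame-to-frame residuals, can be sketched in a few lines. This toy version uses exact (lossless) deltas to show the round trip; real codecs additionally quantize the residuals, which is where the lossy compression and the adjustable quality settings come from, and none of this is the service's actual implementation.

```python
import numpy as np

def encode_gop(stack: np.ndarray):
    """Encode a temporal stack as one I-frame plus delta (P-like) frames.

    `stack` has shape (time, y, x). For slowly varying scenes the
    residuals are small and therefore compress far better than the
    raw frames.
    """
    i_frame = stack[0]
    deltas = np.diff(stack, axis=0)      # frame-to-frame residuals
    return i_frame, deltas

def decode_gop(i_frame: np.ndarray, deltas: np.ndarray) -> np.ndarray:
    """Rebuild every frame by accumulating residuals onto the I-frame."""
    frames = np.concatenate([i_frame[None], deltas], axis=0)
    return np.cumsum(frames, axis=0)

# A slowly varying synthetic scene: consecutive frames are nearly identical.
rng = np.random.default_rng(0)
scene = np.cumsum(rng.normal(0, 0.01, size=(8, 16, 16)), axis=0) + 1.0

i_frame, deltas = encode_gop(scene)
restored = decode_gop(i_frame, deltas)
print(np.allclose(restored, scene))  # True
```

EO and climate data are a particularly good fit because repeated acquisitions over the same spatial region line up as a temporal stack by construction, maximising the redundancy the codec can remove.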
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall K1)

Presentation: ESA EO-based Digital Twin Components of the Earth System

Authors: Dr Martin Wearing, Dr Diego Fernandez Prieto
Affiliations: ESA, ESRIN
The ESA Digital Twin Earth (ESA DTE) programme aims to ensure that the latest Earth observation satellite capabilities play a major role in the design, implementation and future evolution of DestinE, the flagship initiative of the European Commission to develop a highly accurate digital twin of the Earth to monitor and simulate natural phenomena, hazards and the related human activities. This presentation will provide an overview of the progress and achievements so far of ESA’s EO-based Digital Twin Components (DTCs), each of which focuses on a particular element of the Earth system. Current activities are split into Lead and Early Development Actions, with themes covering:
• Lead Development Actions: Agriculture; Forests; Hydrology and hydro-hazards; Ice Sheets and regional/global impacts; Coastal processes and extremes.
• Early Development Actions: Air quality & health; Arctic (land and ocean); Energy sector; Geo-hazards; Mountain glaciers; Urban areas and smart cities.
These DTCs will offer high-precision digital replicas of Earth system components, boosting our capacity to understand the past and monitor the present state of the planet, assess changes, and simulate potential evolution under different (what-if) scenarios at scales compatible with decision making. This overview will highlight the progress of DTC activities, the opportunities for the use of EO data in digital twins, the use of DestinE services and the implementation of DTCs in the DestinE ecosystem, and engagement with users, stakeholders and the development of use cases, along with recommendations for future developments.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall K1)

Presentation: Unlocking the potential of Destination Earth: An analysis of how ESA DTE could strengthen Destination Earth attractiveness

Authors: Dr Rochelle Schneider, Christophe Taillandier, Madleen Bultez, Pierre Arnaud, Nicolas Monzali
Affiliations: ESA, MEWS Partners
The European Commission’s Destination Earth (DestinE) initiative represents a groundbreaking effort to simulate and analyse Earth's processes with unprecedented accuracy. It aims to provide a scalable platform and services to enable users to interact with digital twins for decision-making in areas like climate action, sustainability, and disaster resilience. By simulating specific Earth systems using real-time Earth Observation (EO) data from ESA satellites and other sources, the ESA Digital Twin Earth should serve as a technical enabler for DestinE by completing the foundational digital twin technology and scientific methods. While ESA DTE primarily targets researchers and experts focused on Earth system science, climate modeling, and high-end computational simulations, there exist strong potential synergies with DestinE that could be leveraged to improve DestinE attractiveness. In particular, both systems aim to address critical societal challenges, from climate change mitigation to disaster management, by enabling data-driven decision-making at scale. Strong technological synergies exist between the two initiatives. For instance, ESA DTE provides key digital twin technology that could then be deployed in real world applications through DestinE. This proposed presentation will explore the synergies of the two initiatives with the objective of improving DestinE attractiveness. It will analyse design, accessibility, usability, and value proposition aspects and highlight possible recommendations. In particular, the following dimensions will be assessed. It begins with APIs and interfaces, evaluating the effectiveness of interfaces and tools to access the two systems. Then, it explores data synergies with ESA’s DTE, highlighting the enhanced value generated by integrating ESA’s DTE data assets with DestinE’s advanced simulation capabilities, enabling deeper insights and more robust decision-making across applications. 
This is followed by an analysis of user engagement synergies, investigating the drivers and barriers to fostering collaboration and cross-fertilization between ESA’s DTE and DestinE, with a focus on unlocking shared value for diverse user groups with varying needs and expertise. Finally, the impact potential is assessed by evaluating the tangible outcomes of potential new use cases, leveraging the ‘best of both worlds’ from DTE and DestinE. Our findings will aim to contribute to the ongoing implementation of DestinE, ensuring its relevance and effectiveness within the broader ESA DTE ecosystem. This presentation will conclude with recommendations for maximizing the impact of DestinE through enhanced integration with ESA DTE, user-centric innovation, and targeted outreach.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall L3)

Session: A.08.01 Advances in Swath Altimetry - PART 2

The NASA and CNES Surface Water and Ocean Topography (SWOT) mission, launched in December 2022, is the first in-orbit experience with a swath altimeter. The mission has demonstrated the capability of swath altimeters to measure ocean and inland water topography in an unprecedented manner. The onboard Ka-band Radar Interferometer (KaRIn) observes wide-swath sea surface height (SSH) with sub-centimetre error, and is already unveiling the small mesoscale ocean circulation that is missing from current satellite altimetry. SWOT has already carried out a calibration and validation (Cal/Val) campaign for the satellite, including ground truth and airborne campaigns.
ESA’s Sentinel-3 Next Generation Topography (S3NGT) mission is being designed as a pair of two large spacecraft carrying nadir-looking synthetic aperture radar (SAR) altimeters and across-track interferometers, enabling a total swath of 120 km, in addition to a three-beam radiometer for wet tropospheric correction across the swath and a highly performant POD and AOCS suite.
With a tentative launch date of 2032, the S3NGT mission will provide enhanced continuity to the altimetry component of the current Sentinel-3 constellation, with open ocean, coastal zones, hydrology, sea ice and land ice, all as primary objectives of the mission.
This session is dedicated to the presentation of advances in swath altimetry (including airborne campaigns) and the application of swath altimetry to the primary objectives of the mission, i.e. open ocean and coastal processes observation, hydrology, sea ice and land ice. We also invite submissions for investigations that extend beyond these primary objectives, such as the analysis of ocean wave spectra, internal waves, geostrophic currents, and air-sea interaction phenomena within swath altimeter data.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall L3)

Presentation: Validation of HR SWOT Data over Inland Waters, an Opportunity to Assess the Future Performance of S3NG-T Swath Altimetry Missions

Authors: Maxime Vayre, Julien Renou, Dr Gabriel Calassou, Marie Chapellier, Nicolas Taburet, François Boy, Roger Fjortoft, Nicolas Picot, Claire Pottier, Noemie Lalau
Affiliations: CLS, CNES, Magellium
The Surface Water and Ocean Topography (SWOT) mission, conducted by CNES and NASA, was successfully launched on 16 December 2022. The Ka-band Radar Interferometer (KaRIn) provides unprecedented 2D observations of sea-surface height and sub-mesoscale structures, as well as water surface elevation, water stock estimates and discharge over continental water surfaces. SWOT performances are extremely encouraging for the upcoming Sentinel-3 Next Generation Topography (S3NG-T) mission, which aims to ensure the continuity of the current Sentinel-3 nadir altimeters. This mission represents the future of swath altimetry, and SWOT is an excellent opportunity to assess its potential performance. The SAOOH instrument of S3NG-T has its own specificities compared to KaRIn, and differences in specifications, such as random and systematic errors, should be considered. Assessing the performance of HR SWOT product estimates and their ability to track water bodies is of major interest for S3NG. SWOT KaRIn product validation is part of the global performance assessment of the HR SWOT products managed by CNES on the French side. In-situ networks are first compared to SWOT measurements over lakes and rivers. Data from the French (SCHAPI), Swiss (BAFU) and American (USGS) in-situ networks are used to estimate the performance of SWOT HR elevation data. In addition, we use measurements from current nadir altimetry missions (Sentinel-3A/B, Sentinel-6, ICESat-2) to assess the performance over a large number of water bodies. SWOT HR data are compared with the station networks available in the Copernicus Global Land Service. ICESat-2 data complete the analysis, based on tens of thousands of lakes. We have also set up an innovative method to level existing gauges with ICESat-2, thereby increasing the number of in-situ references over which KaRIn accuracy can be estimated.
Based on these results and understanding the random and systematic noise affecting KaRIn Pixel Cloud data, we will present the simulation of SAOOH data performed over a variety of rivers and lakes. The performance assessment metrics are then applied, and we obtain qualitative indicators on the expected S3NG-T SAOOH performances. In this presentation, after introducing the metrics developed with CNES to validate SWOT KaRIn performances over continental waters, we will present an original method that we developed within the ESA S3NG-MPUA project, to simulate S3NG-T SAOOH data and assess the performance based on KaRIn products and knowledge of the planned S3NG instrumental characteristics.
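As a rough illustration of the kind of validation metrics discussed above (bias, spread and RMSE between SWOT elevations and in-situ references), here is a minimal numpy sketch; the function name and numbers are illustrative, not the metrics implementation used by CNES:

```python
import numpy as np

def elevation_metrics(swot_wse_m, gauge_wse_m):
    """Standard accuracy metrics between co-located SWOT water surface
    elevations and in-situ gauge readings (both in metres)."""
    diff = np.asarray(swot_wse_m, float) - np.asarray(gauge_wse_m, float)
    return {
        "bias": float(diff.mean()),                  # systematic offset
        "std": float(diff.std(ddof=1)),              # random spread
        "rmse": float(np.sqrt((diff ** 2).mean())),  # total error
    }

# Synthetic example (not real SWOT or gauge data)
m = elevation_metrics([10.12, 10.48, 9.97, 10.31],
                      [10.00, 10.40, 10.05, 10.20])
```

In practice the co-location step (matching SWOT passes to gauge timestamps and reach/lake geometries) dominates the work; the metrics themselves reduce to the statistics above.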
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall L3)

Presentation: KaRIn Noise Reduction Using a Convolutional Neural Network for the SWOT 2km and 250m Ocean Product

Authors: Anaelle Treboutte, Gaetan Meis, Marie-Isabelle Pujol, Gerald Dibarboure
Affiliations: CLS, CNES
The recent launch of the new altimetry satellite SWOT (Surface Water and Ocean Topography) was a revolution in oceanography. It can observe ocean dynamics at mesoscale and submesoscale by measuring the Sea Surface Height (SSH) with the KaRIn (Ka-band Radar Interferometer) instrument. It provides two-dimensional SSH measurements at two resolutions: 2 km and 250 m (the latter also known as the Unsmoothed product). For each product, the SSH field is impacted by a noise that comes from the instrument and is referred to as KaRIn noise. For the 2 km product, the KaRIn noise is correlated, lower than expected, and does not have a significant impact on the SSH itself. However, SSH derivatives quickly amplify the millimeter-scale noise and are not usable without denoising. The methods currently used in conventional nadir altimetry must be revised and readapted for these new data. Therefore, a neural network model based on a U-Net architecture was developed, trained and tested with simulated data in the North Atlantic. The U-Net described in Tréboutte et al. (2023) gives satisfying results on real SWOT data except where wave heights are large (Dibarboure et al., 2024). Validation and improvement of the U-Net are ongoing. The KaRIn noise on the Unsmoothed product is a random centimeter-scale noise; the SSH field must be denoised in order to exploit finer scales. The U-Net was adapted and retrained for this product. All the denoised SSH fields are available in the Level-3 product on the Aviso website. Dibarboure, G., Anadon, C., Briol, F., Cadier, E., Chevrier, R., Delepoulle, A., Faugère, Y., Laloue, A., Morrow, R., Picot, N., Prandi, P., Pujol, M.-I., Raynal, M., Treboutte, A., and Ubelmann, C.: Blending 2D topography images from SWOT into the altimeter constellation with the Level-3 multi-mission DUACS system, EGUsphere [preprint], https://doi.org/10.5194/egusphere-2024-1501, 2024.
Tréboutte, A., Carli, E., Ballarotta, M., Carpentier, B., Faugère, Y., Dibarboure, G., 2023. KaRIn Noise Reduction Using a Convolutional Neural Network for the SWOT Ocean Products. Remote Sens. 15, 2183. https://doi.org/10.3390/rs15082183
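The motivation for denoising, that differentiation amplifies millimetre-scale noise far more than the underlying signal, can be illustrated with a small numpy sketch on a synthetic SSH field; the grid, noise level and signal here are illustrative assumptions, and this is not the U-Net itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2 km SSH grid: smooth mesoscale signal plus mm-scale noise
x = np.linspace(0.0, 2.0 * np.pi, 256)
X, Y = np.meshgrid(x, x)
signal = 0.10 * np.sin(X) * np.cos(Y)            # ~10 cm mesoscale SSH (m)
noise = 0.002 * rng.standard_normal(X.shape)     # ~2 mm instrument noise (m)
ssh = signal + noise

# Along-track slope (the first step toward geostrophic velocity)
dx = 2000.0  # 2 km pixel spacing in metres
slope_noisy = np.gradient(ssh, dx, axis=1)
slope_clean = np.gradient(signal, dx, axis=1)

# Noise-to-signal ratio before and after differentiation
nsr_ssh = noise.std() / signal.std()
nsr_slope = (slope_noisy - slope_clean).std() / slope_clean.std()
# Differentiation inflates the relative noise level dramatically,
# which is why SSH must be denoised before computing derivatives.
```

The noise is a few percent of the SSH signal, but after a single derivative it is comparable to (or larger than) the slope signal itself.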
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall L3)

Presentation: Inland Water Monitoring with the Surface Water and Ocean Topography Satellite

Authors: Hind Oubanas, Tamlin Pavelsky
Affiliations: INRAE G-EAU, University Of North Carolina
The Surface Water and Ocean Topography (SWOT) satellite mission represents a significant advancement in hydrological sciences as the first wide-swath satellite designed to investigate surface water within the global water cycle. Utilizing Ka-band radar interferometry, SWOT provides, for the first time, simultaneous, high-resolution maps of water surface elevation and inundation extent across rivers, lakes, reservoirs, and wetlands globally. Over the past decade, the hydrologic remote sensing community has developed new methodologies and scientific frameworks to fully leverage the potential of SWOT data, enhancing our understanding of global water fluxes and fundamentally transforming how we perceive and analyze surface water dynamics. In this presentation, we will highlight what SWOT has added to inland water observation compared to previous satellite missions, through its different products. We will explore SWOT's performance over rivers and lakes in measuring water surface elevation, extent and slope, and in estimating discharge at the global scale. We will present the latest advances and achievements using SWOT data from the community efforts led through multiple SWOT working groups. Several investigations using this new dataset are already uncovering valuable insights into hydrologic processes, with performance exceeding the mission's pre-launch science requirements for certain variables. However, challenges remain, such as the presence of dark water associated with specular reflectance and other sources, misalignment of SWOT pixels to rivers when they were actually collected over other surfaces, and the outward propagation of very strong signals collected directly beneath the satellite, known as nadir ringing. These issues are actively being addressed through ongoing research and algorithm refinement to improve future data releases and ensure the highest possible data quality.
By presenting SWOT's capabilities and the collaborative efforts of the scientific community, this presentation aims to illustrate how wide-swath altimetry is expanding our understanding of Earth's changing water systems and revolutionizing the field of hydrology.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall L3)

Presentation: Investigating the impact of sea state on SWOT-KaRIn measurements of Significant Wave Height and Sea Surface Height

Authors: Eva Le Merle, Daria Andrievskaia, Adrien Martin, Mahmoud El Hajj, Yannice Faugere
Affiliations: NOVELTIS, CNES
The Ka-band Radar Interferometer (KaRIn) onboard the Surface Water and Ocean Topography (SWOT) mission delivers groundbreaking high-resolution, two-dimensional ocean topography data, revealing mesoscale to sub-mesoscale processes with unprecedented detail. However, the precision of sea surface height (SSH) measurements is affected by sea state-induced distortions, collectively known as Sea State Bias (SSB). Understanding and mitigating SSB is critical to unlocking the full potential of SWOT and KaRIn for oceanographic research. In addition to topographic measurements, KaRIn is also capable of providing sea state information such as significant wave height (SWH), a key parameter for SSB estimation. This study focuses on the statistical analysis of SWOT nadir and KaRIn significant wave height measurements, as influenced by cross-track distance and sea state characteristics. By comparing SWH data from SWOT (KaRIn and nadir) with model outputs, CFOSAT observations, and wave buoy records, we quantify measurement differences and identify the driving conditions behind these variations. Initial comparisons between SWOT data (KaRIn and nadir) and ERA5 model outputs reveal several findings. First, there is strong agreement between SWOT nadir and ERA5 data. However, comparisons between KaRIn-derived SWH and those from ERA5 and the nadir altimeter indicate that KaRIn SWH measurements tend to exhibit a systematic high bias on average. Interestingly, the largest discrepancies do not align with the most extreme sea states (highest SWH). Further analyses reveal that the spread and positive bias of KaRIn SWH measurements increase with cross-track distance, with the standard deviation rising from 0.2 m near nadir to 0.5 m at the far edge of the swath. Additionally, differences between KaRIn and SWOT nadir SWH were examined as a function of proximity to the coast.
While no clear trends emerged, it appears that the most significant differences generally occur closer to the coast. Lastly, we assessed the impact of these discrepancies on sea surface height (SSH) accuracy in relation to wave parameters such as wavelength and direction, in order to revisit SSB models. Beyond its immediate implications for SSB correction, this work paves the way for leveraging SWOT's high-resolution 2D SWH maps to explore near-coastal wave dynamics, a domain currently underexplored due to observational limitations. By validating SWOT's wave products, this study addresses a key barrier to improving SSH accuracy and our understanding of high-resolution wave dynamics, ensuring that SWOT data can robustly support global and coastal oceanography.
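The cross-track analysis described above amounts to binned statistics of KaRIn-minus-reference SWH differences as a function of distance from nadir. A minimal sketch on synthetic data (all values illustrative, not SWOT measurements) might look like:

```python
import numpy as np

def binned_swh_stats(cross_track_km, swh_diff_m, bin_edges_km):
    """Bias (mean) and spread (std) of SWH differences, binned by
    cross-track distance."""
    idx = np.digitize(cross_track_km, bin_edges_km)
    stats = []
    for i in range(1, len(bin_edges_km)):
        sel = swh_diff_m[idx == i]
        if sel.size > 1:
            stats.append(((bin_edges_km[i - 1], bin_edges_km[i]),
                          sel.mean(), sel.std(ddof=1)))
    return stats

# Synthetic demo: spread of KaRIn-minus-reference SWH grows off-nadir
rng = np.random.default_rng(1)
dist = rng.uniform(10, 60, 5000)                     # km from nadir
diff = rng.standard_normal(5000) * (0.2 + 0.006 * (dist - 10))
stats = binned_swh_stats(dist, diff, np.array([10.0, 35.0, 60.0]))
```

Each entry holds the bin limits, the mean difference (bias) and its standard deviation (spread); in the synthetic example the far bin shows the larger spread, mimicking the 0.2 m to 0.5 m growth reported above.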
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall L3)

Presentation: SWOT over the ice-covered polar oceans: first results

Authors: Sahra Kacimi, Sermsak Jaruwatanadilok, Ron Kwok
Affiliations: Jet Propulsion Laboratory, Applied Physics Laboratory, Polar Science Center, University of Washington
With the launch of the Surface Water and Ocean Topography (SWOT) mission in December 2022, the global open oceans are now mapped for the very first time in 3-D, a significant advancement compared to traditional profiling altimetry. The SWOT payload includes a wide-swath Ka-band interferometer (KaRIn) that provides measurements of fine-scale sea surface height (<1 km) with centimetre-level accuracy. This unique sampling configuration provides unprecedented observations of the sea ice covers in the Arctic and Southern oceans. With an inclination of 77.6°, the wide-swath mapping capability of SWOT provides full coverage of the entire Antarctic ice cover and a large fraction of the Arctic marginal ice zone in only 10 days. In this presentation, we provide a first examination of SWOT observations over the polar ice-covered oceans from the 1-day (Cal/Val) and 21-day (science) repeat orbits. The Cal/Val orbit provided daily revisits of the same region from March 29 to July 10, 2023. This dataset is extremely useful for identifying the formation of open-water leads within the ice cover: quasi-specular leads appear bright (high backscatter) and are associated with lower heights relative to the surrounding ice. The correct identification of water returns from leads is crucial for the subsequent determination of sea surface height and the calculation of sea ice freeboard and sea ice thickness. Using near-coincident observations from ICESat-2, we first assess our ice-water discrimination scheme developed for SWOT observations. Results show good agreement in the location of leads across the two instruments. We further compare SWOT and ICESat-2 sea ice freeboard and sea surface height estimates over the Cal/Val and science orbit periods. Finally, we showcase the potential use of this dataset in supporting our understanding of some of the key physical processes that drive the variability of the fast-changing sea ice cover of the polar oceans.
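The lead-identification idea described above (bright, quasi-specular returns that also sit below the surrounding ice surface) can be sketched as a simple two-criterion mask; the thresholds, window size and function name below are illustrative assumptions, not the authors' actual discrimination scheme:

```python
import numpy as np

def detect_leads(sigma0_db, height_m, sigma0_thresh_db=15.0, window=11):
    """Flag candidate open-water leads along track: quasi-specular
    (bright) radar returns that also lie below the local ice surface.
    Thresholds and window size are illustrative, not mission values."""
    height_m = np.asarray(height_m, dtype=float)
    pad = window // 2
    padded = np.pad(height_m, pad, mode="edge")
    # Local reference surface: running median of height
    local = np.array([np.median(padded[i:i + window])
                      for i in range(height_m.size)])
    bright = np.asarray(sigma0_db) > sigma0_thresh_db
    low = height_m < local
    return bright & low

# Synthetic demo: one bright, low lead in an otherwise uniform ice floe
sigma0 = np.full(50, 10.0)
height = np.full(50, 0.3)
sigma0[25], height[25] = 25.0, 0.0   # the lead
mask = detect_leads(sigma0, height)
```

Requiring both criteria suppresses false positives from bright but elevated ice features, mirroring why backscatter and relative height are used together in the abstract.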
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall L3)

Presentation: Improvements in SWOT HR water classification and area estimation

Authors: Dr Roger Fjørtoft, Dr Claire Pottier, Dr Nicolas Gasnier, Dr Damien Desroches, Lucie Labat-Allée, Dr Mathilde de Fleury, Dr Manon Delhoume
Affiliations: Centre National d'Etudes Spatiales, CS Group
Water classification and estimation of waterbody area are key steps in the operational processing of KaRIn HR images from the SWOT altimetry mission [1, 2]. SWOT was launched in December 2022, and since the end of March 2023 has provided geolocated water surface elevation and extent for continental water surfaces, globally and repeatedly, with two or more observations per 21-day orbit cycle. A first global performance assessment was presented at the SWOT Science Validation Meeting in June 2024, with accuracies meeting or close to the Science Requirements [3]. Since then, the algorithms and auxiliary data used for operational processing have been further improved. We here present these evolutions and their impact on performance, focusing mainly on the estimation of lake area. Some options for future improvements are also addressed. The algorithm selected for operational water detection in SWOT HR images is a binary Bayesian classifier, with Markov Random Field (MRF) regularization and iterative estimation of water and land backscattering characteristics [4, 5, 6]. It processes thousands of images every day, with quite satisfactory results, and only minor algorithm modifications and parameter adjustments have been made so far. The basic underlying hypothesis is that water is much brighter than the surrounding land surfaces [7], which is mostly, but not always, true for such Ka-band images at near-nadir incidences (~1°-4°). An important exception is so-called “dark water”, which occurs when there is neither wind nor swell at the water surface, so that it acts like a mirror and practically no signal gets back to the radar [8]. By comparing the detected parts of a waterbody with a prior water probability map (based mainly on the Global Water Surface Occurrence mask [9]), the missing “dark water” parts can be flagged [6, 10] and accounted for in the subsequent area estimates [11, 12].
This mechanism was present in the processing chain from the very beginning of the mission, and contributed to the area performances presented at the 2024 Validation Meeting. However, since then, JPL members of the SWOT Algorithm Development Team have improved the projection of the thresholded prior water probability map into radar geometry, using phase information from the SWOT HR data rather than just prior water surface elevations [6]. Another challenge is so-called “bright land” that can be erroneously detected as water. This typically occurs for urban areas and other man-made structures, but can also result from topographic layover and humid soil. A prior bright land mask (based mainly on the World Settlement Footprint [13] and ESA WorldCover [14] masks) has been used to flag such pixels. Since the Validation Meeting, the prior bright land mask has been updated, and its projection into radar geometry reworked. As the SWOT HR reception window includes nadir, and despite the attenuation by the antenna pattern, a phenomenon referred to as “specular ringing” may occur, typically when there is a waterbody at nadir, generating a characteristic bright stripe across the first part of the nominal swath (10-60 km), thereby causing false water detection and subsequently degraded river widths and lake extents. Improved flagging of this phenomenon [6], and better use of the flag in downstream processing to generate vector products for rivers and lakes [11, 12], have been implemented in recent algorithm updates. Other recent improvements with impact on area performances include updates to the SWOT River Database (SWORD) [15], version 17, and the Prior Lake Database (PLD) [16, 17], version 2.00.
Using the same approach as for the SWOT Science Validation Meeting, relying on reference water masks derived from high-resolution optical and radar images, segmented into river reaches and lakes based on the above-mentioned prior databases [15, 16, 17], the positive effect of these improvements on water classification and area estimation performance will be presented, focusing on lakes. Potential long-term improvements for SWOT, or in the perspective of the S3-NG mission, will also be addressed, including a water detection algorithm that relies more heavily on prior data to detect and estimate the extent of narrow rivers [18, 19].
References:
[1] L.-L. Fu, D. Alsdorf, R. Morrow, E. Rodriguez, and N. Mognard, “SWOT: The surface water and ocean topography mission: Wide-swath altimetric elevation on Earth,” Jet Propulsion Laboratory, National Aeronautics and Space Administration, Washington, D.C., USA, JPL Publication 12-05, 2012.
[2] M. Durand, L. Fu, D. P. Lettenmaier, D. E. Alsdorf, E. Rodriguez, and D. Esteban-Fernandez, “The surface water and ocean topography mission: Observing terrestrial surface water and oceanic submesoscale eddies,” Proc. IEEE, vol. 98, no. 5, pp. 766–779, May 2010.
[3] Jet Propulsion Laboratory, “Surface water and ocean topography mission (SWOT): Science requirements document,” JPL D-61923, Rev. B, SWOT NASA/JPL Project, Pasadena, CA, 2018.
[4] S. Lobry, “Markovian models for SAR images: Application to water detection in SWOT satellite images and multi-temporal analysis of urban areas,” Télécom ParisTech, Paris, France, 2017.
[5] S. Lobry, L. Denis, B. Williams, R. Fjørtoft, and F. Tupin, “Water Detection in SWOT HR Images Based on Multiple Markov Random Fields,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2019.
[6] Jet Propulsion Laboratory, “Algorithm Theoretical Basis Document: L2_HR_PIXC Level 2 Processing,” JPL D-105504, Pasadena, CA, 2024.
[7] R. Fjørtoft et al., “KaRIn on SWOT: Characteristics of near-nadir Ka-band interferometric SAR imagery,” IEEE Trans. Geosci. Remote Sens., vol. 52, no. 4, pp. 2172–2185, Apr. 2014.
[8] Jet Propulsion Laboratory, “SWOT Science Data Products User Handbook,” JPL D-109532, Pasadena, CA, 2024.
[9] J.-F. Pekel, A. Cottam, N. Gorelick and A. S. Belward, “High-resolution mapping of global surface water and its long-term changes,” Nature, no. 540, pp. 418-422, 2016.
[10] Jet Propulsion Laboratory, “SWOT Level 2 KaRIn high rate water mask pixel cloud product (L2_HR_PIXC),” JPL D-56411, Pasadena, CA, 2024.
[11] Jet Propulsion Laboratory, “SWOT Level 2 KaRIn high rate river single pass vector science data product,” JPL D-56413, Pasadena, CA, 2024.
[12] Centre National d’Etudes Spatiales, “SWOT Level 2 KaRIn high rate lake single pass vector science data product,” SWOT-TN-CDM-0674-CNES, Toulouse, France, 2024.
[13] “World Settlement Footprint,” 2019. [Online]. Available: https://geoservice.dlr.de/web/maps/eoc:wsf2019
[14] “ESA WorldCover,” 2020. [Online]. Available: https://esa-worldcover.org/en
[15] E. H. Altenau, T. M. Pavelsky, M. T. Durand, X. Yang, R. P. d. M. Frasson and L. Bendezu, “The Surface Water and Ocean Topography (SWOT) mission River Database (SWORD): A global river network for satellite data products,” Water Resources Research, vol. WRCS25408, 2021.
[16] Centre National d’Etudes Spatiales, “SWOT Prior Lake Database,” SWOT-IS-CDM-1944-CNES, Toulouse, France, 2024.
[17] J. Wang, C. Pottier, C. Cazals, M. Battude, Y. Sheng, C. Song, M. S. Sikder, X. Yang, L. Ke, M. Gosset, R. Reis, A. Oliveira, M. Grippa, F. Girard, G. H. Allen, S. Biancamaria, L. C. Smith, J.-F. Créteaux and T. M. Pavelsky, “The Surface Water and Ocean Topography Mission (SWOT) Prior Lake Database (PLD): Lake mask and operational auxiliaries,” Water Resources Research, 2024 (submitted).
[18] N. Gasnier, L. Denis, R. Fjørtoft, F. Liège and F. Tupin, “Narrow River Extraction From SAR Images Using Exogenous Information,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 5720-5734, 2021.
[19] N. Gasnier, R. Fjørtoft, B. Williams, D. Desroches, L. Labat-Allée, J. Maxant, “Early Results on Water Detection in SWOT HR Images,” in Proc. IGARSS, Athens, Greece, 2024.
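For intuition, the core of a binary Bayesian classifier with iterative estimation of class statistics can be sketched as follows; this toy version omits the MRF spatial regularization used in the operational chain [5], and all names and numbers are illustrative assumptions:

```python
import numpy as np

def classify_water(power, n_iter=20):
    """Toy binary Bayesian water/land classification of a backscatter
    image: alternate between estimating per-class Gaussian statistics
    and reassigning pixels by maximum likelihood (equal priors).
    No MRF spatial regularization, unlike the operational algorithm."""
    power = np.asarray(power, dtype=float)
    water = power > np.median(power)  # initial split: water assumed brighter
    for _ in range(n_iter):
        mu_w, sd_w = power[water].mean(), power[water].std() + 1e-9
        mu_l, sd_l = power[~water].mean(), power[~water].std() + 1e-9
        # Gaussian log-likelihood per class
        ll_w = -0.5 * ((power - mu_w) / sd_w) ** 2 - np.log(sd_w)
        ll_l = -0.5 * ((power - mu_l) / sd_l) ** 2 - np.log(sd_l)
        new = ll_w > ll_l
        if np.array_equal(new, water):
            break
        water = new
    return water

# Synthetic demo: dim land vs bright water backscatter
rng = np.random.default_rng(2)
power = np.concatenate([rng.normal(1.0, 0.2, 2000),   # land
                        rng.normal(5.0, 0.5, 2000)])  # water
truth = np.concatenate([np.zeros(2000, bool), np.ones(2000, bool)])
mask = classify_water(power)
acc = (mask == truth).mean()
```

The operational classifier adds an MRF prior so that neighbouring pixels favour the same label, which is what suppresses speckle-induced isolated misclassifications.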
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall M1/M2)

Session: F.01.03 Trends in Earth Observation Education and Capacity Building: Embracing Emerging Technologies and Open Innovations - PART 1

Education activities in recent years have undergone a significant transformation driven by the global digitalization of education and training. Traditional teaching methods, like face-to-face trainings provided to small groups of students, are being complemented or even replaced by massive open online courses (MOOCs) with hundreds of participants following a course at their own pace. At the same time, the Earth observation sector continues to grow at a high rate; in Europe, the European Association of Remote Sensing Companies (EARSC) reported in 2023 that the sector had grown by 7.5% over the past 5 years.
This session will cover new trends in modern education in the Space and EO domains as well as methods, use cases, and opportunities to cultivate Earth observation literacy in diverse sectors, such as agriculture, urban planning, and public health. It will focus on new methods and tools used in EO education and capacity building, such as EO data processing in the cloud, processing platforms and virtual labs, dashboards, new and innovative technologies, challenges, hackathons, and showcase examples that make successful use of EO data. Participants will also have the opportunity to share and discuss methods for effective workforce development beyond typical training or education systems.
Based on the experience of Space Agencies, international organisations, tertiary lecturers, school teachers, universities and companies working in the domain of space education, this session will be an opportunity to exchange ideas and lessons learnt, discuss future opportunities and challenges that digital transformation of education has brought, consolidate recommendations for future education and capacity building activities, and explore opportunities to further collaborate, build EO literacy in new users outside of the Earth and space science sector and expand the impact of EO across sectors.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall M1/M2)

Presentation: Trends in Earth Observation Education and Capacity Building: Embracing Collaboration and Innovation

Authors: PhD. Terefe Hanchiso Sodango, Prof. Effiom Oku, PhD Fabiola D. Yépez Rincón, PhD Rishiraj Dutta, PhD Mark Higgins, PhD Ganiy Agbaje, PhD Jean Danumah, PhD William Straka III, Álvaro Germán Soldano, PhD CM Bhatt, PhD Luca Brocca, Martyna A. Stelmaszczuk-Górska, Erin Martin, Yakov M. Moz, PhD Altay Özaygen, Dr. Nesrin Salepci
Affiliations: Wolkite University, University of Abuja, Universidad Autónoma de Nuevo León, Asian Disaster Preparedness Center (ADPC), European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT), National Space Research and Development Agency, African Regional Centre for Space Science and Technology Education (ARCSSTE-E), Université Félix Houphouët Boigny, University of Wisconsin–Madison, National Commission on Space Activities (CONAE), Indian Institute for Remote Sensing (IIRS/ISRO), National Research Council of Italy, Research Institute for Geo-Hydrological Protection, Friedrich Schiller University Jena, Erin Martin Consulting, Booz Allen Hamilton, Metis Analytica, Université Paris Saclay, Institut Mines-Télécom Business School, LITEM
The accelerating reality of climate change and the increasing disaster risks observed on our living planet underscore the urgent need to connect users with Earth Observation (EO) technologies, education, and capacity-building efforts. These actions are critical to enabling meaningful impact, ensuring no community is left behind, and contributing to sustainable development. This challenge demands new, inclusive, and collaborative approaches that transcend traditional methods. The Earth Observation Training, Education, and Capacity Development Network (EOTEC DevNet) is addressing this challenge by fostering global and regional collaboration, sharing knowledge, and aligning EO resources with regional and thematic priorities. EOTEC DevNet operates as a virtual network of networks through Regional Communities of Practice (CoPs) in Africa, the Americas, Asia-Oceania, and Europe. These CoPs unite educators, policymakers, researchers, and data users to share experiences, identify regional needs, and develop strategies for leveraging EO to solve societal challenges. By linking regional priorities with global frameworks such as the 2030 Agenda for Sustainable Development, the Sendai Framework for Disaster Risk Reduction, and the Paris Agreement, EOTEC DevNet strengthens EO education and capacity-building efforts worldwide.
Practical Tools and Methods
EOTEC DevNet helps users access and use tools and methods tailored to specific regional challenges. For instance, the Flood Tools Tracker and Drought Tools Matrix simplify navigation through existing EO resources, making it easier for decision-makers and practitioners to identify the tools and training materials they need. Interactive dashboards and digital platforms further enhance accessibility and usability, enabling stakeholders to integrate valuable data and information into their operations.
Sharing Success Stories
EOTEC DevNet gathers and shares case studies and guidance documents that highlight successful EO capacity-building efforts. Among these is the Global Flood Extent Use Case, which includes five regional event analyses that demonstrate the practical application of EO data in addressing flood risks and offers a replicable model for others to enhance disaster preparedness and response. Another notable effort is the ongoing development of Needs Assessment Guidance, designed to help stakeholders systematically identify and address gaps in EO capacity building. These efforts are practical examples that others can adapt and apply in their own regions.
Promoting Engagement and Knowledge Sharing
EOTEC DevNet’s strength lies in its ability to bring people together. Through regular online meetings, spotlight presentations, task teams, webinars, and thematic discussions, the network creates opportunities for sharing ideas and developing solutions. Spotlight presentations feature top practitioners and researchers who share their experience on the latest trends, tools, and strategies for addressing disasters. These presentations are particularly useful in helping members stay informed, resolve queries, interact with experts, and identify practical approaches for their own work. The result is a network of collaborators who learn from each other, explore new trends, and strategize on solutions to regional and global challenges. Since its establishment, EOTEC DevNet has organized over 105 events, engaging more than 1,300 participants as of November 2024. These events include 43 Task Team meetings across the four regions (Africa, Americas, Asia-Oceania, and Europe), as well as 43 meetings of the Flood Working Groups and a global webinar on flood disaster risk management. The network has also supported five events for the Drought Working Group, consisting of global consultation meetings, a kick-off event, and webinars. Additionally, 13 Global Task Team meetings and a webinar on the Copernicus Data Space Ecosystem were held to foster broader collaboration. Every meeting features at least one Spotlight presentation, showcasing the latest tools, strategies, and success stories shared by experienced practitioners. These Spotlights serve as a cornerstone for knowledge exchange and engagement, helping participants stay informed and interact with experts.
Lessons from Collaboration
EOTEC DevNet uses tools such as social network analysis to understand how stakeholders connect, identify gaps in capacity-building efforts, and improve knowledge flows. This ensures that collaborations are effective and that resources and training efforts align with the diverse needs of communities.
Supporting Global Goals
EOTEC DevNet’s work aligns closely with global frameworks for sustainable development, disaster risk reduction, and climate action. By bridging gaps in EO knowledge and resource availability across regions, the network contributes to achieving the Sustainable Development Goals (SDGs) and enhancing climate resilience. Its collaborative model brings together global, regional, and local stakeholders, minimizing duplication, fostering partnerships, and maximizing the value of EO resources.
Looking Ahead
As EO technologies continue to evolve, EOTEC DevNet is committed to expanding its impact by scaling up knowledge-sharing initiatives and embracing digital innovations. The network will continue to address barriers to resource access, ensuring that communities worldwide benefit from EO tools and training. By fostering collaboration and innovation, EOTEC DevNet empowers communities to tackle the complex challenges of climate change, disasters, and sustainability. This contribution will share insights into how EOTEC DevNet operates, emphasizing its role as an innovative hub for collaboration and knowledge sharing. It will demonstrate the value of community-based approaches in driving change, with practical examples showcasing how the network fosters capacity building through open innovation and promotes global and regional cooperation.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall M1/M2)

Presentation: Online and in-person learning for decision making: The NASA Applied Remote Sensing Training (ARSET) Program

Authors: Brock Blevins, Melanie Follette-Cook, Suzanne
Affiliations: NASA Applied Remote Sensing Training Program (ARSET)
The NASA Applied Remote Sensing Training (ARSET) Program, part of the NASA Earth Action Capacity Building Program, has sought to close the gap between the availability of remote sensing data and the use of Earth Observations for informed decision making by providing cost-free training targeted to working professionals. Since 2009, ARSET has trained more than 140,000 participants from 186 countries in over 19,000 unique organizations across seven Earth Science themes: health and air quality, climate and resilience, agriculture, water resources, disasters, ecological conservation, and wildland fires. Trainings are designed with a learner-centered, goal-driven approach that employs a combination of teaching strategies, including lecture, demonstrations, in-class and independent exercises, and case-study analysis. Post-training homework provides participants with opportunities for learning reinforcement and self-assessment of mastery of learning goals, and provides the ARSET training team with insights on impact to inform continuous improvement. ARSET trainings are offered in three formats (virtual live instructor-led, virtual asynchronous self-paced, and in-person) and at a range of levels according to technical skill, from introductory to advanced. In this presentation, we will discuss the challenges and opportunities of ARSET training modalities, and best practices and approaches to training design, delivery, and impact evaluation.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall M1/M2)

Presentation: Expanding the Access to Hyperspectral Remote Sensing: Open Science and Education Initiatives by the EnMAP Science Segment

Authors: Theodora Angelopoulou, Arlena Brosinsky, Akpona Okujeni, Saskia Foerster, Katrin Koch, Daniel Scheffler, Kathrin Ward, Robert Milewski, Karl Segl, Saeid Asadzadeh, Alexander Khokanovsky, Tobias Hank, Stefanie Steinhauser, Astrid Bracher, Marianna Soppa, Najoro Randrianalisoa, Benjamin Jakimow, Andreas Janz, Michael Bock, Nicole Pinnel, Vera Krieger, Sabine Chabrillat
Affiliations: German Research Centre for Geosciences (GFZ), Helmholtz Centre, Humboldt-Universität zu Berlin (HU), German Environment Agency (UBA), German Weather Service (DWD), Ludwig-Maximilians-Universität München (LMU), Alfred-Wegener-Institute Helmholtz Centre for Polar and Marine Research (AWI), Institute of Environmental Physics, University Bremen, German Aerospace Center (DLR), Earth Observation Center (EOC), German Aerospace Center (DLR), German Space Agency, Leibniz University Hannover, Institute of Earth System
Hyperspectral remote sensing offers novel opportunities to develop innovative products and services within the framework of the European Copernicus programme, addressing global environmental challenges and supporting policy implementation. Recent years have witnessed rapid advancements, driven by the launch of scientific hyperspectral satellite missions such as EnMAP, PRISMA and DESIS, paving the way for ESA’s upcoming flagship mission CHIME. Despite significant advances within the scientific community, widespread use of hyperspectral remote sensing remains limited due to challenges such as technical complexity, restricted access to tools and data, and a lack of tailored educational resources. Addressing these challenges requires dedicated efforts in education and user engagement to bridge the gap between scientific innovation and practical applications, enabling broader use by non-experts. The EnMAP Science Segment has embraced Open Science principles to make hyperspectral knowledge and tools more accessible. As part of the EnMAP mission, a comprehensive scientific programme fostering Open Science activities has been established, coordinated by the German Research Centre for Geosciences (GFZ), supported by the German Space Agency at the German Aerospace Center (DLR), and partnered with leading institutions including the Ludwig-Maximilians-Universität München (LMU), the Alfred-Wegener-Institute Helmholtz Centre for Polar and Marine Research (AWI), and Humboldt-Universität zu Berlin (HU). This programme encompasses the development and provision of algorithms and applications, the free and open-source EnMAP-Box software, benchmark datasets, and the HYPERedu training initiative. HYPERedu plays a pivotal role both in bringing scientific developments into education and in preparing users for the effective uptake of hyperspectral (EnMAP) data in research.
It also addresses the needs of public authorities and of potential industry players developing commercial applications based upon hyperspectral EnMAP data. HYPERedu targets graduate students at master level and professionals in academia, industry, and governmental institutions, offering a variety of freely accessible learning resources. These include Massive Open Online Courses (MOOCs), annotated slide collections, hands-on tutorials built on the EnMAP-Box software, educational films and screencasts, as well as interactive graphics. All materials are freely available under a CC-BY license and hosted on the EO-College platform, facilitating their integration into university curricula, professional training programmes, and self-paced learning. We present an overview of the ongoing efforts by the EnMAP science community to translate scientific knowledge into educational tools through HYPERedu, with MOOCs being the most developed educational initiatives. These MOOCs are designed for flexible, self-paced learning, combining fundamental knowledge with hands-on exercises using the EnMAP-Box, with participants earning a certificate upon completion. The first MOOC, "Beyond the Visible: Introduction to Hyperspectral Remote Sensing," launched in November 2021, covers the fundamentals of imaging spectroscopy. Subsequent MOOCs have addressed agricultural applications (2022), EnMAP data access (2023), and soil applications (2024). Further applied topics, such as forestry, geology, and coastal waters, will become accessible in the near future, expanding these resources. HYPERedu and its MOOCs are well received and currently spearhead education in hyperspectral remote sensing, and further enhancements are planned to stay aligned with evolving educational trends. Through this contribution, we further aim to actively gather insights and foster discussions on future directions for expanding content and integrating innovative learning technologies.
Bridging scientific development with education offers a valuable opportunity to ensure broader access and optimize the use of modern Earth Observation technologies. In this context, the EnMAP Science Segment, through its HYPERedu initiative, underscores the importance of education in facilitating the widespread adoption of hyperspectral remote sensing, making this discipline accessible to a diverse user community.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall M1/M2)

Presentation: Advancing Earth Observation Literacy: A Strategic Approach to Skills Development in the Downstream Space Sector

Authors: Martyna A. Stelmaszczuk-Górska, Assoc. Prof. Dr Angela Aragon-Angel, Eva-Maria Steinbacher, Gabriella Povero, Bärbel Deisting, Danny Vandenbroucke, Dr. Carsten Pathe, Prof. Dr. Christiane Schmullius
Affiliations: Friedrich Schiller University Jena, Technical University of Catalonia, Paris Lodron Universität Salzburg, LINKS Foundation, bavAIRia e.V., Katholieke Universiteit Leuven, Earth Observation Services GmbH
The rapid growth of the downstream space sector and its expanding applications in industries such as agriculture, urban planning, public health, and disaster management highlight an urgent need to address the workforce skills gap. To fully unlock the potential of Earth Observation (EO) technologies, Satellite Communications (SatCom), and Global Navigation Satellite Systems (GNSS), innovative strategies are essential to foster downstream space literacy. The SpaceSUITE project—SPACE downstream Skills development and User uptake through Innovative curricula in Training and Education—provides a comprehensive framework for building these skills. A key distinguishing feature of SpaceSUITE is its deliberate shift from treating these subdomains in isolation to adopting an integrated approach, leveraging combined expertise to address overarching challenges. By combining data-driven methodologies and collaborative approaches, the project aims to meet the evolving demands of the workforce and strengthen the downstream space sector. A central pillar of SpaceSUITE’s strategy for building an adaptive, in-demand workforce is the Skills Intelligence Mechanism, a data-driven approach that analyzes labor market trends and evaluates educational offerings. This mechanism identifies skills gaps, supports the design of reskilling and upskilling programs, and ensures alignment with industry trends. By building connections between educational providers, industry leaders, and policymakers, the mechanism promotes sustainable and inclusive workforce development, equipping professionals to adapt to the challenges and opportunities of the downstream space sector. In addition, SpaceSUITE employs a persona-driven methodology to create targeted training programs tailored to specific professional needs. Personas, developed through detailed market analysis and stakeholder input, represent key user profiles such as data analysts, sustainability experts, and technical operators.
These personas guide the design of curricula and training materials to address both technical and transversal skills gaps, ensuring relevance and impact across diverse professional contexts. A business strategy is being developed to enable the mainstreaming and scaling of the training and educational environments created during the project, ensuring their long-term sustainability. Training materials will be integrated into a digital repository, providing seamless access to curated learning content and practical applications. The SpaceSUITE digital platform will support continuous learning by helping professionals navigate available training opportunities and engage with EO, SatCom, and GNSS concepts, applying them effectively in their fields. The results of SpaceSUITE’s initiatives demonstrate the value of integrating EO, SatCom, and GNSS literacy into diverse professional settings. Reactive training packages have already been deployed, covering topics such as the basics of remote sensing with hands-on disaster applications, image processing and machine learning for building management, an introduction to satellite communications, and GNSS data analysis, including quality assessment and massive data processing. These programs have equipped participants with the tools and knowledge needed to address immediate challenges effectively. Looking ahead, proactive activities will focus on scaling these efforts by incorporating emerging technologies and refining methodologies to keep pace with the rapidly evolving landscape of the downstream space sector. Furthermore, new skills for the digital and green transitions of Europe’s economy will be promoted through lifelong learning opportunities facilitated by the SpaceSUITE digital platform. This platform will help share best practices and monitor both available and needed skills to support ongoing workforce development.
This contribution will showcase how SpaceSUITE integrates innovative methodologies, tools, and collaborative strategies to enhance EO, SatCom, and GNSS literacy and capacity building. By emphasizing open innovation, cooperation, and cross-sectoral synergies, SpaceSUITE demonstrates how targeted education and vocational training efforts can advance workforce development, amplify the societal benefits of space technologies, and contribute to a more sustainable and resilient future.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall M1/M2)

Presentation: Large Language Models in Digital Education: Assessing Reliability, Efficiency, and Content Quality

Authors: Robert Eckardt, Dr. Carsten Pathe, Dr. Henryk Hodam, Dr. Nesrin Salepci, Dr. Martyna Anna Stelmaszczuk-Górska, Jun.-Prof. Dr. Andreas Rienow, Prof. Dr. Christiane Schmullius
Affiliations: Friedrich Schiller University Jena, Ruhr-Universität Bochum, EOS Jena GmbH, ignite education GmbH
The integration of Large Language Models (LLMs) into digital education is fundamentally transforming the production, personalization, and dissemination of educational content. As education increasingly adopts digital tools, the role of LLMs has become pivotal in reshaping how content is created and tailored to diverse audiences. This study critically examines the potential of LLMs within the Earth Observation (EO) educational content production lifecycle, focusing on their influence on reliability, operational efficiency, content quality, and learner engagement. By embedding LLMs throughout various stages - ranging from conceptualization and scripting to post-production and the development of interactive modules - this research evaluates both the opportunities and the inherent challenges associated with deploying LLMs in EO education. The use of LLMs presents transformative possibilities in the way EO educational content is conceptualized and developed. During the initial phases of content creation, LLMs can support the rapid generation of ideas and provide initial drafts for educational materials, thereby accelerating the overall production cycle. By leveraging vast databases of information, LLMs can suggest relevant topics, highlight critical areas of interest, and assist in drafting outlines that align with pedagogical objectives. This preliminary support helps educators and content creators streamline their workflows, saving time and ensuring that the content produced is grounded in the latest scientific findings and educational best practices. Furthermore, LLMs are capable of generating diverse content variations, allowing educators to choose from multiple approaches and select the one that best fits the learning objectives and target audience. Findings from controlled experiments demonstrate that LLMs substantially enhance efficiency by automating routine processes, such as script generation, voiceover production, and visual content creation.
These automated processes not only save time but also introduce a level of consistency that can be challenging to achieve manually, particularly when content is produced at scale. Such automation enables educators to concentrate on high-value tasks, including creative storytelling, contextual adaptation, and ensuring scientific accuracy. The creative aspects of educational content production, such as developing narratives that resonate with learners or contextualizing information to make it more relatable, benefit greatly from human involvement. Educators are thus able to focus their expertise on enriching the content, ensuring that it meets the cognitive and emotional needs of the learners, rather than being bogged down by repetitive tasks. However, sustained human oversight remains imperative to safeguard the quality and precision of specialized content, particularly in areas where domain-specific expertise is indispensable. While LLMs provide significant efficiencies, they are not without limitations. The nuances involved in interpreting EO data, particularly when conveying complex geospatial and environmental concepts, require a depth of understanding that current AI models may not fully possess. Human experts are crucial in verifying the accuracy of AI-generated content, ensuring that it adheres to educational standards and effectively communicates intricate concepts. This is particularly important in specialized domains like EO, where inaccuracies can lead to misconceptions and undermine the educational value of the content. Therefore, a hybrid approach that combines the scalability of LLMs with the precision of human expertise is essential for producing high-quality educational materials. Moreover, the incorporation of LLMs significantly augments translation capabilities, facilitating the creation of multilingual EO content and thereby expanding the global accessibility of these resources. 
The ability of LLMs to translate content into multiple languages with contextual accuracy is a major advantage for EO education, which often targets a diverse, international audience. By reducing language barriers, LLMs contribute to the democratization of knowledge, allowing learners from different linguistic backgrounds to access high-quality educational materials. This capability is particularly beneficial for regions where access to EO education is limited due to language constraints, thereby promoting inclusivity and broadening the impact of EO training programs. Additionally, LLMs can be used to localize content, adapting not only the language but also cultural references and examples to better resonate with specific audiences, thus enhancing learner engagement and comprehension. Despite these benefits, challenges such as the risk of inaccuracies in AI-generated content and the ethical considerations regarding data privacy underscore the necessity for a balanced approach to AI integration. LLMs, by virtue of being trained on large datasets, may inadvertently produce content that includes outdated or incorrect information, especially in a rapidly evolving field like EO. This necessitates rigorous validation by subject matter experts to ensure the accuracy and reliability of the educational content. Furthermore, ethical issues related to data privacy, bias in training data, and the potential for misuse of AI-generated materials must be carefully managed. Transparency in the use of AI tools, along with clear guidelines on data handling and content validation, is crucial to maintaining trust in AI-assisted educational processes. These challenges highlight the importance of developing robust frameworks for AI integration that prioritize both technological innovation and ethical responsibility. The integration of LLMs into EO education also holds potential for enhancing personalized learning experiences. 
By analyzing learner data, LLMs can adapt content to individual learning styles, providing tailored feedback and customized learning pathways. This adaptability makes learning more responsive to the unique needs of each student, fostering a more engaging and effective educational experience. For instance, LLMs can generate adaptive quizzes that adjust in difficulty based on learner performance, or provide supplementary materials that cater to areas where a learner may be struggling. Such personalized interventions help ensure that all learners, regardless of their starting point, can progress at their own pace and receive the support they need to master complex EO topics. Additionally, LLMs can facilitate the development of interactive learning modules, incorporating elements such as simulations and scenario-based activities that are particularly well-suited to EO education. Interactive modules allow learners to explore EO data in a hands-on manner, fostering deeper understanding through practical application. For example, LLMs can help generate interactive exercises where learners manipulate satellite imagery to observe environmental changes over time or analyze geospatial data to draw conclusions about climate patterns. These kinds of active learning opportunities not only enhance engagement but also help learners develop critical skills in data analysis and interpretation, which are crucial for understanding EO concepts. Furthermore, LLMs can be instrumental in supporting collaborative learning environments. By integrating with digital platforms, LLMs can facilitate discussion forums, group projects, and peer-to-peer interactions. They can summarize group discussions, suggest relevant resources, or even moderate debates by providing factual clarifications. This capability enhances the collaborative dimension of learning, allowing students to benefit from diverse perspectives while ensuring that discussions remain focused and informative. 
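As a toy illustration of the adaptive-quiz behaviour described above, the following sketch adjusts question difficulty from learner performance. The update rule, the difficulty levels, and the simulated learner are illustrative assumptions, not part of any system discussed in the abstract.

```python
# Hypothetical sketch of an adaptive quiz: difficulty steps up after a
# correct answer and down after a wrong one, clamped to a fixed range.
# The rule and levels are illustrative assumptions only.

def next_difficulty(level, correct, min_level=1, max_level=5):
    """Step difficulty up after a correct answer, down after a wrong one."""
    step = 1 if correct else -1
    return max(min_level, min(max_level, level + step))

# Simulate a learner who answers correctly below level 4 and fails at level 4
level = 2
history = []
for _ in range(6):
    correct = level < 4
    level = next_difficulty(level, correct)
    history.append(level)

print(history)  # -> [3, 4, 3, 4, 3, 4]
```

The oscillation around level 4 shows the quiz settling at the learner's ability boundary, which is the behaviour an adaptive pathway aims for.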
In EO education, where interdisciplinary understanding is often required, the ability of LLMs to provide contextually relevant information in real-time can significantly enhance the learning experience. Another significant advantage of LLMs is their ability to streamline the iterative refinement of educational content. Given the dynamic nature of EO, where new research and data are continually emerging, educational materials must be updated regularly to remain relevant. LLMs can assist in this process by quickly analyzing new research findings and integrating them into existing content. This ensures that learners always have access to the most current information, without placing an excessive burden on educators to manually revise materials. The iterative updating facilitated by LLMs not only keeps the content fresh but also allows educators to respond swiftly to changes in the field, maintaining the quality and relevance of EO education. Ultimately, this study presents elements of a comprehensive framework for leveraging LLMs to optimize the EO educational content lifecycle, emphasizing that AI, when judiciously integrated with human expertise, can enhance the efficiency, scalability, and inclusivity of digital education. By automating routine aspects of content creation, LLMs free educators to focus on the more nuanced and creative components of teaching, which are critical for fostering deep learning and engagement. This synergy between AI and human educators not only improves the quality and reach of EO education but also ensures that learning experiences are adaptable, culturally sensitive, and aligned with the needs of diverse learner populations. The potential for LLMs to personalize learning pathways, provide real-time feedback, and adaptively respond to learner inputs further underscores their role as a transformative tool in the evolving landscape of digital education. 
As educational institutions continue to explore the integration of AI, this study provides valuable insights into how LLMs can be effectively utilized to advance educational practices while maintaining the core values of human-centered learning. The findings of this study also have broader implications for the future of digital education beyond EO. As LLMs continue to evolve, their applications are extending to various other domains that require specialized knowledge and adaptability. The principles outlined in this research - such as the importance of human oversight, the need for ethical AI practices, and the benefits of personalization - are applicable across a wide range of educational contexts. By understanding how to effectively integrate LLMs into EO education, educators and policymakers can develop best practices that can be adapted to other fields, thus contributing to the broader transformation of education in the digital age. The scalability and flexibility offered by LLMs provide an opportunity to rethink traditional educational models, making them more inclusive, engaging, and responsive to the needs of learners.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall M1/M2)

Presentation: Expanding Access to EO Education: The Impact of IEEE GRSS Webinars on Global Learning

Authors: Gunjan Joshi, Stephanie Tumampos, Fairouz Stambouli, Keely Roth
Affiliations: Helmholtz-Zentrum Dresden-Rossendorf, Technical University of Munich, German Aerospace Center, Planet Labs, PBC
The IEEE Geoscience and Remote Sensing Society (GRSS) webinars have become a cornerstone in disseminating remote sensing education globally, emerging as a pivotal resource during the COVID-19 pandemic. Since their inception in 2020, these webinars have featured contributions from over 120 experts spanning 25 countries and have reached a global audience of enthusiasts, students, early researchers, and professionals. These webinars are made possible through the collaborative efforts of IEEE GRSS's eight technical committees and various other initiatives, which bring together diverse experts to deliver high-quality state-of-the-art research content. In the past year alone, the GRSS webinars have attracted over 6,500 registrations from 137 countries across all 7 continents, with over 4,000 participants joining the live sessions. The webinars have been met with overwhelmingly positive reception, as evidenced by consistently high ratings and enthusiastic feedback from attendees. In addition, a significant majority of participants expressed willingness to recommend the webinars to others and indicated strong interest in attending future sessions, underscoring the program’s impact and value to the community. These webinars, free for both members and non-members, foster open and inclusive access to knowledge and discussion. The webinars are also posted on YouTube, where they have garnered wide viewership and continue to serve as invaluable resources for asynchronous learning. These webinars have not only focused on fundamental topics but have also embraced a diverse range of emerging subjects, including Earth Observation (EO) data platforms, geospatial foundation models, digital twins, unmanned aerial vehicles (UAVs), hyperspectral remote sensing technologies, geospatial artificial intelligence, climate and environment, and emerging satellite missions. 
Additionally, they offer capacity- and skill-building sessions on academic writing, publishing, and career development. This contribution highlights the role of IEEE GRSS webinars in leveraging innovative digital tools and global collaboration to democratize EO education, and explores audience metrics and thematic trends to understand how these initiatives contribute to capacity building in today’s evolving digital education landscape. The webinars strive to make knowledge universally available, ensuring that anyone with internet access and a curiosity about our planet can easily access information. Looking ahead, IEEE GRSS webinars aim to further improve accessibility and interactivity, adapt to the changing needs of the EO and geospatial community, and promote a truly inclusive and borderless knowledge-sharing community.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.15/1.16)

Session: A.01.03 Fourier Transform Spectroscopy for Atmospheric Measurements

Fourier Transform Spectroscopy (FTS) is a powerful technique for atmospheric observations, allowing the Earth's and the atmosphere's thermal radiation to be sampled with high spectral resolution. This spectral range carries profile information on many atmospheric gases (water vapour, carbon dioxide, nitrous oxide, methane, ammonia, nitric acid, ...), but also information on cloud properties (e.g. phase or liquid/ice water path) and aerosol properties (e.g. dust optical depth). Measurements have been performed from satellites (nadir and limb), from the ground, and with airborne platforms for several decades, and have recently come into the foreground in ESA's Earth Explorer (EE) programme with the EE9 FORUM mission and the EE11 candidate CAIRT, both aiming to fly in convoy with the FTS IASI-NG on MetOp-SG. The Infrared Sounder (IRS) will be launched on MTG-S1 in 2025. In addition, new airborne and ground-based instruments have become available with performance and versatility that allow for innovative research applications. This session invites presentations on:
- retrieval algorithms and methods for uncertainty quantification including calibration/validation techniques for existing and future missions,
- new spectrometer developments for field work and satellite applications.
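The principle this session centres on, recovering a spectrum from an interferogram by Fourier transformation, can be sketched in a few lines. All numbers here are illustrative and are not parameters of any instrument named above.

```python
import numpy as np

# Minimal FTS sketch: an interferogram sampled over optical path difference
# (OPD) is Fourier-transformed to recover the spectrum. Illustrative numbers
# only; not the sampling of IASI-NG, FORUM, or any real instrument.

n = 4096
d = 0.0005                      # OPD step in cm (5 micrometres)
opd = np.arange(n) * d          # max OPD ~2 cm -> resolution ~0.5 cm^-1

# Two monochromatic lines (wavenumbers in cm^-1, below Nyquist = 1/(2d))
nu_strong, nu_weak = 667.0, 300.0
interferogram = (np.cos(2 * np.pi * nu_strong * opd)
                 + 0.5 * np.cos(2 * np.pi * nu_weak * opd))

# The one-sided Fourier transform of the interferogram is the spectrum
spectrum = np.abs(np.fft.rfft(interferogram))
wavenumber = np.fft.rfftfreq(n, d=d)   # cycles per cm = cm^-1

peak = wavenumber[np.argmax(spectrum)]
print(f"strongest line recovered at {peak:.1f} cm^-1")  # -> 667.0 cm^-1
```

Real processing adds apodization, phase correction, and radiometric calibration; this sketch only shows the transform relationship the technique rests on.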

Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.15/1.16)

Presentation: Experimental validation of HiSRAMS and REWARDS all-sky airborne measurements in synergy with active remote sensors and in-situ probes

Authors: Natalia Bliankinshtein, Lei Liu, Philip Gabriel, Cuong Nguyen, Keyvan Ranjbar, Yi Huang, Kenny Bala, Leonid Nichman, Mengistu Wolde, Dirk Schuettemeyer
Affiliations: National Research Council Canada, McGill University, Horizon Science and Technology, European Space Agency
Microwave sounders contribute most information to operational numerical weather prediction models (Saunders, 2021). Following recent advancements in radio frequency (RF) technologies, research organizations around the world are currently undertaking proof-of-concept studies (e.g., Henry et al., 2023; Pradhan et al., 2024) to explore the benefits of hyperspectral microwave radiometers for atmospheric profiling, which involves building prototype instruments and their suborbital testing. The High Spectral Resolution Airborne Microwave Sounder (HiSRAMS) is a prototype instrument (Auriacombe et al., 2022) developed by AAC Omnisys, National Research Council Canada (NRC) and McGill University under European Space Agency (ESA) funding and operated onboard the NRC Convair-580 research aircraft. HiSRAMS consists of two spectrometers covering absorption bands of oxygen (49.6-58.3 GHz) and water vapor (175.9-184.6 GHz), respectively. It can be configured to measure single-polarized or dual-polarized radiance at up to 305 kHz spectral resolution. HiSRAMS brightness temperature spectra and Optimal Estimation Method retrievals of temperature and humidity profiles were validated in clear-sky conditions in a dedicated flight campaign in 2021-2022 (Bliankinshtein et al., 2023; Liu et al., 2024). In 2023, NRC undertook a follow-up study to extend HiSRAMS validation to cloudy atmospheres. To assist with validation, NRC and ProSensing Inc. implemented a single-channel passive W-band radiometer at 94.05 GHz in one of the channels of the NRC Airborne W-band (NAW) cloud radar, by re-equipping one receiver channel of the NAW with a dedicated radiometer RF frontend and modifying the radar electronics and software. This modification of the NAW added the capability to run a radiometer mode using existing antenna ports, which was part of the NAW's original switching design of 10 ports.
The resulting Radar-Enhanced W-band Airborne Radiometer Detection System (REWARDS) is effectively a noise-diode-calibrated W-band radiometer (‘passive channel’) which can operate in an interleaving mode with the radar (‘active channel’) with very fine temporal and spatial resolution. The information provided by REWARDS mainly derives from cloud particles, complementing that obtained in the G band. Calibrated brightness temperatures of REWARDS show great sensitivity to liquid clouds, as shown by staircase sampling of a warm stratocumulus cloud deck (Bliankinshtein et al., 2024). Results from two warm stratocumulus cloud flights highlight the synergy of REWARDS and HiSRAMS with the NRC Convair-580's advanced instrument suite, which includes in-situ atmospheric state sensors, cloud microphysics probes, the NAW radar, and a 355 nm Airborne Elastic Cloud Lidar. Particle size distributions from in-situ probes and cloud boundaries from radar and lidar are incorporated in the forward radiative transfer model to simulate microwave observations of HiSRAMS and REWARDS. W-band radiances, being particularly sensitive to cloud information, are used to fine-tune the model setup, which is subsequently applied to HiSRAMS radiation closure tests. The resulting uncertainties reveal the challenge of experimental validation of all-sky radiances in heterogeneous environments. Multi-sensor representations of clouds in all-sky radiative transfer models are shown to be of great importance for achieving radiation closure and analyzing its uncertainties. This study highlights the value of hyperspectral radiometry in enhancing our understanding of cloud microphysics and atmospheric processes, with significant implications for future satellite missions. References: Auriacombe, Olivier, et al. "High spectral resolution airborne microwave sounder (HiSRAMS)." 2022 47th International Conference on Infrared, Millimeter and Terahertz Waves (IRMMW-THz). IEEE, 2022. Bliankinshtein, Natalia, et al.
"Airborne validation of HiSRAMS atmospheric soundings." IGARSS 2023-2023 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2023. Bliankinshtein, Natalia, et al. "Calibration and Flight Test of NAW W-Band Radiometric Mode." IGARSS 2024-2024 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2024. Henry, Manju, et al. "Development of a hyperspectral microwave sounder for enhanced weather forecasting." IGARSS 2023-2023 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2023. Liu, Lei, et al. "Radiative closure tests of collocated hyperspectral microwave and infrared radiometers." Atmospheric Measurement Techniques 17.7 (2024): 2219-2233. Pradhan, Omkar, et al. "Hyperspectral Microwave Radiometer for Airborne Atmospheric Sounding." IGARSS 2024-2024 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2024. Saunders, Roger. "The use of satellite data in numerical weather prediction." Weather (00431656) 76.3 (2021).
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.15/1.16)

Presentation: First Flight - First Light: the Novel Limb-imaging FTIR Sounder GLORIA-Lite Crossing the Atlantic

Authors: Felix Friedl-Vallon, Erik Kretschmer, Tom Neubert, Jörn Ungermann, Michael Höpfner, Thomas Gulde, Sören Johansson, Anne Kleinert, Guido Maucher, Christof Piesch, Peter Preusse, Markus Retzlaff, Martin Riese, Georg Schardt, Gerald Wetzel, Wolfgang Woiwode
Affiliations: Institute of Meteorology and Climate Research, Karlsruhe Institute of Technology, Central Institute of Engineering, Electronics and Analytics-Electronics Systems (ZEA2), Forschungszentrum Jülich, Institute of Climate and Energy Systems - Stratosphere (ICE4), Forschungszentrum Jülich
A new remote-sensing instrument, GLORIA-Lite, was developed by the Institute of Meteorology and Climate Research (IMK-ASF) at the Karlsruhe Institute of Technology (KIT), in collaboration with the ICE4 and ZEA2 institutes at Forschungszentrum Jülich (FZJ). It was launched within the TRANSAT2024 field campaign on board a large stratospheric balloon by a team of the Centre National d'Études Spatiales (CNES) from the European Space and Sounding Rocket Range (ESRANGE, Swedish Space Corporation), on June 22, 2024. The balloon ascended to an altitude of 40 km, traveling from Kiruna, northern Sweden, to Baffin Island, Canada, where it safely landed on June 26. GLORIA-Lite is an advanced limb-imaging Fourier-Transform Infrared instrument, extending the decades-long legacy of its predecessors, GLORIA (airborne/balloon) and MIPAS (airborne/balloon). By leveraging state-of-the-art infrared detectors, customized electronics, and innovative manufacturing techniques, GLORIA-Lite achieves a significant reduction in size and weight compared to its predecessors. This miniaturization enables its deployment on transcontinental balloon flights, sharing a gondola with multiple other instruments. The alignment of the fully reflective optical system is performed during manufacturing, ensuring consistent performance over the wideband long-wave spectral range of the infrared detector array. The quasi-monolithic design approach eases the thermal constraints of instrument operation. The electronics controlling the instrument are being developed towards further miniaturisation into a Multi-Processor System-on-Chip architecture, with the goal of processing the data on the fly up to Level 1. GLORIA-Lite is capable of analyzing infrared emissions of more than 20 different molecules and aerosols in the atmosphere. The instrument is designed to enhance our understanding of dynamic and chemical processes occurring from the middle troposphere deep into the stratosphere. 
In times of accelerating climate change, it is particularly important to study the impacts on the middle atmosphere and to monitor them through long-term measurement series. Additionally, GLORIA-Lite serves as a technology demonstrator for the CAIRT satellite project, a proposed mission developed by the European Space Agency (ESA). CAIRT aims to bring these advanced atmospheric observing capabilities to a global scale. We will provide a detailed account of the instrument's technical development and characterization, along with the results obtained from retrieving geophysical parameters, such as trace-gas distributions, during its first flight.
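The on-the-fly Level 1 processing of a Fourier-Transform spectrometer essentially reduces a recorded interferogram to a spectrum via an FFT. A minimal sketch of that core step, assuming an idealized double-sided interferogram and omitting the phase correction and radiometric calibration a real Level 1 chain would include:

```python
import numpy as np

def interferogram_to_spectrum(ifg, dx):
    """Convert an idealized double-sided interferogram to an uncalibrated
    magnitude spectrum. dx is the optical path difference step in cm, so
    the spectral axis comes out in wavenumbers (cm^-1)."""
    ifg = ifg - ifg.mean()                  # remove the DC offset
    spectrum = np.abs(np.fft.rfft(ifg))     # magnitude of the real FFT
    wavenumber = np.fft.rfftfreq(ifg.size, d=dx)
    return wavenumber, spectrum

# Synthetic interferogram of a single emission line at 600 cm^-1,
# sampled every 0.0005 cm of optical path difference.
x = np.arange(4096) * 5e-4
ifg = np.cos(2 * np.pi * 600.0 * x)
wn, spec = interferogram_to_spectrum(ifg, dx=5e-4)
peak_wn = wn[np.argmax(spec)]               # close to 600 cm^-1
```

The maximum optical path difference (here ~2 cm) sets the spectral resolution, which is why the interferometer stroke is a key sizing parameter in miniaturized designs.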
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.15/1.16)

Presentation: The Universal InfraRed Airborne Spectrometer (UNIRAS): Mid-to-Far-Infrared spectral radiance measurements from aircraft

Authors: Jonathan Murray, Prof Helen Brindley, Stephane Lantagne, Mikael Zubilewich, Oleg Kozhura, Arthur Zielinski, Dirk Schuettemeyer, Hilke Oetjen
Affiliations: National Centre for Earth Observation, Department of Physics, Imperial College London, ABB Canada, Facility for Airborne Atmospheric Measurements, National Centre for Atmospheric Science, European Space Agency
New mission concepts, designed to provide new insights into controls on atmospheric composition, clouds and surface properties, will push observational capacity towards higher precision and higher spatial resolution. With the advent of NASA’s PREFIRE and ESA’s FORUM missions, a new observational window covering the far-infrared (100-667 cm-1) will be investigated for the first time. Given these developments there is a clear need to develop the capability to deliver proxy observations that can (a) test and refine new concepts, including the retrieval methods being developed to deliver the level 2 products, and (b) offer a calibration and validation framework for the level 1 radiances that are the actual measurands of the satellite instruments the UNIRAS initiative seeks to benefit. UNIRAS, the UNiversal InfraRed Airborne Spectrometer, jointly funded by NERC and ESA, is capable of measuring spectrally resolved radiances in the 100 cm-1 to 1600 cm-1 wavenumber range and will be flown on the FAAM Airborne Laboratory's atmospheric research aircraft, a modified BAe 146. UNIRAS is a combination of two main units: a spectrometer, assembled at ABB, Canada, and the calibration and scene selection unit currently being assembled at Imperial College. The spectrometer design concept for UNIRAS is based on one of the industrial studies undertaken for FORUM phase A. It is a 4-port Fourier Transform Spectrometer, comprising two input ports and two output ports. For UNIRAS, one of the input ports uses a steering mirror to select between calibration targets and the atmospheric signal of interest; the second input port is filled using a fixed-temperature thermoelectrically cooled plate at -25 °C. The FORUM spectrometer employs uncooled DLaTGS detectors at each of the two output ports. To expand the instrument's versatility beyond FORUM and PREFIRE, UNIRAS employs configurable detectors on the output ports, allowing a choice of two from a selection of three available detectors. 
These are an uncooled DLaTGS (100 cm-1 – 1600 cm-1), similar to the FORUM detectors; a Stirling-cooled longwave-extended MCT (400 cm-1 – 1600 cm-1); and a standard Stirling-cooled MCT detector (600 cm-1 – 1600 cm-1). This allows the user to switch between a Mid-to-Far-IR configuration and an extended Mid-IR configuration, the latter offering higher temporal and spatial resolution and an improved signal-to-noise ratio. The spectrometer is scheduled to be delivered to Imperial College in early March 2025, where the front-end calibration system will be integrated and ground testing will start. This presentation will provide first-light spectral performance based on ground tests, including measurements of the noise equivalent spectral radiance (NESR), instrument line shapes, the calibration strategy and ground-based zenith-view measurements under available sky conditions. Finally, we will detail the objectives and plans for UNIRAS deployment.
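One way to estimate the NESR mentioned above is as the per-channel standard deviation across repeated calibrated spectra of a stable blackbody scene; a minimal sketch with synthetic numbers (not the UNIRAS test procedure):

```python
import numpy as np

def estimate_nesr(spectra):
    """Estimate the noise-equivalent spectral radiance per channel as the
    sample standard deviation across repeated calibrated spectra of a
    stable scene. spectra has shape (n_views, n_channels)."""
    return spectra.std(axis=0, ddof=1)

# Synthetic example: 200 repeat views of a constant blackbody scene,
# 64 spectral channels, Gaussian noise of 0.5 radiance units per channel.
rng = np.random.default_rng(0)
truth = np.full(64, 50.0)
views = truth + rng.normal(0.0, 0.5, size=(200, 64))
nesr = estimate_nesr(views)      # close to 0.5 in every channel
```

Averaging N such views reduces the effective noise by roughly sqrt(N), which is the usual trade between NESR and temporal resolution.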
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.15/1.16)

Presentation: CAIRT Earth Explorer 11 candidate / Impact Study for volcanic ash

Authors: Ilaria Mazzotti, Enzo Papandrea, Umberto Rizza, Lorenzo Buriola, Marco Menarini, Piera Raspollini, Tiziano Maestri, Giorgia Proietti Pelliccia, Guido Masiello, Lorenzo Cassini, Stefano Corradini, Lorenzo Guerrieri, Björn-Martin Sinnhuber, Dr. Bianca Maria Dinelli
Affiliations: National Research Council of Italy-Institute of atmospheric sciences and climate (CNR-ISAC), National Research Council of Italy-Institute of atmospheric sciences and climate (CNR-ISAC), National Research Council of Italy-Institute of Applied Physics "Nello Carrara", Department of Physics and Astronomy “Augusto Righi”, University of Bologna, Department of Engineering, University of Basilicata, Department of Civil, Building and Environmental Engineering, University “La Sapienza”, National Institute of Geophysics and Volcanology, (INGV), Karlsruhe Institute of Technology (KIT)
Volcanic eruptions emit large amounts of gases and particles into the atmosphere, causing severe impacts on human health, the environment and climate. Volcanic ash clouds are also well known for being dangerous for aviation; indeed, current regulations in Europe state that airlines must operate in ash concentrations smaller than 2 mg/m³ (Beckett et al., 2020). It is therefore very important to identify the ash plume and characterize its extent, geometry and concentration. Geostationary instruments are routinely used to infer the spatial extent of volcanic ash clouds, their effective altitude and ash columnar abundance (Guerrieri et al., 2023). However, information on the vertical distribution and extent of the volcanic ash clouds is still missing. The Changing-Atmosphere InfraRed Tomography explorer (CAIRT) is one of the two candidates for the Earth Explorer 11 selection. One of the secondary objectives of CAIRT is to provide information on clouds. By exploiting the limb viewing geometry, CAIRT will make it possible to estimate the thickness of the plume which, combined with the ash columnar abundance retrieved from a geostationary instrument like SEVIRI, can be used to compute the ash concentration. In the frame of the CAIRT phase A studies, we have investigated whether, in case of volcanic eruptions, CAIRT can detect the ash cloud and provide information on the vertical extent of the volcanic clouds. The study is organized as follows: first, to characterize the sensitivity of the radiance measured by CAIRT to the main parameters of volcanic ash clouds, some simulations are run using the GBB-clouds Radiative Transfer Model (RTM) (Dinelli et al., 2023). Subsequently, with the same RTM and also using the sigma-IASI RTM (Masiello et al., 2024), radiance fields as observed by CAIRT and SEVIRI are generated considering a realistic volcanic eruption (the Etna 23 November 2013 event) simulated with the WRF-Chem code (Grell et al., 2005). 
The assumed ash optical properties are obtained from the Volz (1973) refractive indices. Finally, the CAIRT and SEVIRI simulated data are used for the retrieval of ash cloud thickness and columnar abundance respectively, and then merged to estimate the ash concentration. The latter parameter will also be cross-compared with the WRF model simulations of the same case study. References: Beckett, F.M., Witham, C.S., Leadbetter, S.J., Crocker, R., Webster, H.N., Hort, M.C., Jones, A.R., Devenish, B.J., Thomson, D.J., 2020. Atmospheric Dispersion Modelling at the London VAAC: A Review of Developments since the 2010 Eyjafjallajökull Volcano Ash Cloud. Atmosphere 11, 352. https://doi.org/10.3390/atmos11040352. Dinelli, B.M.; Del Bianco, S.; Castelli, E.; Di Roma, A.; Lorenzi, G.; Premuda, M.; Barbara, F.; Gai, M.; Raspollini, P.; Di Natale, G. GBB-Nadir and KLIMA: Two Full Physics Codes for the Computation of the Infrared Spectrum of the Planetary Radiation Escaping to Space. Remote Sens. 2023, 15, 2532. https://doi.org/10.3390/rs15102532. Guerrieri, L., Corradini, S., Theys, N., Stelitano, D., & Merucci, L. (2023). Volcanic Clouds Characterization of the 2020–2022 Sequence of Mt. Etna Lava Fountains Using MSG-SEVIRI and Products’ Cross-Comparison. Remote Sensing, 15(8), 2055. https://doi.org/10.3390/RS15082055/S1. Masiello, G., Serio, C., Maestri, T., Martinazzo, M., Masin, F., Liuzzi, G., Venafra, S., 2024. The new σ-IASI code for all sky radiative transfer calculations in the spectral range 10 to 2760 cm-1: σ-IASI/F2N. Journal of Quantitative Spectroscopy and Radiative Transfer 312, 108814. https://doi.org/10.1016/j.jqsrt.2023.108814. Volz, F. E., Infrared optical constants of ammonium sulfate, Sahara dust, volcanic pumice and fly ash. Appl. Opt. 12, 564-568 (1973).
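The concentration estimate described above combines the geostationary columnar abundance with the limb-derived plume thickness; as a simple illustration (hypothetical numbers, not values from the study):

```python
def ash_concentration(column_load_g_m2, thickness_m):
    """Mean ash concentration (mg/m^3) from the columnar abundance
    retrieved by a geostationary imager (g/m^2) and the plume thickness
    estimated from limb sounding (m)."""
    return column_load_g_m2 * 1000.0 / thickness_m   # g -> mg

# Hypothetical plume: 4 g/m^2 of ash in a 2 km thick layer.
conc = ash_concentration(4.0, 2000.0)   # 2.0 mg/m^3, at the aviation limit
```

The same column load spread over a 4 km layer would yield 1 mg/m³, which is why the limb-derived thickness is the decisive ingredient for aviation-relevant concentration estimates.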
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.15/1.16)

Presentation: NH3 point source emissions and lifetimes derived from 15 years of IASI observations

Authors: Antoine Honet, Lieven Clarisse, Martin Van Damme, Cathy Clerbaux, Pierre Coheur
Affiliations: Université libre de Bruxelles (ULB), Brussels Laboratory of the Universe (BLU-ULB), Spectroscopy, Quantum chemistry and Atmospheric Remote Sensing (SQUARES), Royal Belgian Institute for Space Aeronomy (BIRA-IASB), LATMOS/IPSL, Sorbonne Université, UVSQ, CNRS
Ammonia (NH3) is a short-lived atmospheric constituent with a lifetime of a few hours. Despite its devastating effects on air quality and the environment, its global concentration continues to increase. Knowledge of its diverse emission sources is crucial for guiding and implementing effective legislation. Particularly large NH3 emissions originate from animal feedlots and housings, and from a variety of industries related to the production of, e.g., synthetic fertilizers, coke, steel, and soda ash. Emissions from these super-emitters are currently not well constrained in bottom-up inventories. Satellite measurements offer an attractive means of quantifying point sources. In this work, we present the latest version of the NH3 point source catalogue based on 15 years of IASI measurements. Over 750 hotspots were identified based on a wind-rotated supersampling technique that significantly increases the spatial resolution of the measurements beyond the native resolution of the sounder. These were subsequently categorized into 12 distinct source categories through careful study of visible imagery, publicly available inventories and various online sources. Specific region-dependent periods were excluded from the analysis to avoid the contributions of fires, enabling the identification of sources that are otherwise difficult to detect. Each source was classified based on its geographical extent into one of the following categories: point source, extended point source, or cluster of point sources. We estimated the atmospheric lifetimes and emissions for each by fitting an Exponentially Modified Gaussian (EMG) function to the observed NH3 distributions. The results are analyzed as functions of geographical location, season, and source category. In addition, yearly emissions are derived and compared to those reported in the European Pollutant Release and Transfer Register (E-PRTR) for the sources in Europe. 
We quantify the uncertainty in our estimates by propagating uncertainties in the input parameters. This includes the systematic uncertainty originating from the satellite measurements and the uncertainty in the fitting parameters such as the fitting domain and the chosen reference wind speed. The estimated uncertainties are source-dependent and can therefore be used to identify the sources whose emissions and resulting NH3 residence time are tightly constrained, making these particularly useful for the evaluation of emission inventories.
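The EMG fitting step can be sketched as follows; the parameterization (amplitude, plume center, Gaussian width, e-folding length) and all numbers are illustrative assumptions, not the catalogue's actual fitting code:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def emg(x, a, x0, sigma, lam):
    """Exponentially modified Gaussian: a Gaussian source (x0, sigma)
    convolved with a one-sided exponential decay of e-folding length lam,
    scaled by total amount a."""
    arg = (sigma / lam - (x - x0) / sigma) / np.sqrt(2.0)
    return (a / (2.0 * lam)) * np.exp(sigma**2 / (2.0 * lam**2)
                                      - (x - x0) / lam) * erfc(arg)

# Synthetic NH3 line densities along the wind-rotated axis (km).
x = np.linspace(-50.0, 150.0, 201)
obs = emg(x, a=100.0, x0=0.0, sigma=10.0, lam=30.0)
popt, _ = curve_fit(emg, x, obs, p0=[80.0, 2.0, 8.0, 25.0])
a_fit, x0_fit, sigma_fit, lam_fit = popt
tau_hours = lam_fit / 20.0   # lifetime for an assumed 20 km/h wind speed
```

The fitted e-folding length divided by an effective wind speed gives the lifetime, and the fitted amplitude gives the emission rate, which is the essence of the EMG approach.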
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.14)

Session: A.10.01 EO for Mineralogy Geology and Geomorphology

Earth observation is an important source of information for new and improved geological, mineralogical, regolith, geomorphological and structural mapping and is essential for assessing the impact of environmental changes caused by climatic and anthropogenic threats. Given the increasing demand for mineral and energy resources and the need for sustainable management of natural resources, the development of effective methods for monitoring, and for cost-effective and environmentally friendly extraction, is essential.
In the past, the use of multispectral satellite data from Landsat, ASTER, SPOT, ENVISAT, Sentinel-2 or higher resolution commercial missions, also in combination with microwave data, has provided the community with a wide range of possibilities to complement conventional soil surveys and mineralogical/geological mapping and monitoring, e.g. for mineral extraction. In addition, discrimination capabilities have been enhanced by hyperspectral data (pioneered by Hyperion and PROBA), which are now available from several operational research satellites and will be extended by CHIME.
The session aims to collect contributions presenting different techniques to process and simplify large amounts of geological, mineralogical, and geophysical data, to merge different datasets, and to extract new information from satellite EO data, with a focus on mine site lifecycles.

Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.14)

Presentation: VSWIR and TIR imaging spectroscopy data to characterize surface mineralogy over a geothermally active area

Authors: Federico Rabuffi, Simon J. Hook, Massimo Musacchio, Kerry Cawse-Nicholson, Malvina Silvestri, Maria Fabrizia Buongiorno
Affiliations: Jet Propulsion Laboratory, California Institute of Technology, Istituto Nazionale di Geofisica e Vulcanologia, Osservatorio Nazionale Terremoti
Parco Naturalistico delle Biancane (PNB), part of the Larderello geothermal field in Italy, is an area of interest both industrially and scientifically due to the continuing geothermal energy production. However, the use of remote sensing images to characterize surface mineralogy is challenging due to the limited bare soil exposure, which primarily occurs over the geothermally active area. The aim of this study is to use a combination of Visible to ShortWave InfraRed (VSWIR: 0.4 to 2.5 μm) and Thermal InfraRed (TIR: 7.5 to 12 μm) imaging spectroscopy data to characterize surface mineral composition and generate complementary mineral and lithotype maps of the exposed portion of the PNB geothermal area, where geothermal features such as fumaroles and mineral alteration occur at both regional and local scales. Datasets used for the analysis include VSWIR and TIR spectral libraries derived from laboratory measurements of field samples, which represent the main outcropping lithotypes, and remotely sensed spectroscopic data acquired by the Airborne Visible InfraRed Imaging Spectrometer - Next Generation (AVIRIS-NG) and the Hyperspectral Thermal Emission Spectrometer (HyTES). The AVIRIS-NG and HyTES data have very high spatial resolutions of 5.7 m and ~1 m respectively. The classification maps were derived using the Material Identification and Characterization Algorithms (MICA) from the United States Geological Survey (USGS), which allow identification of materials based on a set of diagnostic features in the reflectance spectra, starting from the spectral libraries and remotely sensed data. The use of high-resolution spectroscopy images in the VSWIR and TIR results in very detailed classification maps of the lithology and mineralogy outcropping in the area, with the identification of specific minerals such as oxide minerals (hematite), clay minerals (alunite, kaolinite, smectite), hydrothermal silica, gypsum and sulfur. 
Furthermore, the availability of high spatial resolution surface temperature maps (collected from a drone survey) made it possible to relate the occurrences of mineral alteration to the location of the main active thermal area. This study provides an example of what should be possible using the complementary information that can be retrieved from several planned future satellite sensor systems providing VSWIR and TIR data, including PRISMA-2, SBG-VSWIR and CHIME for the VSWIR and TRISHNA, SBG-TIR, LSTM and LandsatNext for the TIR.
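A MICA-style identification compares continuum-removed diagnostic features of an observed spectrum against library spectra; the following is a simplified sketch of that idea (straight-line continuum, correlation score), not the USGS implementation:

```python
import numpy as np

def continuum_removed(wl, refl, left, right):
    """Remove a straight-line continuum fitted between two shoulder
    wavelengths, returning refl / continuum over that window."""
    sel = (wl >= left) & (wl <= right)
    w, r = wl[sel], refl[sel]
    cont = np.interp(w, [w[0], w[-1]], [r[0], r[-1]])
    return w, r / cont

def match_score(wl, refl, lib_refl, left, right):
    """MICA-style score: Pearson correlation between the continuum-removed
    observed and library spectra over one diagnostic feature window."""
    _, obs_cr = continuum_removed(wl, refl, left, right)
    _, lib_cr = continuum_removed(wl, lib_refl, left, right)
    return np.corrcoef(obs_cr, lib_cr)[0, 1]

# Synthetic spectra (wavelength in microns): an absorption at 2.20 um
# matched against itself and against a mineral absorbing at 2.32 um.
wl = np.linspace(2.0, 2.5, 101)
spec_a = 1.0 - 0.3 * np.exp(-((wl - 2.20) / 0.02) ** 2)
spec_b = 1.0 - 0.3 * np.exp(-((wl - 2.32) / 0.02) ** 2)
score_same = match_score(wl, spec_a, spec_a, 2.10, 2.30)
score_diff = match_score(wl, spec_a, spec_b, 2.10, 2.30)
```

The real expert system weighs several diagnostic windows per material and applies detection thresholds, but each comparison reduces to a feature-window score of this kind.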
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.14)

Presentation: Ladakh ophiolites: Martian analogue site mapped for degree of serpentinization using PRISMA hyperspectral satellite imagery and lab spectroscopy

Authors: Dr. Mamta Chauhan, Dr. Aakansha S. Borkar, Dr. Giorgio Antonino Licciardi, Patrizia Sacco, Dr. Deodato Tapete
Affiliations: Agenzia Spaziale Italiana (ASI), Indian Institute of Remote Sensing (IIRS), Indian Space Research Organisation (ISRO)
1. OPHIOLITES AS MARTIAN ANALOGUES The Ladakh terrain, with its unique geology, has witnessed a sequence of magmatic activities, the oldest among them being ophiolites. This distinctive igneous rock assemblage is generated by partial melting in the Earth’s mantle and exposed during obduction of oceanic lithosphere onto the continental crust. Its composition ranges from ultramafic to mafic rocks that are rich in ferromagnesian minerals such as olivine, pyroxene and plagioclase, and oxides such as spinel and chromite. They occur in association with sedimentary rocks, represented by deep sea sediments present towards their top. The crusts of differentiated planetary bodies, including Mercury, Mars, the Moon, Venus and Vesta, are composed of mafic (e.g., basaltic) and ultramafic rocks. Ophiolite terranes, whose composition represents the lower oceanic crust and upper mantle section, therefore serve as the most accessible terrains for detailed characterization of these systems. These mafic and ultramafic rocks react with water to form serpentine, releasing heat and hydrogen gas, both being potential energy sources for chemosynthetic microorganisms [1, 2]. Serpentinization therefore creates conditions amenable to both abiogenic and microbial synthesis of organic compounds. Serpentine-bearing outcrops have been detected on Mars using remote sensing [3]. 2. LADAKH OPHIOLITES AND STUDY AREAS The ophiolites of Ladakh form part of the Indus-Tsangpo suture zone (ITSZ) of the Himalaya, the main tectonic unit of the northern Himalayas that separates the Indian plate from the Eurasian plate. The Ladakh ophiolites were emplaced during the Late Cretaceous and occur as remnants of the Neo-Tethyan ocean that was closed during the Early Cretaceous as a result of obduction [4, 5]. Among these exposed dismembered tectonic slices, the present research has selected the Nidar and Spongtang Ophiolite Complexes, belonging to the south Ladakh group of ophiolites. 
The Nidar Ophiolite Complex (NOC) (~32-33°N, 78-79°E) lies towards SE Ladakh in the form of an eye-shaped exposure within the Nidar valley, whereas the Spongtang Ophiolite Complex (SOC) (~33-34°N, 76°39’-76°54’E) lies in the district of Leh, ~30 km south of the Indus-Tsangpo suture zone near Photoskar [4,6]. These complexes display the complete ophiolite sequence from ultramafic-mafic to volcano-sedimentary rocks [6,7]. 3. DATA AND METHODOLOGY The present study has primarily utilized hyperspectral data from the PRISMA (PRecursore IperSpettrale della Missione Applicativa) mission for mineral mapping. In addition, multispectral data from LANDSAT-8 OLI and ASTER-TIR, together with spectroradiometer measurements, have also been used for characterizing lithology. PRISMA is a pushbroom imaging spectrometer launched in 2019 by the Italian Space Agency (ASI). It provides hyperspectral images over a swath of 30 x 30 km per frame, in the 400-2500 nm continuous spectral range, at 30 m spatial resolution in 240 spectral channels with 10 nm spectral resolution, along with a corresponding panchromatic image at 5 m spatial resolution [8, 9]. This study has utilized the atmospherically corrected L2D (Geolocated Surface Reflectance data cube) product. The available cloud-free scenes were downloaded from ASI’s official PRISMA portal. The processing of PRISMA data included bad band removal, size reduction (subsetting) of the data (both spatially and spectrally), noise and dimensionality reduction, extraction of pure pixels, collection of endmembers and matching with a standard spectral library. The software used included ENVI® 5.3, ArcGIS® 10.4 and Python. Additionally, the spectra obtained from the hyperspectral images were compared with field spectra derived from rock samples collected in the field for calibration and validation. 4. 
RESULTS Multispectral data analysis of band composites and spectral indices, and the generated (mafic, quartz, carbonate, ultramafic) maps for the NOC and SOC, allowed highlighting and discriminating the prominent lithology dominated by mafic, ultramafic, carbonate, serpentine and other hydrothermally altered phases (clay minerals). Spectral analysis of PRISMA data enabled identification and mapping of the dominant primary and altered minerals based on their unique spectral signatures in the visible and near-infrared region. Furthermore, application of a machine learning-based classification approach helped in characterizing the various lithological units present based on their spectral variability. The degree of serpentinization has been assessed using hyperspectral data, wherein the absorption depths of olivine/pyroxene (900-1100 nm), OH (1400 and 1900 nm) and serpentine minerals (2300 nm) were collected and, together with the results from field-based spectroscopic data, were used to compute the percentage degree of serpentinization from the relative proportion of primary and altered phases. 5. DISCUSSION AND CONCLUSIONS The study of serpentinization through specific spectral wavelengths relies on the identification of distinctive absorption features that correspond to the mineralogical changes occurring during the process. The degree of serpentinization refers to the extent to which an ultramafic rock has undergone serpentinization and is positively correlated with the depth of the absorption features at 1000 and 2300 nm. As serpentinization increases, there is a corresponding rise in overall reflectivity and in the depth of these absorption bands. The study of the Ladakh ophiolites and associated serpentinization is useful to understand the geological and chemical processes that occur on the Martian surface. 
The possible linkage between biological processes and the precipitated secondary phases makes this terrain a remarkable site for investigating the habitability conditions of Mars. Finally, the study underscores the added value brought by hyperspectral satellite data collected with PRISMA, enhancing the discrimination capabilities. The achieved results show what will be possible over wider remote regions as soon as new global hyperspectral missions such as CHIME become operational. Acknowledgement: This research was part of the ASI-ISRO joint Earth Observation Working Group (EOWG) [10, 11], project HYP_4 MARTIAN ANALOGUES. The study has used data from ORIGINAL PRISMA Products© Italian Space Agency (ASI) delivered under an ASI License to the User. 6. REFERENCES [1] Kelley, D.S. et al. “A serpentinite-hosted ecosystem: the lost city hydrothermal field”. Science, 307, 1428-1434, 2005. http://dx.doi.org/10.1126/science.1102556. [2] Sleep, N.H., Meibom, A., et al. “H2-rich fluids from serpentinization: geochemical and biotic implications”. Proc. Natl. Acad. Sci. USA, 101, 12818-12823, 2004. [3] Ehlmann, B. L., et al. “Identification of hydrated silicate minerals on Mars using MRO-CRISM: Geologic context near Nili Fossae and implications for aqueous alteration”. J. Geophys. Res., 114, E00D08, 2009. doi:10.1029/2009JE003339. [4] Chauhan, M., Sur, K., Chauhan, P. et al., “Lithological mapping of Nidar ophiolite complex, Ladakh using high-resolution data.” Adv. Space Res., 73 (08), 4091-4105, 2024. [5] Gansser, A. “Geology of the Himalayas”. Wiley Interscience, London, 289 p, 1964. [6] Thakur, V.C. and Misra, D.K. “Tectonic framework of Indus and Shyok Suture Zones in eastern Ladakh, Northwest Himalaya”. Tectonophysics, 101, 207-220, 1984. [7] Catlos, Elizabeth J., et al. "Nature, age and emplacement of the Spongtang ophiolite, Ladakh, NW India." J. Geol. Soc., 176.2, 284-305, 2019. [8] Loizzo, R., Daraio, M., et al. 
“Prisma Mission Status and Perspective.” IEEE International Geoscience and Remote Sensing Symposium, 4503-4506, 2019. [9] Caporusso et al., “The Hyperspectral Prisma Mission in Operations,” 2020 IEEE International Geoscience and Remote Sensing Symposium, pp. 3282-3285, 2020. [10] Tapete, D, Kumar Jaiswal, R, et al. “Scientific research and applications development based on exploitation of PRISMA data in the framework of ASI – ISRO Earth Observation Working Group Hyperspectral Activity.” IEEE International Geoscience and Remote Sensing Symposium, 1648-1651, 2023. 979-8-3503-2010-7/23. [11] Tapete, D., Kumar Jaiswal, R., et al. “ASI – ISRO cooperation in Earth Observation: status, achievements and new avenues”. 75th International Astronautical Congress (IAC), Milan, Italy, 14-18 October 2024, IAC-24-B1.1.4, 12 pp.
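The band-depth computation behind the serpentinization-degree mapping described above can be sketched as follows; the shoulder wavelengths and the ratio used here are illustrative assumptions, not the exact formulation of the study:

```python
import numpy as np

def band_depth(wl, refl, shoulder_l, center, shoulder_r):
    """Continuum-removed absorption depth: 1 - R_center / R_continuum,
    with a straight-line continuum between the two shoulder wavelengths
    (all wavelengths in nm)."""
    r_c = np.interp(center, wl, refl)
    r_l = np.interp(shoulder_l, wl, refl)
    r_r = np.interp(shoulder_r, wl, refl)
    cont = np.interp(center, [shoulder_l, shoulder_r], [r_l, r_r])
    return 1.0 - r_c / cont

def serpentinization_degree(wl, refl):
    """Illustrative index: relative depth of the 2300 nm serpentine
    feature versus the ~1000 nm olivine/pyroxene feature (0 = fresh,
    1 = fully serpentinized)."""
    d_mafic = band_depth(wl, refl, 750.0, 1000.0, 1300.0)
    d_serp = band_depth(wl, refl, 2120.0, 2320.0, 2400.0)
    return d_serp / (d_serp + d_mafic)

# Synthetic spectrum with equally deep mafic and serpentine absorptions.
wl = np.linspace(400.0, 2500.0, 1051)
refl = (0.6 - 0.15 * np.exp(-((wl - 1000.0) / 80.0) ** 2)
            - 0.15 * np.exp(-((wl - 2320.0) / 30.0) ** 2))
deg = serpentinization_degree(wl, refl)   # close to 0.5 here
```

Applied per pixel to a continuum-removed PRISMA cube, an index of this form yields a serpentinization-degree map directly comparable with field spectroscopy.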
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.14)

Presentation: Improving mine lifecycle monitoring using advanced InSAR phase closure approaches

Authors: Fei Liu, Dr Rachel Holley
Affiliations: Viridien
Interferometric Synthetic Aperture Radar (InSAR) is a remote sensing technique that measures phase differences between two or more Synthetic Aperture Radar (SAR) images to retrieve the surface displacements between the acquisition times with millimetre-level precision. Given its reliability and cost-effectiveness, InSAR has been increasingly used throughout the mine lifecycle for monitoring ground deformation. However, due to the decorrelation noise often caused by surface changes across parts of a mine site over time, InSAR phase measurements may become unreliable in some areas. The noisy pixels in the interferograms have to be identified and masked to prevent the propagation of errors into surrounding pixels during further processing (e.g., filtering and phase unwrapping). This results in loss of coverage, which remains one of the main limitations of current InSAR mine site monitoring. Here, we propose an innovative approach to effectively select high-quality pixels and improve the coverage of InSAR results, based on the properties of the phase closure residual. Phase closure is a property of a closed loop formed by three interferograms (e.g., AB, BC, and CA) between three SAR images (A, B, and C). In multilooked images the sum of phases around the loop (the closure) is non-zero due to the decorrelation noise, and its residual (the non-zero part) is related to the characteristics of the surface change. By analysing the temporal evolution of phase closure residuals, we can evaluate the quality of each pixel and track its changes over time. Based on the pixel quality evaluation, we can improve the coverage of InSAR measurements and reduce the resulting measurement uncertainties. We apply this new phase closure approach across a range of mine sites, and find significant coverage and measurement quality improvements compared to the conventional InSAR results. 
This new approach does not involve any assumptions about the deformation regime, nor heavy computation – our results show that it works for different datasets (X-, C-, and L-band SAR data) and is efficient even for large-volume datasets. In summary, this advanced phase closure approach extends the versatile capabilities of InSAR in the mining sector, emphasizing its role in proactive monitoring and risk management.
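The triplet closure described above can be computed directly from complex interferograms; a minimal sketch (not the authors' implementation), with hypothetical single-look phases showing how a residual injected into one interferogram appears in the closure:

```python
import numpy as np

def closure_phase(ifg_ab, ifg_bc, ifg_ca):
    """Per-pixel phase closure of a triplet of complex interferograms
    AB, BC, CA. Exactly zero for consistent single-look phases; non-zero
    residuals flag decorrelating surface change in multilooked data."""
    return np.angle(ifg_ab * ifg_bc * ifg_ca)

# Toy example: a consistent triplet (closure ~0) versus one with an
# injected residual of 0.3 rad in interferogram BC.
phi_a, phi_b, phi_c = 0.2, 1.1, -0.7
ab = np.exp(1j * (phi_a - phi_b)) * np.ones((2, 2))
bc = np.exp(1j * (phi_b - phi_c)) * np.ones((2, 2))
ca = np.exp(1j * (phi_c - phi_a)) * np.ones((2, 2))
clean = closure_phase(ab, bc, ca)                     # ~0 everywhere
noisy = closure_phase(ab, bc * np.exp(1j * 0.3), ca)  # ~0.3 everywhere
```

Tracking this residual per pixel through time is the quantity the quality evaluation builds on: persistently small residuals mark pixels whose phase can be trusted through filtering and unwrapping.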
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.14)

Presentation: Assessment of machine learning methods for mineral mapping using different hyperspectral satellite systems

Authors: Saeid Asadzadeh, Anna Buczyńska, Raymond Kokaly, Sabine
Affiliations: German Research Centre for Geosciences GFZ
With the increasing availability of spaceborne hyperspectral imaging data, there is an immediate need for sensor-agnostic algorithms to map the diversity, composition, and quantity of minerals on the exposed Earth surface, which comprises approximately one-third of the land masses. The USGS Material Identification and Characterization Algorithm (MICA) is an established method for mineral mapping: an expert system designed to identify minerals in airborne and spaceborne hyperspectral imaging data using a custom spectral library. With recent developments in machine learning and deep learning techniques, an important question is how effectively these algorithms can be trained or tailored for automated mineral classification and mapping on a global scale. To address this question, we acquired and processed hyperspectral imaging datasets from the Cuprite test site using machine learning methods, comparing the results with those yielded by the MICA algorithm applied to the airborne AVIRIS-Classic system. The sensors investigated include the Environmental Mapping and Analysis Program (EnMAP), Earth Surface Mineral Dust Source Investigation (EMIT), PRecursore IperSpettrale della Missione Applicativa (PRISMA), Hyperspectral Imager Suite (HISUI), and the Advanced Hyperspectral Imager aboard China's GaoFen-5 satellite. The employed algorithms encompass Random Forest (RF), Extra Trees (ET), K-Nearest Neighbors (KNN), Support Vector Machine (SVM), and the U-Net deep learning method. Training and testing data were obtained from MICA-generated maps applied to ground-adjusted AVIRIS-C data, covering six distinct mineral/mixed classes in the VNIR and fourteen in the SWIR ranges. The performance of the algorithms was evaluated using overall accuracy and Kappa coefficient metrics. The comparison of results indicated that the SVM algorithm with a polynomial kernel gives results closest to the MICA products for all sensors in both the VNIR and SWIR ranges. 
The overall accuracy and Kappa coefficient remained above 90%, regardless of sensor type and noise level. The best performance was observed for minerals with distinctive absorption features in the SWIR, although mixed classes such as calcite + montmorillonite and pyrophyllite + kaolinite showed the lowest classification accuracy. By applying the same algorithms to standard and ground-adjusted reflectance data, it was observed that the SVM-polynomial (like the majority of the other techniques) is insensitive to the quality of atmospheric correction, meaning that good results can be obtained using standard reflectance products; however, for sensors with accurate atmospheric correction and high SNR, such as EnMAP, it yields higher classification accuracy. By decreasing the number of pixels used for training, it was observed that, in contrast to the deep learning algorithm, the SVM-polynomial maintained its good performance even with a fraction (25%) of the original training data. This study indicates that sensor-agnostic algorithms such as the SVM can be effectively used for mapping mineralogy from spaceborne hyperspectral data. We now plan to evaluate the performance of the same algorithms using datasets acquired over Cuprite at different times, and from different areas, to evaluate the stability of the models across space and time. We also aim to include more deep learning methods to identify the most promising algorithm for mineral mapping. The ideal algorithm should not rely on training data from the scene, should be capable of pre-training with a limited number of inputs, and should be robust to noise and fast enough to enable global-scale mineral mapping.
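The evaluation metrics used above (overall accuracy and the Kappa coefficient) can be illustrated with a short, self-contained sketch. The classifier here is a toy nearest-centroid model on synthetic "spectra" standing in for the SVM pipeline described in the abstract; the class count, band count, and noise level are invented for demonstration and are not the study's actual data or method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for labelled pixel spectra: 3 "mineral" classes,
# 20 bands, Gaussian noise around class-mean spectra.
n_classes, n_bands, n_per_class = 3, 20, 200
means = rng.normal(size=(n_classes, n_bands))
X = np.vstack([m + 0.3 * rng.normal(size=(n_per_class, n_bands)) for m in means])
y = np.repeat(np.arange(n_classes), n_per_class)

# Random train/test split
idx = rng.permutation(len(y))
train, test = idx[:450], idx[450:]

# Nearest-centroid classifier (an illustrative substitute for the SVM)
centroids = np.array([X[train][y[train] == c].mean(axis=0) for c in range(n_classes)])
pred = np.argmin(((X[test][:, None, :] - centroids) ** 2).sum(axis=2), axis=1)

# Overall accuracy: fraction of correctly classified test pixels
oa = (pred == y[test]).mean()

# Cohen's kappa from the confusion matrix: agreement beyond chance
cm = np.zeros((n_classes, n_classes))
np.add.at(cm, (y[test], pred), 1)
po = np.trace(cm) / cm.sum()                      # observed agreement
pe = (cm.sum(0) * cm.sum(1)).sum() / cm.sum()**2  # chance agreement
kappa = (po - pe) / (1 - pe)
print(f"overall accuracy = {oa:.3f}, kappa = {kappa:.3f}")
```

Kappa discounts agreement expected by chance, which is why it is reported alongside overall accuracy when class frequencies are uneven.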

Tuesday 24 June 11:30 - 13:00 (Room 1.14)

Presentation: A Geological System Analysis for Better Understanding of Mineral-rich Sedimentary Basins by Using Multistage Data and Space-based Imaging Systems: A Case Study in Türkiye

Authors: Tamer Özalp
Affiliations: Researchturk Space Co.
Tertiary basins characterized by lacustrine, fluvial, and volcanic facies are prevalent in the interior and western regions of Anatolia. These basins are significant not only for their contributions to the understanding of the tectonic evolution and depositional history of the area but also for the economic mineral deposits, such as bentonite and zeolite, that they contain. However, the absence of traditional exploration data hinders the assessment of the basins' economic potential and the identification of tectono-sedimentary events. The study focuses on the geological system analysis of the Kalecik-Hasayaz Basin in Ankara, Türkiye, utilizing space-based imaging systems (optical and radar) to enhance the understanding of mineral-rich sedimentary basin evolution. A comprehensive geological study was designed to assess parameters such as the stratigraphy, lithology, structure, and topography of the area. The exploration for economic mineral deposits relies on specific parameters and criteria that either directly indicate or suggest surface conditions favorable to the formation of such deposits. The research emphasized the use of imaging radar for geological mapping, successfully distinguishing key rock units and formations through spectral (optical) and physical (radar) signatures. Optical data were processed to facilitate the detection and identification of rock types and structural features, as well as to correlate their spatial distributions with independent data regarding their structural configurations. Techniques such as band ratios were employed to create color ratio composite images for better lithofacies discrimination. 
Radar imagery proved to be particularly advantageous for two primary purposes: first, it aided in delineating the unique lithological characteristics and boundaries required for the surface mapping of geological formations; second, it enabled the identification of structural signatures and lithological exposures through the spatial signatures derived from Synthetic Aperture Radar (SAR) imagery. The three-dimensional sensing capabilities of radar technology enhanced the analysis of surface morphology, thereby improving the spatial mapping of radar-derived information pertaining to the study area. Surface backscattering profiles, roughness changes, and 3-D SAR models were used to better determine the rock unit maps and tectonic features with fault indications. The characterization of the underlying materials and structures can be effectively delineated using C-band SAR, provided that the appropriate physical conditions are met. The radar imagery has proven to be an effective tool for delineating geological characteristics in the region under investigation. The integration of image data obtained across various wavelength bands significantly augmented the geological information available for the study area. The SAR and optical data results demonstrated the links between surface developments and remote sensing in the visible, infrared, and microwave spectra. The results indicated that a multistage approach is the most effective method for comprehending the geological evolution of the area, with results aligning well with field observations and the existing geological literature. By employing advanced imaging techniques, the research aims to provide insights into the geological processes and historical developments of the basin, contributing to a more comprehensive understanding of sedimentary environments. The findings from this case study could have implications for broader geological research and applications in the region.
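The band-ratio technique mentioned in the abstract can be sketched in a few lines: pixel-wise ratios of selected bands are stacked into an RGB "color ratio composite" to enhance lithofacies contrast. The cube dimensions and band pairings below are hypothetical placeholders, not the bands of any specific sensor; real composites pair bands straddling known absorption features of the target lithologies.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a calibrated reflectance cube (rows, cols, bands).
cube = rng.uniform(0.05, 0.6, size=(50, 50, 6))

def band_ratio(cube, num, den, eps=1e-6):
    """Pixel-wise ratio of two bands; eps guards against division by zero."""
    return cube[..., num] / (cube[..., den] + eps)

# Three ratios (band indices are illustrative) stacked as R, G, B.
r = band_ratio(cube, 3, 1)
g = band_ratio(cube, 4, 2)
b = band_ratio(cube, 5, 0)
crc = np.dstack([r, g, b])

# Linear stretch of each channel to 0-1 for display.
lo, hi = crc.min(axis=(0, 1)), crc.max(axis=(0, 1))
crc = (crc - lo) / (hi - lo)
print(crc.shape)
```

Because a ratio cancels multiplicative illumination effects (topographic shading) pixel by pixel, the composite highlights spectral contrast rather than brightness.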

Tuesday 24 June 11:30 - 13:00 (Room 0.11/0.12)

Session: C.05.03 ALTIUS: ESA's Ozone Mission

In the 1970s, scientists discovered that the ozone layer was being depleted, particularly above the South Pole, resulting in what is known as the ozone hole. To address the destruction of the ozone layer, the international community established the Montreal Protocol on ozone-depleting substances. Since then, the global consumption of ozone-depleting substances has been reduced by about 98%, and the ozone layer is showing signs of recovery. However, it is not expected to recover fully before the second half of this century. It is imperative that concentrations of stratospheric ozone, and how they vary with the seasons, are monitored continually, not only to assess the recovery process but also for atmospheric modelling and for practical applications including weather forecasting.
The Atmospheric Limb Tracker for Investigation of the Upcoming Stratosphere (ALTIUS) mission fills a very important gap in the continuation of limb measurements for atmospheric sciences. The ALTIUS mission will provide near-real-time ozone profiles at a 3-hour latency for assimilation in Numerical Weather Prediction systems, and consolidated ozone profiles for scientific ozone analysis. Profiles of other trace gases and of aerosol extinction will also be provided.
The focus of this session is the mission and its status, together with the implemented technical and algorithmic solutions to image the Earth limb and retrieve the target chemical concentration, as well as the ongoing preparations for the calibration/validation of the mission products.

Tuesday 24 June 11:30 - 13:00 (Room 0.11/0.12)

Presentation: ALTIUS O3, NO2 and aerosol extinction retrieval algorithms and expected in-flight performance

Authors: Antonin Berthelot, Noel Baker, Didier Fussen, Pierre Gramme, Nina Mateshvili, Didier Pieroux, Kristof Rose, Sotiris Sotiriadis, Emmanuel Dekemper
Affiliations: Bira-iasb
ALTIUS (Atmospheric Limb Tracker for the Investigation of the Upcoming Stratosphere) is an atmospheric limb mission being implemented in ESA's Earth Watch programme. The mission is in its implementation phase, with both the space and ground segments having reached the critical design review (CDR). The launch is foreseen on a Vega-C rocket in 2026-2027, in a dual configuration with another ESA mission, FLEX. The primary objective of the mission is to provide near-real-time and consolidated high-resolution stratospheric ozone concentration profiles. Secondary objectives include stratospheric aerosols, H₂O, NO₂, NO₃, temperature, OClO, BrO, and mesospheric ozone, as well as the detection of polar mesospheric and stratospheric clouds (PMCs and PSCs). The instrument consists of three spectral imagers: UV (250-355 nm), VIS (440-675 nm), and NIR (600-1020 nm) channels. Each channel can take a snapshot of the scene independently of the other two, at a desired wavelength and with the requested acquisition time. It offers excellent vertical sampling (<1 km at the tangent point) and allows straightforward in-flight pointing calibration, usually a key driver of the error budget of limb instruments. The agility of ALTIUS allows for series of high-vertical-resolution observations at wavelengths carefully chosen to retrieve the vertical profiles of species of interest. ALTIUS is a single-payload mission, which gives numerous options for the observation scenarios. ALTIUS will perform measurements in different geometries to maximize global coverage: observing limb-scattered solar light on the dayside, solar occultations at the terminator, and stellar, lunar, and planetary occultations on the nightside. The baseline mission plan combines 100 limb-scatter observations on the dayside, 2 solar occultations, and 5 stellar/planetary/lunar occultations on the nightside (typical numbers). 
We will present the mission, focusing on its relevance for the stratospheric ozone community. Limb-scatter and occultation retrieval algorithms will be presented, and the expected performance of the mission will be discussed based on end-to-end simulations. Additionally, the current status of the retrieval algorithms of the secondary objective species will be discussed, with a focus on NO2 and aerosols. The data from previous UV-VIS-NIR limb instruments will also be processed by the ALTIUS L2-processor to assess the expected in-flight performance of ALTIUS. A comparison with the results from GOMOS/ENVISAT, OMPS-LP/JPSS-2, and SAGE-III/ISS will be presented.

Tuesday 24 June 11:30 - 13:00 (Room 0.11/0.12)

Presentation: ALTIUS Mission: Project Status

Authors: Daniel Navarro Reyes, Michael François, Luciana Montrone, Stefano Santandrea, Hilke Oetje, Didier Fussen, Emmanuel Dekemper
Affiliations: ESA/ESTEC, BISA
ALTIUS, ESA's ozone mission, is an atmospheric limb sounder for monitoring the distribution and evolution of stratospheric ozone number density profiles in support of operational services and long-term trend monitoring. The mission will provide detailed stratospheric ozone profile information at high vertical resolution, adding valuable information to the total column ozone used in data assimilation systems by operational centres. Secondary products, i.e. vertical profiles of NO2, BrO, OClO, NO3, H2O, mesospheric O3, aerosol extinction, and temperature, will also be provided. Currently only a few atmospheric satellite missions providing limb measurements are operational, and several of them might terminate within the next few years. ALTIUS will therefore fill an upcoming data gap. The ALTIUS data will also be of high importance for the atmospheric chemistry modelling community, for use as input to climate models and for their validation. ALTIUS data will extend the existing GCOS (Global Climate Observing System) ozone profile ECV (Essential Climate Variable) as produced within the ESA CCI (Climate Change Initiative) ozone project. The ALTIUS mission is under implementation by ESA within the Earth Watch Programme, with participation of Belgium, Canada, Luxembourg, and Romania. The launch is expected in April 2027. The current status and planning with respect to flight model manufacturing, ground segment qualification and deployment, launch, commissioning, validation, and routine operations will be presented. An Announcement of Opportunity for calibration/validation will be issued in the second half of 2025; details will be communicated in oral presentations and posters during the Living Planet Symposium 2025.

Tuesday 24 June 11:30 - 13:00 (Room 0.11/0.12)

Presentation: ALTIUS: The Next Generation of Atmospheric Limb Sounders

Authors: Emmanuel Dekemper, Noel Baker, Antonin Berthelot, Didier Fussen, Pierre Gramme, Nina Mateshvili, Didier Pieroux, Kristof Rose, Sotiris Sotiriadis, Adam Bourassa, Doug Degenstein, Daniel Zawada, Michael François, Luciana Montrone, Daniel Navarro-Reyes, Hilke Oetjen
Affiliations: BIRA-IASB, University of Saskatchewan, ESA-ESTEC
The atmospheric limb sounding community is bracing itself for an era of limited availability of new measurements of atmospheric trace gas concentration profiles with high vertical resolution. Indeed, several limb sounding satellite instruments are about to end their exceptionally long records of observations. Soon, OMPS-LP and SAGE-III will remain as the main providers of O3, aerosol, and temperature profiles (both), and of NO2 and H2O profiles (SAGE-III only). Still, the stratosphere keeps changing, with, for instance, different recovery rates of the O3 layer between the lower and upper stratosphere, or its cooling and moistening caused by the Hunga Tonga eruption. ALTIUS is ESA's upcoming Earth atmospheric limb mission. The primary objective of the mission is to provide near-real-time and consolidated stratospheric ozone profiles. Secondary objectives include stratospheric aerosols, H2O, NO2, NO3, temperature, OClO, BrO, and mesospheric ozone. The mission is in its implementation phase, with both the space and ground segments having reached the critical design review (CDR). The launch is foreseen on a Vega-C rocket in 2027. The mission has some unique features intended to better tackle the common problems faced by previous UV-VIS-NIR limb sounders. First, it is a single-payload mission on an agile platform, therefore giving many options for the observation scenarios. The baseline mission plan combines 100 limb-scatter observations on the day side, 2 solar occultations, and 5 stellar/planetary/lunar occultations on the night side (typical numbers). Second, the instrument is a three-channel spectral imager tuneable from 250 nm to 1020 nm. It offers excellent vertical sampling (<1 km at the tangent point) and allows straightforward in-flight pointing calibration, usually a key driver of the error budget of limb instruments. We will present the mission, focusing on its relevance for the stratospheric ozone community. 
Synergies with the existing and future limb sounders will be discussed, as for example on the complementarity of the spatial coverages. A crucial point for the stratospheric community is the extension of the time series that started more than 30 years ago with SAGE-II. One key application of the consolidated scientific products of the mission is to contribute to this uninterrupted record.

Tuesday 24 June 11:30 - 13:00 (Room 0.11/0.12)

Presentation: ALTIUS Geophysical Validation Plan

Authors: Jean-Christopher Lambert, Dr Steven Compernolle, Dr Daan Hubert, Dr Tijl Verhoelst, Dr Quentin Errera, Dr Arno Keppens, Dr Antje Inness, Dr Natalya Kramarova, Prof Kaley Walker, Prof Kimberly Strong, Dr Robert Koopman, Dr Daniel Navarro-Reyes, Dr Hilke Oetjen, Dr Claus Zehner
Affiliations: Royal Belgian Institute for Space Aeronomy (BIRA-IASB), European Centre for Medium-Range Weather Forecasts (ECMWF), National Aeronautics and Space Administration Goddard Space Flight Center (NASA/GSFC), Department of Physics, University of Toronto, European Space Agency (ESA-ESTEC), European Space Agency (ESA-ESRIN)
Atmospheric Limb Tracker for Investigation of the Upcoming Stratosphere (ALTIUS) is a gap-filler mission responding to the pressing need to ensure, after the upcoming termination of historical limb missions, the global, long-term monitoring of stratospheric ozone, other trace gases, and aerosols at a vertical resolution of 1 km. Implemented as an ESA Earth Watch mission for the 2026-2030 period, ALTIUS will contribute near-real-time data to the Copernicus Atmosphere Monitoring Service (CAMS) and the Belgian Assimilation System of Chemical ObsErvations (BASCOE). ALTIUS also aims to provide consolidated ozone data records to the Copernicus Climate Change Service (C3S) and to international assessments sponsored by the World Meteorological Organization (WMO) and the World Climate Research Programme (WCRP), as well as new research data needed for a better understanding of polar processes and the upper atmosphere. This contribution describes the plan envisioned for the geophysical validation of the ALTIUS profile data for stratospheric O₃, NO₂, H₂O, BrO, OClO, NO₃, aerosols, polar stratospheric and mesospheric clouds, mesospheric O₃, and temperature. After an overview of the mission and user requirements against which ALTIUS data will be validated, the overall validation approach will be described, which combines: (i) comparisons to Fiducial Reference Measurements collected from ground-based monitoring networks (CANDAC, NDACC, SHADOZ…) and during dedicated validation campaigns, (ii) cross-validation with other profiling satellites, extending the ground-based validation to the global domain, and (iii) quality assessments using modelling support from the CAMS and BASCOE data assimilation systems and from the OSSSMOSE metrology simulator. 
A dedicated validation service will provide (i) baseline monitoring of ALTIUS ozone data quality performed by an operational validation system, and (ii) in-depth validation to support the evolution of the data products and associated retrieval algorithms. The operational validation element for ozone data products will be complemented by validation activities to be proposed by the scientific community in response to the upcoming ESA Announcement of Opportunity (AO) for the calibration and validation (Cal/Val) of ALTIUS. The AO Call encompasses the validation of other data products than ozone and aims to open the ALTIUS Cal/Val to a wider range of external data and activities.

Tuesday 24 June 11:30 - 13:00 (Room 0.11/0.12)

Presentation: Preparations at ECMWF for the use of ALTIUS data within CAMS

Authors: Christopher Kelly, Roberto Ribas, Antje Inness, Richard Engelen, Johannes Flemming, Martin Suttie
Affiliations: ECMWF
The Copernicus Atmosphere Monitoring Service (CAMS), operated by the European Centre for Medium-Range Weather Forecasts (ECMWF) on behalf of the European Commission, provides daily analyses and 5-day forecasts of atmospheric composition as well as reanalysis datasets covering past years at global and regional scales. Satellite observations of various trace gases including ozone are routinely assimilated into the CAMS system in support of these products. The continuity of high-quality total-column ozone observations for data assimilation is covered by the Sentinel missions (Sentinel-4, -5 and -5 precursor) for years to come. However, there is an impending break in the availability of high-quality ozone limb observations in the stratosphere when the long-serving Aura-Microwave Limb Sounder (MLS) mission ends. Stratospheric ozone limb observations are particularly valuable to the CAMS system, providing a crucial insight into the vertical structure of ozone in the atmosphere. The ALTIUS mission is set to provide the next generation of ozone limb observations, enabling a timely transfer from the assimilation of MLS stratospheric ozone profiles to the use of ALTIUS stratospheric ozone profiles within CAMS. In this presentation, we explain how CAMS satellite observations are used at ECMWF and provide a specific update on the technical preparations being made for the use of ALTIUS data. The data flow of CAMS satellite observations is divided into three key stages. Firstly, acquisition – the seamless transfer of observations from data provider to ECMWF ahead of the two 12-hour assimilation windows that are run each day. Secondly, ingestion & pre-processing – the transformation of observations from their native format into Binary Universal Form for the Representation of meteorological data (BUFR) format using bespoke converter-decoder software. 
Finally, analysis & forecast – the use of data assimilation to combine the observations with a state-of-the-art atmospheric composition model powered by a supercomputer. ALTIUS pre-launch test data has provided an essential resource for the mission preparation at ECMWF. Our discussion highlights the key technical challenges from this work, such as converting the profiles to partial columns, handling data from different viewing geometries and working with the synthesis data product.

Tuesday 24 June 11:30 - 13:00 (Room 0.11/0.12)

Presentation: Accounting for surface reflectivity inhomogeneities in stratospheric ozone retrieval from limb scattering observations

Authors: Carlo Arosio, Alexei Rozanov, Vladimir Rozanov, Andrea Orfanoz-Cheuquelaf, John Burrows
Affiliations: Institute of Environmental Physics, University of Bremen
This study investigates and mitigates a retrieval artefact identified in tropospheric ozone column data and ozone limb profiles retrieved from OMPS-LP observations at the University of Bremen (IUP). This artefact is associated with inhomogeneities in the surface reflectivity along the satellite line of sight (LOS). At IUP, a tropospheric ozone column (TrOC) product has been produced by exploiting the limb-nadir matching technique applied to OMPS observations. In this data set, we noticed an artefact in the tropical Pacific region, i.e. higher ozone columns in the [0°N, 5°N] latitude band, where the tropospheric ozone is expected to be fairly homogeneous. This issue was traced back to the stratospheric profiles, which show a lower ozone content at their peak altitude. This feature is also visible in the Atlantic, though less pronounced, and, being of the order of 5-7 DU, exceeds the typical uncertainty of the TrOC. Other stratospheric ozone column (SOC) and TrOC data sets, e.g. the NASA OMPS and SCIAMACHY TrOC products, show a similar pattern in the tropical Pacific. In preliminary studies we associated this pattern with the semi-permanent presence of the Inter-Tropical Convergence Zone (ITCZ), a region of high surface reflectivity crossing the satellite LOS. The present contribution belongs to the ESA ENFORCE project, which aims to implement in the IUP radiative transfer model SCIATRAN the possibility of taking into account variations of the surface reflectivity along the satellite LOS (2D mode) in order to mitigate the described artefact. The final goal is the improvement of the TrOC product derived from satellite limb scattering measurements, and the outcome could be of interest for any limb scattering instrument, e.g. SCIAMACHY and ALTIUS. In this presentation, we show the first results of the retrievals performed using the SCIATRAN 2D mode. 
First, we used simulated case studies to better investigate the impact of different idealized distributions of surface reflectivity on the retrieved profiles. Then, we compare the results obtained with the SCIATRAN 2D mode on a subset of OMPS observations with the standard 1D SCIATRAN retrievals and with collocated MLS observations. Finally, we address the impact of the implemented correction on TrOC derived using the limb-nadir matching technique.

Tuesday 24 June 11:30 - 13:30 (Hall G2)
Tuesday 24 June 11:30 - 13:00 (Hall E2)

Session: A.06.01 Geospace dynamics: modelling, coupling and Space Weather - PART 2

This session aims to capture novel scientific research outcomes in the field of geospace dynamics, encompassing the modelling and coupling of the atmosphere, ionosphere, thermosphere, and magnetosphere. A significant contribution is expected from Space Weather science using, but not limited to, data of ESA Earth Observation missions such as Swarm, in particular FAST data, and SMOS. The objective of the session is to collect recent findings that improve the knowledge and understanding of the dynamics and coupling mechanisms of the middle and upper atmosphere and their link with the outer regions that are mainly driven by the Sun and the solar cycle, with a focus on data validation and on Space Weather events. We also solicit results from simulations, ground-based observatories, and other heliophysics missions, in particular those demonstrating synergetic combinations of these elements.

Tuesday 24 June 11:30 - 13:00 (Hall E2)

Presentation: The Spectral Shape of Auroral Plasma Turbulence and its Relation to GPS Scintillations

Authors: Magnus Ivarsen, Professor Glenn Hussey, Professor Jean-Pierre St-Maurice
Affiliations: University Of Saskatchewan
At times, turbulence permeates geospace, Earth's plasma environment, and may be present at all available scale sizes. The phenomenon has been studied for decades, but reliable multi-scale measurements of auroral turbulence are still hard to come by. In this presentation, I will present new measurements of a composite spectrum of plasma turbulence near the aurora borealis, on scale sizes ranging from 10 meters to 10 kilometers. Through space-ground conjunctions we are able to directly connect the composite spectrum to structuring in the large-scale electrical currents that flow with the aurora. We discuss the topic, concluding that a characteristic turbulent shape seems to follow the ionosphere-magnetosphere mapping closely.
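The "spectral shape" of turbulence discussed above is typically characterized by a power-law index fitted to a power spectral density. The sketch below synthesizes a signal with a known power-law spectrum and recovers its index from a periodogram; the signal length and the spectral index of 2 are arbitrary demonstration choices, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthesize a power-law signal: Fourier amplitude ~ f^(-alpha/2)
# gives a power spectral density ~ f^(-alpha).
n, alpha_true = 4096, 2.0
freqs = np.fft.rfftfreq(n, d=1.0)
amp = np.zeros_like(freqs)
amp[1:] = freqs[1:] ** (-alpha_true / 2)
phases = rng.uniform(0, 2 * np.pi, size=freqs.size)
signal = np.fft.irfft(amp * np.exp(1j * phases), n=n)

# Periodogram, then a log-log slope fit over the interior bins
# (DC and Nyquist excluded).
psd = np.abs(np.fft.rfft(signal)) ** 2
slope, _ = np.polyfit(np.log(freqs[1:-1]), np.log(psd[1:-1]), 1)
print(f"fitted spectral index = {-slope:.2f}")  # recovers alpha_true
```

In practice the fit is done per scale range, and a break in the fitted slope between ranges is one way a "composite" multi-scale spectrum reveals different structuring regimes.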

Tuesday 24 June 11:30 - 13:00 (Hall E2)

Presentation: Observations of Plasma Structures of Varying Scale Size in the High-Latitude Ionosphere with a Suite of Instrumentation

Authors: Sophie Maguire, Alan Wood, David Themens, Derek McKay
Affiliations: University Of Birmingham, Sodankylä Geophysical Observatory
Within the high-latitude ionosphere, large-scale plasma structures, such as polar cap patches and blobs, have been observed. These large-scale structures can seed smaller-scale irregularities in the presence of instability mechanisms. It is these smaller-scale irregularities which can lead to the scintillation of trans-ionospheric radio signals, such as those used for Global Navigation Satellite Systems (GNSS). Irregularities which lead to scintillation are on much smaller scale sizes than high-latitude structuring such as polar cap patches. Thus, the Scales of Ionospheric Plasma Structuring (SIPS) experiment was conducted in January 2024 to observe the multi-scale ionosphere and its effects on scintillation. Given that the aim of the SIPS experiment was to observe the ionosphere across various scale sizes, an extensive suite of instrumentation was needed. This experiment utilised a variety of both space-based and ground-based instrumentation, including the European Space Agency’s Swarm satellites, incoherent scatter radars, and radio telescopes, in combination with data modelling techniques. In this experiment, the large-scale structures were observed using the European Incoherent SCATter (EISCAT) radars, the medium-scale structures with the Kilpisjärvi Atmospheric Imaging Receiver Array (KAIRA), and the smaller-scale structures with the Swarm satellites and GNSS receivers. The combination of these instruments in conjunction with modelling techniques gives unprecedented coverage of the varying scale sizes, which is not possible with any individual instrument alone. This presentation showcases the results from this experiment, explaining the relationship between structures of varying scale sizes and their associated scintillation effects.

Tuesday 24 June 11:30 - 13:00 (Hall E2)

Presentation: Swarm-VIP-Dynamic: Models for Ionospheric Variability, Irregularities Based on the Swarm Satellite Data

Authors: Alan Wood, Wojciech Miloch, Yaqi Jin, Daria Kotova, Gareth Dorrian, Lucilla Alfonsi, Luca Spogli, Rayan Imam, Eelco Doornbos, Kasper van Dam, Mainul Hoque, Jaroslav Urbar
Affiliations: Space Environment and Radio Engineering (SERENE) group, University of Birmingham, Department of Physics, University of Oslo, Istituto Nazionale di Geofisica e Vulcanologia, The Royal Netherlands Meteorological Institute (KNMI), German Aerospace Center (DLR), Institute of Atmospheric Physics CAS
The ionosphere is a highly complex plasma containing electron density structures with a wide range of spatial scale sizes. The variability and structuring of this plasma depend on forcing from above and below. Coupling of the ionosphere with the Earth’s magnetosphere and the solar wind, as well as with the neutral atmosphere, makes the ionosphere highly dynamic and highly dependent on the driving processes. Thus, modelling the ionosphere and capturing its full dynamic range across all spatiotemporal scales is challenging. Swarm is the European Space Agency’s (ESA) first constellation mission for Earth Observation (EO), comprising multiple satellites in Low Earth Orbit (LEO). Numerous data products are available, including measures of the ionosphere at a range of spatial scales and the density of the thermosphere. These data products mean that Swarm is uniquely placed to investigate coupling between the dynamic ionosphere and the neutral atmosphere. The Swarm-VIP-Dynamic project started in early 2024 and focuses on variability, irregularities, and predictive capabilities for the dynamic ionosphere. In this project, we develop a suite of models for capturing ionospheric structuring and dynamics at various spatiotemporal scales. In addition to the Swarm data, we will use datasets from other satellites and ground-based instruments for validation and to explore the added value of space instrumentation with various observation and sampling characteristics. We will also test the feasibility of using the models in a real-time environment. Recent results from the Swarm-VIP-Dynamic project are presented, including the model concepts, as well as prospects for further development in the context of space weather and predicting ionospheric space weather effects.

Tuesday 24 June 11:30 - 13:00 (Hall E2)

Presentation: A Decade-long Model of the Fast-varying Ionospheric and Magnetospheric Magnetic Fields Constrained by Ground and Satellite Observations

Authors: Jingtao Min, Dr. Alexander Grayver
Affiliations: ETH Zurich, University of Cologne
The time-varying geomagnetic field is a superposition of contributions from multiple internal and external current systems. A major source of geomagnetic field variations at periods of less than a few years is the current systems external to the solid Earth, namely the ionospheric and magnetospheric currents, as well as the associated currents induced in the Earth’s mantle. Understanding these current systems is at the centre of geospace modelling and space weather studies, and is also crucial for electromagnetic induction studies of the Earth’s interior. We present here reconstructed decade-long models of the ionospheric, magnetospheric, and induced magnetic fields for the period 2014-2023. While the separation of these three sources is mathematically underdetermined using either ground or satellite measurements alone, it becomes tractable with our new geomagnetic field modelling approach, which combines both ground and multi-satellite datasets. Our modelling approach is not confined to data from specific magnetic conditions or local times, nor does it impose harmonic behaviour in time, as is typical of previous models. The resulting field models provide continuous time series of the ionospheric, magnetospheric, and induced field spherical harmonic coefficients, covering all local times and magnetic conditions, without any prescribed time-harmonic behaviour. These new time series unravel complex non-periodic dynamics of the external magnetic fields during global geomagnetic storms, as well as periodicities in the magnetospheric and ionospheric magnetic fields associated with solar activity and lunar tides, respectively. As such, our new model contributes to a better picture of the dynamics of the external current systems and of magnetosphere-ionosphere interactions. Our new modelling approach is highly versatile and flexible, allowing for on-the-fly estimation and generation of geomagnetic field models with high temporal resolution. 
The new approach and the published model will hence be relevant for space physics, and can facilitate space weather operations.
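The internal/external source separation described above is conventionally expressed through the Gauss representation of the geomagnetic potential, in which internal (core, crustal, induced) and external (ionospheric, magnetospheric) sources carry distinct radial dependencies. A generic form (standard textbook notation, not the specific parameterisation of this model) is:

```latex
\[
V(r,\theta,\phi,t) = a \sum_{n=1}^{N}\sum_{m=0}^{n}
\Big[ \Big(\tfrac{a}{r}\Big)^{n+1}\big(g_n^m(t)\cos m\phi + h_n^m(t)\sin m\phi\big)
    + \Big(\tfrac{r}{a}\Big)^{n}\big(q_n^m(t)\cos m\phi + s_n^m(t)\sin m\phi\big)
\Big] P_n^m(\cos\theta),
\qquad \mathbf{B} = -\nabla V,
\]
```

where \(a\) is the Earth's reference radius, \(P_n^m\) are Schmidt semi-normalised associated Legendre functions, \((g_n^m, h_n^m)\) are internal and \((q_n^m, s_n^m)\) external spherical harmonic coefficients.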
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall E2)

Presentation: Equatorward Closure of Region 2 Birkeland Currents

Authors: David Knudsen, Yhihenew Getu
Affiliations: University Of Calgary
The classic picture of the Birkeland current system includes a poleward (R1) and an equatorward (R2) sheet at most local times [Iijima and Potemra, 1976a], with an additional poleward sheet near noon [Iijima and Potemra 1976b] and midnight (in the Harang region). Away from noon and midnight, the R1/R2 currents are generally considered to form a nearly-balanced pair, with a fraction of the R1 currents closing across the polar cap, and R2 comprising the Birkeland system’s equatorward boundary. However, using precision magnetic field measurements from Swarm, we find that approximately 20% of auroral zone traversals display evidence of an additional sheet equatorward of R2, occurring at all local times, and having the opposite polarity of R2, indicating partial closure of the R2 sheet in the equatorward direction. In this presentation we explore the dependence of these “Region 3” currents on local time and other parameters.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall E2)

Presentation: Self-Organized Criticality and Intermittency in the Integrated Power of High-Latitude Ionospheric Irregularities

Authors: Hossein Ghadjari, David Knudsen, Georgios Balasis
Affiliations: University Of Calgary, National Observatory of Athens
This study investigates the statistical properties of plasma density fluctuations in the auroral and polar cap regions using data from the entire Swarm mission. A key focus is to characterize the probability distribution functions of these fluctuations and extract insights into the occurrence and nature of extreme plasma density events. These events, often associated with significant ionospheric disturbances, will be analyzed to evaluate their effects on Swarm GPS receiver performance. By examining the spatial and temporal patterns of extreme events, this research aims to further our understanding of the dynamics driving extreme irregularities in the high-latitude ionosphere.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.85/1.86)

Session: A.09.01 The mountain cryosphere in peril – improved monitoring of snow and ice in complex terrain to address societal challenges in the face of climate change

The impact of climate change on the cryosphere in mountain areas is increasing, affecting billions of people living in these regions and in downstream communities. The latest Intergovernmental Panel on Climate Change Assessment Report highlights the importance of monitoring these changes and assessing trends for water security, as well as the risks of geo-hazards such as glacial lake outburst floods (GLOFs), landslides, and rockfalls.

This session will explore advanced methods and tools for monitoring physical parameters of snow, glaciers, and permafrost in mountainous regions using data from current satellites. We will also discuss the potential of upcoming satellites, to be launched in the near future, to enhance these observations and fill any gaps. By improving our understanding of water availability in mountainous areas and identifying key risks, we can develop strategies to adapt to the changing conditions and better protect these vulnerable regions.

We welcome contributions on advanced geophysical observations of snow, glaciers and permafrost variables in mountainous regions around the world using different satellite data and their impact on water resources and the increasing risks posed by geo-hazards under changing climate conditions.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.85/1.86)

Presentation: Deep learning for automated mapping of marginal snow in Sentinel-2 satellite imagery

Authors: Leam Howe, Prof Richard Essery
Affiliations: University Of Edinburgh
Mountain snow plays vital roles as a water reservoir, habitat, and recreational area, but it also poses significant risks like floods and avalanches and is highly sensitive to climate change. The task of measuring and forecasting snow cover is complicated by the high spatial variability of mountain snow compared to the resolutions available from satellite sensors and models, especially in complex terrain where accurate data are critical. Existing remote sensing products, often developed and validated in regions with persistent seasonal snow cover, face challenges in regions with marginal or ephemeral snow. Such products can struggle as the 'reasonable' omissions/errors made in areas of abundant snow cover become significant when applied to areas with variable or fleeting snow cover. In the face of ongoing climate change, many permanent seasonal snowpacks are transitioning to marginal and ephemeral conditions. There is, therefore, a need to develop remote sensing products that perform well in these challenging regions. To address this issue, we train a U-Net-based machine learning model to map snow and cloud cover in Sentinel-2 imagery. Our model was trained on a relatively small dataset of late-lying snow cover in the Highlands of Scotland. Despite the dataset's limited size, our approach achieved a high overlap score on our testing set and reduced the error in snow cover areal extent by an order of magnitude compared to NDSI-based methods, demonstrating high accuracy with modest computational demand (approximately 15 minutes of training time on a GPU). The results also show that our model better accommodates the diverse locations and spectral properties of snow found under Scotland's temperate maritime climate, and can accurately identify snow in challenging atmospheric conditions and cloud effects.
Preliminary tests in other climatically and geographically diverse regions such as Greenland and Australia suggest that our model maintains consistent performance, though further validation is required to confirm its generalisability. This proof-of-concept establishes that machine learning, and specifically deep learning with convolutional neural networks, can capture the numerous spectral and spatial characteristics of mountain snow found in optical satellite data, and could improve projections and studies in regions experiencing transitional snow conditions.
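The NDSI baseline that this model is compared against can be sketched in a few lines. This is a generic illustration of the index (band names and the 0.4 threshold are common conventions, not details taken from the abstract):

```python
import numpy as np

def ndsi(green, swir):
    """Normalized Difference Snow Index: (green - SWIR) / (green + SWIR).
    For Sentinel-2 this is typically computed from bands B03 (green)
    and B11 (SWIR)."""
    green = np.asarray(green, dtype=float)
    swir = np.asarray(swir, dtype=float)
    # Small epsilon guards against division by zero over dark pixels.
    return (green - swir) / (green + swir + 1e-12)

def snow_mask(green, swir, threshold=0.4):
    """Binary snow map; 0.4 is a commonly used NDSI threshold."""
    return ndsi(green, swir) > threshold
```

Applied per pixel, e.g. `snow_mask(b03, b11)` on two reflectance arrays, this yields the kind of binary snow map whose areal-extent errors the U-Net approach above reduces.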
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.85/1.86)

Presentation: Improved monitoring of seasonal snow characteristics in mountainous terrain by means of satellite data

Authors: Gabriele Schwaizer, Thomas Nagler, Markus Hetzenecker, Ursula Fasching, Maria Heinrich, Johanna Nemec, Tanja Rahim, Andrea Scolamiero, Helmut Rott
Affiliations: ENVEO IT GmbH
Mountain regions act as water towers for the surrounding lowlands, supplying several million people downstream with fresh water. Precise mapping of snow characteristics in mountain terrain is of high relevance for many applications linked to water management, hydrology, natural hazards, and hydropower generation. The Copernicus Sentinel-1/-2/-3 satellites provide an excellent data basis for continuous monitoring of the seasonal snow from local to global scale. The complexity of mountainous terrain requires advanced methods to retrieve high quality information on the spatial distribution of the seasonal snow cover and its properties. Optical satellite data, as available from the Sentinel-2 MSI and Sentinel-3 SLSTR & OLCI sensors, can be used for monitoring the seasonal snow extent at different spatial and temporal scales. To improve the snow cover classification in mountain terrain, particularly in cast-shadow areas, an improved retrieval method based on multi-spectral unmixing with locally adaptive end-member selection has been developed. End-members of fully snow-covered and snow-free pixels in high Alpine terrain are selected per scene and separately for illuminated and shaded areas. Based on the multi-spectral reflectance information of the resulting four end-members, the snow cover fraction is estimated for all remaining pixels with a multi-spectral unmixing procedure. This method significantly improves the snow cover fraction estimation in cast-shadow areas of mountain regions. The method is applicable to optical satellite sensors having spectral bands from the visible to the shortwave infrared range. The algorithm can consider all available reflective spectral bands for the unmixing procedure.
Snow cover fraction maps generated from Sentinel-2 and Landsat data over the Alps were validated with very high resolution reference snow maps from WorldView-2/3 images, showing a bias close to 0% and an overall root mean square error of about 15%. Additionally, the algorithm was tested with data from different medium resolution optical satellite sensors, including Sentinel-3 SLSTR & OLCI, Terra MODIS, and S-NPP VIIRS. The resulting snow cover fraction maps were compared with the products from Sentinel-2 and Landsat data, providing consistent information about the snow covered areas at the different spatial and temporal resolutions. While optical satellite data enable the classification of the total snow area, Synthetic Aperture Radar (SAR) satellite data, such as the C-band data available from Sentinel-1, allow the identification of melting snow areas. The wet snow retrieval is based on a change detection algorithm optimized for high mountain terrain, as the occurrence of liquid water within the snowpack reduces the backscatter signal compared to dry snow or snow-free conditions. An important step for wet snow classification from SAR data is the preparation of a reference backscatter map per track. Repeat-pass SAR data acquired at dry snow conditions, or optionally snow-free conditions, are used as the database. The backscatter ratio between a SAR image acquired at melting conditions and the reference backscatter map of the same orbit forms the basis for identifying wet snow areas. Combining the VV and VH polarization Sentinel-1 SAR data as a function of the local incidence angle helps to reduce impacts of the local incidence angle on the backscatter ratio and thus improves the wet snow retrieval, in particular at small local incidence angles. A threshold is applied to separate wet snow from other surfaces. Dry snow and snow-free areas cannot be discriminated from SAR satellite data.
The comparison of the wet snow classification from SAR satellite data with snow cover fraction maps retrieved from high resolution optical satellite data during the main melting season in Alpine terrain, assuming melt conditions for the complete snowpack, resulted in an overall accuracy of more than 90% and F-scores close to 90%. To exploit the snow information from different satellite sensors over mountain terrain, the daily snow covered area from Sentinel-3 data can be combined with the snow melt extent information from Sentinel-1 data. Thus, snow free areas, melting snow areas which potentially contribute to the runoff and areas still covered by dry snow can be discriminated. High resolution snow cover extent information from cloud-free Sentinel-2 data can be used to get further details about the spatial distribution of the snow area. We will present the methods for the improved retrieval of snow characteristics in mountain terrain observable from the Copernicus Sentinel-1/-2/-3 satellites, demonstrate the improvements of each individual approach and highlight the added value by the combination of the different snow information sources for potential applications. Further, we will demonstrate the applicability of the presented methods in other selected mountain regions around the world, based on activities in support of the Common Observing Period Experiment (COPE) of the International Network for Alpine Research Catchment Hydrology (INARCH).
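The core of the change-detection step described above, a backscatter ratio against a dry/snow-free reference followed by a threshold, can be sketched as follows. This is a minimal illustration: the threshold value, and the incidence-angle-dependent VV/VH weighting used operationally, are simplifications (the -2 dB value here is purely illustrative):

```python
import numpy as np

def wet_snow_mask(sigma0_db, reference_db, threshold_db=-2.0):
    """Flag wet snow where backscatter drops below the per-track
    dry/snow-free reference by more than a threshold. In dB space the
    backscatter ratio becomes a difference."""
    ratio_db = np.asarray(sigma0_db, float) - np.asarray(reference_db, float)
    # Liquid water attenuates the backscatter, so wet snow shows a
    # strongly negative ratio relative to the reference.
    return ratio_db < threshold_db
```

A call such as `wet_snow_mask(melt_scene_db, reference_db)` on co-registered dB images of the same orbit yields a binary wet-snow map; dry snow and snow-free ground remain indistinguishable, as noted above.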
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.85/1.86)

Presentation: Snow Line Elevation Trends in the Alps, Pyrenees, and Andes Mountains, derived from 40-year Landsat snow cover time series

Authors: Andreas Dietz, Sebastian Roessler, Jonas Koehler
Affiliations: German Aerospace Center (DLR), German Remote Sensing Data Center (DFD)
Climate change is affecting the snow cover in mountain regions all around the world. With temperatures increasing, snow melt starts earlier every year in both hemispheres, leading to various effects such as changes in the runoff regime, albedo, vegetation dynamics, animal habitats, floods, and impacts on tourism and hydropower generation. Because temperatures are expected to increase even more in the upcoming years, a detailed trend analysis of past developments is needed to understand the potential effects in the future. Because climate models are oftentimes too coarse to produce reliable results for the complex terrain of mountain regions, time series of high-resolution remote sensing data offer a great alternative. At the German Aerospace Center (DLR), methods to derive Snow Line Elevation (SLE) statistics based on long-term time series of Landsat data have been developed, which can be utilized to derive monthly SLEs for every mountain catchment around the globe where Landsat data are available. The challenges when dealing with Landsat time series comprise aspects such as considerable data gaps caused by cloud cover, differing data availability throughout the years and Landsat generations, and the generally difficult-to-handle conditions in steep mountain terrain. The derived SLEs can be analyzed to identify trends in autumn or spring, delineating to which extent the snow cover is retreating each year. These trends can be further analyzed to assess their significance, or can be used to predict the potential future SLE retreat. The developed methods have been applied to the European Alps, the Pyrenees, and some catchments in the Chilean Andes close to Santiago de Chile. The analysis of the SLEs has revealed significant trends in all three regions, with SLEs retreating by up to 20 meters per year during spring. These snow cover changes can pose significant challenges to flora, fauna, and humans in the affected regions and beyond.
The presentation will outline the general methodology behind the SLE retrieval, and will then focus on the trends detected within the three study regions and potential future developments that can be expected there.
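The trend step, fitting a rate of SLE change per year to a monthly SLE time series, amounts to a simple least-squares regression. A minimal sketch (the actual DLR processing chain, gap handling, and significance testing are more involved; a Mann-Kendall or similar test would typically be applied on top):

```python
import numpy as np

def sle_trend(years, sle_m):
    """Least-squares linear trend of snow line elevation.
    Returns (slope in m/yr, intercept in m). Positive slope means the
    snow line is rising, i.e. snow cover is retreating upslope."""
    years = np.asarray(years, dtype=float)
    sle = np.asarray(sle_m, dtype=float)
    slope, intercept = np.polyfit(years, sle, 1)
    return slope, intercept
```

For example, a spring SLE series rising by 20 m every year would return a slope of about 20 m/yr, matching the magnitude of retreat reported above.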
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.85/1.86)

Presentation: Beyond snow and glaciers: Quantifying aufeis thickness in the Trans-Himalaya of Ladakh, India

Authors: Dr. Dagmar Brombierstäudl, Dr. Susanne Schmidt, Dr. Mohd Soheb, Prof. Dr. Marcus Nüsser
Affiliations: Department of Geography, South Asia Institute (SAI), Heidelberg University, Heidelberg Centre for the Environment, Heidelberg University, Heidelberg
Aufeis is often associated with permafrost and cold-arid conditions and is one of the least studied components of the Trans-Himalayan cryosphere. These seasonal, laminated, sheet-like ice masses form in winter by the successive freezing of overflowing water that seeps from the ground or a spring, or emerges from river ice. They are an important water source for field irrigation and pastoral communities in Ladakh. In some villages, aufeis accumulation has been enhanced for decades in ice reservoirs (commonly known as “artificial glaciers”) to store the winter baseflow for crop irrigation during the water-scarce period in spring. Despite this importance, research on aufeis in the region is still in its early stages. In previous studies we mapped a total aufeis-covered area of almost 400 km² across the Trans-Himalaya between 4000 and 5500 m a.s.l. The number and size distribution shows a distinct increase towards the Tibetan Plateau, indicating the importance of cold-arid climatic conditions for aufeis development. The largest individual aufeis field covers an area of 14 km², almost triple the size of the largest high-altitude glaciers in Central Ladakh. While mapping the maximum spatial extent of aufeis is feasible, thickness estimations are more challenging. In this study we demonstrate the unexplored potential of differencing digital elevation models (DEMs) derived from very high-resolution stereo Pléiades satellite data and from terrestrial photographs for aufeis studies. We selected four case study sites: two ice reservoirs (Igoo and Phuktse) and two catchments (Gya and Sasoma) with natural aufeis occurrence. While Pléiades data were available for all sites, terrestrial imagery was only acquired for the ice reservoirs due to the limited accessibility of the Gya and Sasoma catchments. In total six stereo images were acquired - three during the ice-free reference period (September/October 2022) and three during the aufeis-covered season (February/March 2023).
Due to strict UAV regulations, 5700 terrestrial photographs were taken at five-meter intervals by walking around the slopes of the ice reservoirs during the summer and winter field surveys. Calculation of DEMs from both datasets relied on the Structure-from-Motion technique from computer vision and photogrammetry, which reconstructs DEMs from overlapping 2D imagery. The Pléiades DEMs were computed with the open-source NASA Ames Stereo Pipeline, and the DEMs from the terrestrial photographs with the commercial Agisoft Metashape software. DEM differencing revealed ice thicknesses of up to 2.8 m in both ice reservoirs, while natural aufeis fields occasionally reach even greater thicknesses of over 3 m. Aufeis volumes across the four study sites range from 34,106 ± 13,440 m³ in Phuktse up to 105,790 ± 28,511 m³ in Sasoma, indicating substantial amounts of water that need to be considered in future hydrological studies. The results from very high-resolution stereo satellite imagery are promising for aufeis studies on large spatial scales. Their usage can fill an observation gap caused by the remoteness and inaccessibility of many aufeis-prone areas and the large sizes of individual aufeis fields. Point clouds and DEMs from terrestrial photographs revealed a high level of detail that is especially useful for in-depth studies of aufeis morphology and seasonal dynamics. In the context of ice reservoirs, this could even have practical implications for the development of sustainable water management strategies. This study not only represents the first quantification of aufeis thickness in the Trans-Himalaya, but also contributes to the ongoing scientific efforts to apply existing and well-established remote sensing methods to aufeis studies on regional and global scales. It also highlights the importance of studying this lesser-known cryosphere component to improve our understanding of mountain hydrology.
It might help to shed light on factors that play a significant role in aufeis formation and persistence, like permafrost or groundwater distribution that is still unknown for most parts of the Trans-Himalaya.
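The DEM-differencing step that underlies the thickness and volume figures above reduces to a per-pixel subtraction and a sum. A minimal sketch, assuming co-registered winter and ice-free DEM arrays (the `noise_floor_m` mask for DEM uncertainty is an illustrative simplification of the study's error treatment):

```python
import numpy as np

def aufeis_volume(dem_winter, dem_reference, pixel_area_m2, noise_floor_m=0.0):
    """Aufeis thickness = winter DEM minus ice-free reference DEM;
    volume = sum of positive thickness times pixel area.
    Differences at or below noise_floor_m are treated as no ice."""
    thickness = np.asarray(dem_winter, float) - np.asarray(dem_reference, float)
    thickness = np.where(thickness > noise_floor_m, thickness, 0.0)
    volume_m3 = float(thickness.sum() * pixel_area_m2)
    return thickness, volume_m3
```

For instance, a thickness grid summing to 3 m over 2 m x 2 m pixels gives 12 m³; in practice DEM co-registration and uncertainty propagation dominate the effort.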
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.85/1.86)

Presentation: Ensemble-based cryospheric reanalysis to infer global snow mass

Authors: Kristoffer Aalstad, Esteban Alonso-González, Joel Fiddes, Gregoire Guillet, Andreas Kääb, Norbert Pirk, Désirée Treichler, Stefan Wunderle, Sebastian Westermann, Yeliz Yilmaz
Affiliations: University Of Oslo, Pyrenean Institute of Ecology, SLF, University of Bern
Snow, glaciers, and permafrost are essential climate variables (ECV) that regulate the global cycles of energy, carbon, and water. At the same time, these cryospheric ECVs are only partially observable by satellites due to gaps, noise, and indirect retrieval algorithms. Data assimilation (DA), namely the Bayesian fusion of uncertain models and noisy observations, presents a natural solution to the problem of partial observability. Nonetheless, unlike in numerical weather prediction, DA has received relatively little attention from the Earth observation (EO) community despite its potential as a generalized retrieval framework that adds value by filling gaps and inferring latent ECVs with uncertainty quantification. This contribution presents active research on applying ensemble-based DA techniques to carry out cryospheric reanalysis constrained by satellite data. Our future goals with these efforts are to generate consistent global high resolution reanalyses of seasonal snow mass (also known as snow water equivalent or simply SWE), glacier mass balance, and the thermal state of permafrost. In doing so, we seek to leverage a myriad of data streams including Earth observing satellites, global atmospheric reanalyses, airborne retrievals, and in-situ observations. Here we focus on seasonal snow since accurate global snow mass estimation, particularly in mountainous terrain, remains a major unsolved problem in snow hydrology. To help tackle this problem, we are developing a global scale ensemble-based snow reanalysis product by assimilating ESA Snow_cci fractional snow-covered area retrievals. Using a simple snow model we are able to devote considerable computational resources to generate a global snow reanalysis at daily temporal resolution and kilometric spatial resolution while still being able to afford using promising iterative ensemble-based DA schemes. 
For the latter, we explore recent developments in hybridizing iterative ensemble Kalman and particle methods to provide robust Bayesian posterior inference. In particular, we demonstrate how these schemes can be used as nested smoothers to hierarchically infer prior hyperparameters related to snow climatology. The new reanalysis approach is evaluated using independent spaceborne, airborne, and in-situ validation data by comparing its performance to state-of-the-art regional snow reanalyses and existing global snow mass products in the form of ERA5-Land reanalysis data and ESA Snow_cci SWE retrievals. Unlike existing global snow products, this new snow reanalysis product is uncertainty-aware, assimilates snow satellite data, and specifically targets a key knowledge gap concerning mountain snow mass. Concurrent efforts to adapt this DA framework towards the generation of global ensemble-based reanalyses for glaciers and permafrost will also be showcased to emphasize synergies across these terrestrial cryospheric reanalysis efforts. We highlight that, by combining EO with cryospheric models, ensemble-based DA can transform largely untapped climate data into actionable climate information on cryospheric ECVs. The baked-in uncertainty quantification in this probabilistic climate information empowers us to make decisions in response to climate change and its perilous impacts on the mountain cryosphere.
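The ensemble-based analysis step at the heart of such a reanalysis can be illustrated with a minimal stochastic ensemble Kalman update for a single observation. This is a sketch only: the iterative ensemble Kalman and particle-hybrid schemes named above are considerably more elaborate, and the observation operator `h` (e.g. mapping a SWE state to a predicted fractional snow-covered area) is a hypothetical stand-in:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err_std, h, rng):
    """Stochastic EnKF analysis step for one scalar observation.
    ensemble: array (n_members, n_state); h maps a state vector to a
    predicted scalar observation; rng perturbs the observation."""
    preds = np.array([h(x) for x in ensemble])          # predicted obs, (n,)
    # Sample cross-covariance between state and predicted observation.
    cov_xy = np.mean((ensemble - ensemble.mean(axis=0)) *
                     (preds - preds.mean())[:, None], axis=0)
    var_y = preds.var() + obs_err_std ** 2
    gain = cov_xy / var_y                               # Kalman gain, (n_state,)
    # Perturbed-observation form keeps the analysis ensemble spread.
    perturbed = obs + rng.normal(0.0, obs_err_std, size=len(preds))
    return ensemble + gain[None, :] * (perturbed - preds)[:, None]
```

With an informative observation, the analysis ensemble mean moves toward the observed value while retaining spread, which is what provides the uncertainty quantification emphasised in the abstract.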
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.85/1.86)

Presentation: Using temporal interpolation on optical-derived labels improves snow detection on SAR images using deep learning method

Authors: Swann Briand, Flora Weissgerber, Sylvain Lobry, Jérôme Idier
Affiliations: DTIS, ONERA, Université Paris Saclay, SIP, LIPADE, Université Paris Cité, LS2N, CNRS
Snow detection is important in many domains as it is a key variable for climate monitoring [Aguirre2018]. It also allows us to assess available water resources for human consumption or hydroelectricity generation [Rouhier2018]. Using optical data, snow can be detected because of its high reflectance in the visible spectrum and its low reflectance in the shortwave infrared spectrum. The Normalized Difference Snow Index (NDSI), which exploits these spectral characteristics [Hall1995], is commonly used to create binary or fractional snow cover maps, but is highly sensitive to cloud cover, resulting in unevenly spaced time series. Synthetic Aperture Radar (SAR) data can be acquired at night and through clouds. However, snow detection with SAR is challenging, as dry snow is almost transparent to SAR and most of the observed signal comes from the ground. A method to retrieve dry snow depth using ratios between VV and VH backscatter compared to a reference created from means of snow-free acquisitions was proposed by [Lievens2019], but it needs prior information about snow presence. When snow melts, its liquid water content increases and most of the signal is scattered in the specular direction, strongly attenuating the backscattered signal. In [Nagler2016], the authors use this attenuation by combining ratios of both polarization backscatters with their reference to detect wet snow with a thresholding method. As this is a pixel-wise decision, it can be noisy. Deep learning methods can be used to detect wet snow with optical-derived labels in a semantic segmentation task, using ratios between backscatter and reference as input, as this presents the advantage of being independent of the incidence angle [Lê2023]. In [Montginoux2023], the [Nagler2016] and [Lievens2019] ratios were concatenated to detect wet and dry snow, and topographic information was added in [Gallet2024].
These previous methods show promising results; however, the optical-derived label maps used for training the network are always patchy due to high cloud cover. In this study, we investigate whether increasing the number of labels by temporal interpolation of the NDSI improves wet and dry snow detection results, even if these labels are uncertain. We compare two temporal interpolation methods for the NDSI: Closest Neighbours Interpolation (CNI) using a three-day window and a Kalman smoother. CNI only fills small gaps, taking little risk, while the Kalman smoother estimates an NDSI value for each date regardless of the gap size. The NDSI maps are then thresholded to get binary snow cover maps and projected onto the SAR geometry using the LabSAR algorithm [Weissgerber2022]. To conduct this study, we first assess which set of input channels gives the best results when training a network with non-interpolated labels. We consider four sets of input channels: the one used in [Montginoux2023] (A), the one used in [Lê2023] (B), a concatenation of VV and VH backscatter (C), and channel set C concatenated with their references (D). After identifying the best set, we use it to assess the effect of label interpolation. As the performance of machine learning methods is highly dependent on training data, we test the robustness of our method under spatial and temporal domain shift. We use pairs of SAR acquisitions and optical label maps from the Guil basin, located in the Queyras massif in the French Alps, from September 2018 to June 2019 as our main domain and split it temporally into a training set, a validation set to avoid overfitting and tune model hyperparameters, and a test set to evaluate it. To evaluate temporal shift robustness, we use acquisitions from the same basin between September 2019 and June 2020, and for spatial shift the Gyronde basin in the Ecrins massif between September 2018 and June 2019.
The Sentinel-1 data are acquired in interferometric wide swath mode, with a range-azimuth ground resolution of 5x20 m and a temporal resolution of 6 days. Three orbits cover each basin, so we have 6 acquisitions every 12 days combining both ascending and descending orbits. Reference images are computed for each year, basin and orbit using snow-free acquisitions between June and August of the respective year. The optical data are from the MOD10A1 dataset [Hall2021] of the National Snow and Ice Data Center (NSIDC), which provides daily NDSI and cloud cover maps for both basins at 500 m ground resolution. When investigating channel sets, each yields accuracies over 0.85 without domain transfer, with D performing best at 0.899 accuracy. With temporal transfer, performances do not change much, as we have more mono-class dates which are easy to segment, and D remains the best channel set. Spatial transfer is a harder task, but D is still the best channel set with an accuracy of 0.866. With qualitative evaluation of the predicted maps, we see that models trained with channel set C can miss snow, as reference information about snow-free ground is needed. Models trained with channel set D always predict better maps than those trained with channel sets A and B, which are less precise and noisier. Keeping the reference as an independent channel in channel set D allows better segmentation, as using a ratio between backscatter and reference removes incidence-angle variability and thus topographic information. For the rest of the study, we use channel set D as input, and train models using CNI and the Kalman smoother with different regularization parameter values. All the interpolation methods improve on using non-interpolated labels, and CNI performs best with accuracies over 0.9 for all domains.
Qualitatively, we see a clear improvement in the predicted snow maps, which are smoother due to better spatial regularization learned during training, where the network sees less patchy label maps. Using any label interpolation is better than none, but the Kalman smoother performs worse than CNI. While it yields more labels than CNI, filling all the gaps in a pixel's time series increases the risk of introducing label noise through more frequent misclassification. To improve our method, we could use the estimation variance output by the smoother to model the confidence in a label and use this information during training.

References:
[Aguirre2018] F. Aguirre et al., "Snow Cover Change as a Climate Indicator in Brunswick Peninsula, Patagonia", Front. Earth Sci., vol. 6, Sept. 2018, doi: 10.3389/feart.2018.00130.
[Gallet2024] M. Gallet, A. Atto, F. Karbou, and E. Trouvé, "Wet Snow Detection From Satellite SAR Images by Machine Learning With Physical Snowpack Model Labeling", IEEE J. Sel. Top. Appl. Earth Observations Remote Sensing, vol. 17, pp. 2901-2917, 2024, doi: 10.1109/JSTARS.2023.3342990.
[Hall1995] D. K. Hall, G. A. Riggs, and V. V. Salomonson, "Development of methods for mapping global snow cover using moderate resolution imaging spectroradiometer data", Remote Sensing of Environment, vol. 54, no. 2, pp. 127-140, Nov. 1995, doi: 10.1016/0034-4257(95)00137-P.
[Hall2021] D. K. Hall and G. A. Riggs, "MODIS/Terra Snow Cover Daily L3 Global 500m SIN Grid, Version 61", NASA National Snow and Ice Data Center Distributed Active Archive Center, 2021, doi: 10.5067/MODIS/MOD10A1.061.
[Lê2023] T. T. Lê, A. Atto, E. Trouvé, and F. Karbou, "Deep Semantic Fusion of Sentinel-1 and Sentinel-2 Snow Products for Snow Monitoring in Mountainous Regions", in IGARSS 2023 - 2023 IEEE International Geoscience and Remote Sensing Symposium, Pasadena, CA, USA: IEEE, July 2023, pp. 6286-6289, doi: 10.1109/IGARSS52108.2023.10282065.
[Lievens2019] H. Lievens et al., "Snow depth variability in the Northern Hemisphere mountains observed from space", Nat. Commun., vol. 10, no. 1, p. 4629, Oct. 2019, doi: 10.1038/s41467-019-12566-y.
[Montginoux2023] M. Montginoux, F. Weissgerber, S. Lobry, and J. Idier, "Évaluation du couvert neigeux à partir d'images SAR par apprentissage profond basé sur des images optiques de référence" [Snow cover assessment from SAR images by deep learning based on reference optical images], in 29e colloque GRETSI, Grenoble, France, Aug. 2023. [Online]. Available: https://hal.science/hal-04256105
[Nagler2016] T. Nagler, H. Rott, E. Ripper, G. Bippus, and M. Hetzenecker, "Advancements for Snowmelt Monitoring by Means of Sentinel-1 SAR", Remote Sensing, vol. 8, no. 4, Apr. 2016, doi: 10.3390/rs8040348.
[Rouhier2018] L. Rouhier, "Régionalisation d'un modèle hydrologique distribué pour la modélisation de bassins non jaugés. Application aux vallées de la Loire et de la Durance" [Regionalisation of a distributed hydrological model for ungauged basins, applied to the Loire and Durance valleys], PhD thesis, Sorbonne Université, 2018. [Online]. Available: https://theses.hal.science/tel-02409965
[Weissgerber2022] F. Weissgerber, L. Charrier, C. Thomas, J.-M. Nicolas, and E. Trouvé, "LabSAR, a one-GCP coregistration tool for SAR–InSAR local analysis in high-mountain regions", Front. Remote Sens., vol. 3, p. 935137, Sept. 2022, doi: 10.3389/frsen.2022.935137.
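The CNI idea described in this abstract, filling cloud gaps with the temporally closest clear NDSI observation within a small window, can be sketched for a single pixel's time series. This is an illustrative reconstruction; the exact window handling in the study may differ:

```python
import numpy as np

def closest_neighbour_fill(ndsi_series, max_gap_days=3):
    """Fill cloud gaps (NaN) in a daily NDSI time series with the value
    of the temporally closest clear observation, but only if that
    neighbour is within max_gap_days; larger gaps remain unfilled."""
    series = np.asarray(ndsi_series, dtype=float).copy()
    valid = np.flatnonzero(~np.isnan(series))
    if valid.size == 0:
        return series
    for i in np.flatnonzero(np.isnan(series)):
        j = valid[np.argmin(np.abs(valid - i))]   # closest clear day
        if abs(int(j) - int(i)) <= max_gap_days:
            series[i] = series[j]
    return series
```

This conservative gap filling produces the denser, but still low-risk, label maps that the study found to outperform both non-interpolated labels and the gap-size-agnostic Kalman smoother.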
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.34)

Session: F.02.13 International Cooperation in Spaceborne Imaging Spectroscopy

Building on the outcomes of the 3rd Workshop on International Cooperation in Spaceborne Imaging Spectroscopy (WICSIS-2024; https://hyperspectral2024.esa.int/), held at ESA-ESTEC (Netherlands) on 13-15 November 2024, this insight session will continue to explore opportunities and challenges for international collaboration in this field.
Imaging spectroscopy from space in the visible-to-shortwave-infrared has emerged as a powerful tool for monitoring the Earth's surface. In recent years, the availability of high spatial resolution (i.e. ~30 m pixel size) imaging spectroscopy data from space, accessible to users for scientific or commercial purposes, has increased tremendously thanks to the successful deployment of PRISMA (ASI), DESIS (DLR), HISUI (METI), EnMAP (DLR) and EMIT (NASA/JPL), paving the way for the development of future missions such as PRISMA Second Generation (ASI), SBG (NASA/JPL) and CHIME (ESA/EC). The exploitation of these growing data streams creates immense opportunities for scientific and operational users and stakeholders. However, to meet the growing demand for ever higher temporal frequency of observations, and to bridge the gap in spatial resolution with multi-spectral products, a combination of data from different missions, and the integration of growing constellations of commercial satellites, will be necessary.
This session aims to bring together key stakeholders from government agencies, research institutions, and industry to discuss the latest advancements, challenges, and opportunities in spaceborne imaging spectroscopy, with a focus on medium/high spatial resolution VSWIR products and the activities carried out within the CHIME-SBG cooperation. Topics will include the development of instrument-agnostic algorithms and interoperable products, the validation of global products, and open science approaches. By facilitating open dialogue and exchange of ideas, we aspire to build stronger partnerships and lay the groundwork for even stronger future collaboration among Agencies and interactions with the user community.

Presentations and speakers:


Instrument-Agnostic Science: International Cooperation with the SBG-VSWIR Mission


  • David R. Thompson - NASA/JPL

International scenario on hyperspectral missions: maximizing users' benefits


  • Simona Zoffoli - ASI

Equality in imaging spectroscopy missions: needs and perspectives


  • Monica Pepe - CNR

Example of applications using time-series from spaceborne imaging spectrometers


  • Sabine Chabrillat - GFZ

EnMAP synergies with hyperspectral missions and international campaigns


  • Vera Krieger - DLR
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall N1/N2)

Session: D.05.05 CDSE User Review Meeting - User Innovations in Action

This session highlights the scientific work of experienced users leveraging the Copernicus Data Space Ecosystem to address pressing environmental and societal challenges. Champion users from diverse fields will share their work, methodologies, and findings, illustrating how Copernicus data is being applied to advance knowledge in different areas. Attendees will gain insight into applied case studies and innovative uses of the platform, fostering a deeper understanding of the ecosystem's capabilities and its role in supporting impactful scientific work and operational services. The session is driven by an open call allowing users to submit their Unique User Story; selected stories will be presented through specific examples of how these user insights have led to impactful improvements and innovations. Join us to explore how user-driven contributions are making the Copernicus Data Space Ecosystem more accessible, responsive, and effective for diverse applications across sectors.

Presentations and speakers:


Use of Copernicus Data Space Ecosystem Data and Services in the Common Agricultural Policy Paying Agency of Castile and Leon


  • Alberto Gutierrez García – Instituto Tecnológico Agrario de Castilla y León

ESA WorldCereal: Effortless Crop Mapping with OpenEO and CDSE


  • Kristof Van Tricht and Jeroen Degerickx - VITO Remote Sensing

CDSE and Euro Data Cube


  • Gunnar Brandt - Brockmann Consult

The Space Planter Dashboard - Earth observation data in support of agriculture


  • Kostas Gružas, Ričardas Mikelionis, and Marius Survila - Statistics Lithuania, Eurostat Hackathon team

Interactive panel session


  • CDSE Team and User Community
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.96/0.97)

Session: A.02.02 Terrestrial and Freshwater Biodiversity - PART 1

Preserving the integrity and health of natural ecosystems and the biodiversity they host is crucial not only for the vital services they provide to sustain human well-being, but also because natural ecosystems with a high degree of integrity and diversity tend to exhibit elevated levels of productivity and resilience. The importance of safeguarding biodiversity is increasingly recognised in many Multilateral Environmental Agreements (MEAs), which all place great emphasis on the sustainable management, restoration and protection of natural ecosystems.

The pivotal role of ecosystems in maintaining ecological balance and supporting human well-being is a unifying theme in MEAs. Noting that, despite ongoing efforts, biodiversity is deteriorating worldwide and that this decline is projected to continue under business-as-usual scenarios, Parties to the Convention on Biological Diversity (CBD) adopted the Kunming-Montreal Global Biodiversity Framework (GBF) at the 15th Conference of the Parties in December 2022. The GBF represents the most ambitious and transformative agenda yet to halt biodiversity loss by 2030 and allow for the recovery of natural ecosystems, ensuring that by 2050 all the world’s ecosystems are restored, resilient, and adequately protected. In Europe, the EU Biodiversity Strategy for 2030 aims to put Europe’s biodiversity on the path to recovery by 2030 by addressing the main drivers of biodiversity loss.

The emergence of government-funded satellite missions with open and free data policies and long-term continuity of observations, such as the Sentinel missions of the European Copernicus Programme and the US Landsat programme, offers an unprecedented ensemble of satellite observations which, together with very high resolution sensors from commercial vendors, in-situ monitoring systems and fieldwork, enables the development of satellite-based biodiversity monitoring systems. The combined use of different sensors opens pathways for a more effective and comprehensive use of Earth Observation in the functional and structural characterisation of ecosystems and their components (including species and genetic diversity).

In this series of biodiversity sessions, we will present and discuss the recent scientific advances in the development of EO applications for the monitoring of the status of and changes to terrestrial and freshwater ecosystems, and their relevance for biodiversity monitoring, and ecosystem restoration and conservation. The development of RS-enabled Essential Biodiversity Variables (EBVs) for standardised global and European biodiversity assessment will also be addressed.

A separate LPS25 session on "Marine Ecosystems" is also organised under the Theme “1. Earth Science Frontiers - 08 Ocean, Including Marine Biodiversity”.

Topics of interest include (but are not limited to):
•Characterisation of the change patterns in terrestrial and freshwater biodiversity.
•Integration of field and/or modeled data with remote sensing to better characterize, detect changes to, and/or predict future biodiversity in dynamic and disturbed environments on land and in the water.
•Use of Earth Observation for the characterisation of ecosystem functional and structural diversity, including the retrieval of ecosystem functional traits, (e.g., physiological traits describing the biochemical properties of vegetation) and morphological traits related to structural diversity.
•Sensing ecosystem function at diel scale (e.g. using geostationary satellites and exploiting multiple individual overpasses in a day from low Earth orbiters and/or paired instruments, complemented by subdaily ground-based observations).
•Assessment of the impacts of the main drivers of changes (i.e., land use change, pollution, climate change, invasive alien species and exploitation of natural resources) on terrestrial and freshwater ecosystems and the biodiversity they host.
•Understanding of climate-biodiversity interactions, including the impact of climate change on biodiversity and the capacity of species to adapt.
•Understanding of the evolutionary changes of biodiversity and better predictive capabilities on biodiversity trajectories.
•Understanding of the ecological processes of ecosystem degradation and restoration.
•Multi-sensor approaches to biodiversity monitoring (e.g. multi-sensor retrievals of ecosystem structural and functional traits).
•Validation of biodiversity-relevant EO products (with uncertainty estimation).
•Algorithm development for RS-enabled Essential Biodiversity Variables (EBVs) on terrestrial and freshwater ecosystems.
•Linking EO with crowdsourced information for biodiversity monitoring.

Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.96/0.97)

Presentation: Global Environmental Drivers of 3D Structural Biodiversity Traits

Authors: Atticus Stovall, Dr Shukhrat Shokirov, Dr Xin Xu, Dr John Armston, Dr Lisa Patrick Bentley, Kim Calders, Professor Mathias Disney, Dr Lola Fatoyinbo
Affiliations: NASA Goddard Space Flight Center, University of Maryland, TIIAME National Research University, Sonoma State University, Ghent University, University College London
Conservation of forest biodiversity at a global scale is directly dependent on understanding the factors influencing habitat structure. Yet, the standard metrics for assessing biodiversity (Essential Biodiversity Variables) do not capture 3D ecosystem complexity and are constrained to simplistic measures of ecosystem structure (e.g. canopy cover or tree height). Understanding the factors influencing more complex tree architectural traits in forests will support mapping and monitoring of forest biodiversity and the effectiveness of conservation efforts. Here, we present recent findings from a community-built global database containing thousands of ground-based laser scanning plots (the Global Terrestrial Laser Scanning Database; GTLS) from which we derive tree-level and plot-level architectural traits important for biodiversity, or structural biodiversity traits (SBTs), across environmental gradients. The ultimate aim of the GTLS database is to address a clear lack of 3D tree-level trait data at a global scale. We now have an improved automatic trait extraction pipeline enabling tree extraction and modeling for thousands of trees per study site, providing a standardized, quality-controlled, open-source method that can be implemented across the scientific community. Currently, our database has ~20,000 3D trees with more than 10 SBTs per tree, focused on characterizing the structural signature of forest biodiversity. We will provide the newest results from an extensive laser scanning field campaign in South Africa, highlighting some preliminary trends in convergent and divergent allometric scaling relationships in dry forest ecosystems around the globe. In addition, we will discuss recently funded work that will dramatically improve our regional sensitivity to drivers of 3D biodiversity traits in Mediterranean forests.
The focus on environmental drivers of 3D biodiversity traits will enable us to further understand future climate impacts on forest ecosystem biodiversity. The Global TLS Database is becoming a critical means of improving our fundamental understanding of drivers of tree-level architecture and forest biodiversity, while directly supporting conservation efforts. With broad community support for the GTLS database, we aim to directly inform EO observations and mapping of global forest biodiversity traits.
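The allometric scaling relationships mentioned above reduce, in their simplest form, to fitting a power-law exponent in log-log space. A minimal sketch on synthetic tree data (not GTLS measurements, with made-up parameter values) might look like:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical TLS-derived tree table: height (m) and crown diameter (m),
# generated from an assumed power law crown = 0.8 * height**0.7 plus noise.
height = rng.uniform(5, 40, 300)
crown = 0.8 * height ** 0.7 * np.exp(0.05 * rng.normal(size=300))

# Allometric fit: crown = a * height**b  <=>  log(crown) = log(a) + b*log(height),
# so an ordinary least-squares line in log-log space recovers the exponent b.
b, log_a = np.polyfit(np.log(height), np.log(crown), 1)
```

Comparing fitted exponents across sites is one simple way to quantify the convergent or divergent scaling the abstract refers to.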
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.96/0.97)

Presentation: Spaceborne and In-Situ Remote Sensing for Monitoring Enhanced Forest Structural Complexity Promoting Biodiversity in Central European Forests

Authors: Patrick Kacic, Dr. Ursula Gessner, Dr. Christopher R. Hakkenberg, Stefanie Holzwarth, Prof. Dr. Jörg Müller, Dr. Kerstin Pierick, Prof. Dr. Dominik Seidel, Dr. Frank Thonfeld, Dr. Michele Torresani, Claudia Kuenzer
Affiliations: University of Würzburg, Institute of Geography and Geology, Department of Remote Sensing, German Aerospace Center (DLR), German Remote Sensing Data Center (DFD), School of Informatics, Computing & Cyber Systems, Northern Arizona University, Field Station Fabrikschleichach, Biocenter, Department of Animal Ecology and Tropical Biology, University of Würzburg, Bavarian Forest National Park, Department for Spatial Structures and Digitization of Forests, Faculty of Forest Sciences, Georg-August-Universität Göttingen, Department for Silviculture and Forest Ecology of the Temperate Zones, Faculty of Forest Sciences, Georg-August-Universität Göttingen, Free University of Bolzano/Bozen, Faculty of Agricultural, Environmental and Food Sciences
Enhancing the structural complexity of forests has been identified as a key management technique to increase biodiversity, support multifunctionality and strengthen resilience towards disturbances. In the context of the interdisciplinary research project BETA-FOR, experimental silvicultural treatments with increased diversity of light structures (distributed and aggregated cuttings) and deadwood features (no deadwood, downed and standing structures, habitat trees) have been implemented in central European broad-leaved forests. The standardized treatments, which mimic old-growth structures and accelerate their development, enable a novel understanding of human-forest interactions, i.e. how monitoring of forest management towards structural complexity can be implemented. For continuous, cost-effective, and across-scale monitoring of forest structure, remote sensing offers complementary perspectives to local measurements. In the present study, multi-source remote sensing analyses comprising in-situ (mobile and terrestrial laser scanning) and spaceborne data (Sentinel-1; Sentinel-2; Global Ecosystem Dynamics Investigation, GEDI) were conducted to investigate enhanced forest structural complexity in BETA-FOR treatments. More precisely, changes in forest structural complexity following the implementation of the experimental silvicultural treatments were characterized based on satellite time-series. Bayesian time-series analyses (BEAST, Bayesian Estimator of Abrupt change, Seasonal change, and Trend) of Sentinel-1 and Sentinel-2 metrics (combination of spectral indices and spatial statistics) demonstrate the identification of enhanced structural complexity in aggregated treatments comprising no or downed deadwood structures (stumps, logs, crowns), as well as standing deadwood structures (snags, habitat trees).
Furthermore, we integrated in-situ measurements from mobile and terrestrial laser scanning to assess relationships among spaceborne and in-situ indicators of forest structural complexity. We found strong correlations between in-situ and spaceborne data on structural complexity after carrying out different analyses (bi- and multi-variate correlations, unsupervised clustering). Our findings demonstrate the great potential of multi-source remote sensing to monitor forest structure along different gradients (light conditions, deadwood structures) of enhanced structural complexity. We identified several indicators of forest structural complexity from spaceborne remote sensing that correspond closely to in-situ remote sensing measurements. These forest structural complexity indicators have the potential to guide adaptive forest management towards structural complexity, in line with the EU Biodiversity Strategy for 2030, from local (in-situ) to global (spaceborne) observations.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.96/0.97)

Presentation: Mapping individual tree species using high-resolution sensors and deep learning

Authors: Daniel Ortiz-Gonzalo, Dr. Dimitri Gominski, Dr. Martin Brandt, Prof. Dr. Rasmus
Affiliations: University Of Copenhagen
Accurate classification of tree species from remote sensing represents a major advancement in ecological monitoring. Traditional large-scale mapping efforts have primarily relied on species distribution models and pixel-based analyses, which are often constrained by the lack of ground-truth data and the limitations of coarse-resolution imagery. These approaches struggle to capture critical structural details—such as canopy edges and other nuanced variations in visual traits—that are essential for precise tree species identification. The integration of high-resolution, multi-sensor remote sensing with advanced deep learning techniques for extracting high-level semantic information provides tailored solutions to these challenges, enabling a more accurate mapping of tree species across diverse ecosystems and land uses. In this study, we develop a tree species classifier at the individual tree level by integrating high-resolution aerial imagery, airborne LiDAR, and National Forest Inventory (NFI) data from Spain. Specifically, we use 25-cm resolution orthophotos from the Spanish National Plan of Aerial Orthophotography (PNOA) and airborne LiDAR data with a density of 3-5 points per square meter. A key challenge lies in the misalignment between tree positions recorded in the NFI data and the visual features in aerial imagery, complicating direct object detection training. To address this, we decouple the detection and classification tasks: detection models are trained using labeled data from other countries, while the Spanish dataset is dedicated to species classification. To enhance the accuracy of tree matching, we incorporate NFI-derived traits such as tree height and align them with canopy height models generated from LiDAR data. Additionally, we leverage recent advances in deep semi-supervised learning to enhance species recognition, reducing the reliance on extensive labeled data and ensuring scalability and efficiency.
Our individual tree-level approach outperforms traditional pixel and patch-level analyses in estimating tree diversity indices. Metrics such as species richness, Shannon index, Simpson index, and Pielou’s Evenness are captured more accurately across diverse ecosystems and land use systems, including forests, agriculture, and urban areas. This study lays the groundwork for a national tree species map at the individual tree level, offering an unprecedented level of detail for monitoring tree diversity and advancing ecological research.
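The diversity metrics named above are standard quantities. As an illustrative sketch (not the authors' pipeline), they can be computed from a list of per-tree species labels like so:

```python
import math
from collections import Counter

def diversity_indices(species):
    """Species richness, Shannon H', Simpson (1 - D) and Pielou evenness
    from a list of per-tree species labels (one label per detected crown)."""
    counts = Counter(species)
    n = sum(counts.values())
    props = [c / n for c in counts.values()]
    richness = len(counts)
    shannon = -sum(p * math.log(p) for p in props)     # Shannon index H'
    simpson = 1.0 - sum(p * p for p in props)          # Gini-Simpson 1 - D
    pielou = shannon / math.log(richness) if richness > 1 else 0.0
    return richness, shannon, simpson, pielou

# Hypothetical detections in one plot.
trees = ["pine"] * 5 + ["oak"] * 3 + ["beech"] * 2
r, h, s, j = diversity_indices(trees)
```

Computing these per individual crown, rather than per pixel, is what lets the tree-level approach avoid mixed-pixel effects when aggregating to plot or landscape scale.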
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.96/0.97)

Presentation: A Dataset on the Structural Diversity of European Forests

Authors: Gonzalo Oton Azofra, Marco Girardello, Matteo Piccardo, Mark Pickering, Agata Elia, Guido Ceccherini, Mariano Garcia, Mirco Migliavacca, Alessandro Cescatti
Affiliations: European Commission, Joint Research Centre (JRC), The University of Dublin, Trinity College Dublin, Department of Geography, Consultant of European Commission, Joint Research Centre, European Space Research Institute, ESA-ESRIN, Universidad de Alcala, Department of Geology, Geography and the Environment, Environmental Remote Sensing Research Group
Forest structural diversity, defined as the heterogeneity of canopy structural elements in space, is an important axis of functional diversity and is central to understanding the relationship between canopy structure, biodiversity, and ecosystem functioning. Despite the recognised importance of forest structural diversity, the development of specific data products has been hindered by the challenges associated with collecting information on forest structure over large spatial scales. However, the advent of novel spaceborne LiDAR sensors like the Global Ecosystem Dynamics Investigation (GEDI) is now revolutionising the assessment of forest structural diversity by providing high-quality information on forest structural parameters with quasi-global coverage. Whilst the availability of GEDI data and the computational capacity to handle large datasets have opened up new opportunities for mapping structural diversity, GEDI only collects sparse measurements of vegetation structure. Continuous information on forest structural diversity over large spatial domains is needed for a variety of applications. The aim of this study was to create wall-to-wall maps of canopy structural diversity in European forests using a predictive modelling framework based on machine learning. We leveraged multispectral and Synthetic Aperture Radar (SAR) data to create a series of input features that were related to eight different structural diversity metrics calculated using GEDI. The models proved to be robust, indicating that active radar and passive optical data can effectively be used to predict structural diversity. Our dataset finds applications in a range of disciplines, including ecology, hydrology, and climate science. As our models can be rerun regularly as new images become available, the dataset can be used to monitor the impacts of climate change and land use management on forest structural diversity.
In conclusion, we generated a spatially-explicit dataset on eight forest structural diversity metrics at multiple resolutions (10 km, 5 km, 1 km) encompassing temperate, Mediterranean, and continental regions of Europe.
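A toy version of the wall-to-wall workflow described above, with synthetic data and a plain least-squares regression standing in for the study's machine-learning models and real GEDI/Sentinel features, could look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training table: 200 sparse GEDI footprints, each with
# co-located predictor features (stand-ins for Sentinel-2 spectral stats
# and Sentinel-1 backscatter) and a GEDI-derived diversity metric target.
n = 200
X = rng.normal(size=(n, 3))                  # e.g. [ndvi_mean, ndvi_std, vh_db]
true_w = np.array([0.2, 1.5, -0.4])          # synthetic "ground truth" weights
y = X @ true_w + 0.05 * rng.normal(size=n)   # target structural diversity metric

# Fit a plain linear model (the study uses machine learning; least squares
# keeps this sketch dependency-free).
Xb = np.c_[X, np.ones(n)]                    # add intercept column
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

# "Wall-to-wall" step: apply the fitted model to every pixel of a raster
# holding the same features, turning sparse footprints into a continuous map.
raster = rng.normal(size=(50, 50, 3))
pred_map = (raster.reshape(-1, 3) @ w[:3] + w[3]).reshape(50, 50)
```

The key design point, which carries over to any regressor, is that the model is trained only where GEDI footprints exist but predicts everywhere the optical/SAR features are available.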
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.96/0.97)

Presentation: Biodiversity from Space: Understanding Large-Scale Patterns of Ecosystem Structure and Diversity with Remote Sensing

Authors: Fabian D Schneider, Ryan P Pavlick, Ting Zheng, Antonio Ferraz, Natalie Queally, Ethan Shafron, Morgan Dean, Laura Berman, Zhiwei Ye, Giulia Tagliabue, Philip A Townsend
Affiliations: Aarhus University, NASA Jet Propulsion Laboratory, California Institute of Technology, NASA, University of Wisconsin-Madison, University of Montana, University of California Los Angeles, University of Milano-Bicocca
Biodiversity is under pressure from human activity and climate change, yet monitoring and predicting these changes globally is challenging due to knowledge gaps in biodiversity’s spatial and temporal dynamics. New remote sensing instruments offer large-scale measurements of plant canopy structure, functional traits, and ecosystem functioning from space. For example, spaceborne lidar, like the GEDI instrument, provides detailed views of plant canopy structure and diversity across landscapes. I will present results and challenges from mapping forest structural diversity in California and Central Africa using GEDI, at scales from 1 to 25 km, covering Mediterranean and tropical forests. We found GEDI’s RH98, Cover, and FHD metrics were most effective for capturing canopy height, density, and layering. GEDI generally captured canopy structure well in closed forests on flat terrain, though challenges arose in open forests and complex terrain. We identified high structural diversity in mid-elevation and coastal forests in the US and in volcanic ranges and forest-savanna transitions in Africa. GEDI revealed patterns of structural diversity that aligned with ecological processes, including the influence of wildfire in the US and topographic variation in Africa. In addition to ecosystem structure, we developed methods using imaging spectroscopy to map leaf biochemical and biophysical traits, revealing patterns of plant functional diversity. Testing with airborne data from AVIRIS Classic across the Sierra Nevada mountains, we assessed the potential of spaceborne instruments like EnMAP, PRISMA, and the future NASA SBG and ESA CHIME missions. I will present results that give insights into mapping foliar traits at large spatial scale and the role of trait-trait relationships in mapping plant functional diversity.
We found that there are at least three relevant functional axes of variation that should be represented in functional diversity analyses, and that the relationship among those axes and functional plant strategies is context dependent. We also found that patterns of functional diversity were related to elevation gradients and disturbance patterns, especially related to wildfire. Combining these new measurements with ground-based data will help to better understand biodiversity patterns and change over time. I will present examples of new analyses of remotely sensed patterns of plant functional and structural diversity, and their relationship to other dimensions of biodiversity and ecosystem functions, that demonstrate the value and potential of new remote sensing instruments and methods for biodiversity monitoring from space.
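Of the GEDI metrics mentioned above, FHD (foliage height diversity) is essentially the Shannon entropy of the vertical plant-area profile. A simplified illustrative implementation (not the operational GEDI algorithm) is:

```python
import numpy as np

def fhd(pai_profile):
    """Foliage height diversity: Shannon entropy of the fraction of total
    plant area found in each vertical height bin."""
    pai = np.asarray(pai_profile, dtype=float)
    p = pai[pai > 0] / pai.sum()          # proportion of plant area per bin
    return float(-(p * np.log(p)).sum())  # Shannon entropy over the bins

single_layer = [0.0, 0.0, 5.0, 0.0]   # all foliage in one bin: no layering
multi_layer = [1.0, 1.0, 1.0, 1.0]    # foliage spread evenly over four bins
```

A single-layer canopy yields an FHD of zero, while foliage spread evenly across bins maximises it, which is why FHD captures the "layering" dimension alongside RH98 (height) and Cover (density).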
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.96/0.97)

Presentation: Benchmarking plant functional diversity estimation from space with a Biodiversity Observing System Simulation Experiment

Authors: Javier Pacheco-Labrador, Ulisse Gomarasca, Daniel E. Pabon-Moreno, Wantong Li, Martin Jung, Dr Gregory Duveiller
Affiliations: Spanish National Research Council, Max Planck Institute for Biogeochemistry
As global and regional vegetation diversity loss threatens essential ecosystem services under climate change, monitoring biodiversity dynamics is crucial for evaluating its role and providing insights into climate adaptation and mitigation. However, biodiversity monitoring is resource-intensive and unable to provide the coverage and resolution necessary to understand biodiversity responses to environmental changes. In this context, remote sensing (RS) has emerged as a promising means to assess long-term and large-scale biodiversity dynamics. However, results in the literature are conflicting and reveal a strong effect of spatial resolution on the estimation of different vegetation diversity metrics. Filling these methodological gaps is hampered by the lack of ad hoc, consistent, global, and spatially matched ground diversity measurements that would enable testing and validating generalizable methodologies. To address this problem, we have developed the Biodiversity Observing System Simulation Experiment (BOSSE). BOSSE simulates synthetic landscapes featuring communities of various vegetation species, the seasonality of vegetation traits in response to meteorology and environmental factors, and the corresponding remote sensing imagery linked to the traits via radiative transfer theory. Thereby, BOSSE enables users to evaluate the capability of different methods to estimate plant functional diversity (PFD) from RS. BOSSE simulates hyperspectral reflectance factors (R), sun-induced chlorophyll fluorescence (SIF), and land surface temperature (LST). The simulated images can be further convolved to the bands of specific RS missions. In this work, we use BOSSE to answer five methodological questions regarding the quantification of PFD with RS. We found BOSSE a valuable tool for evaluating different methods and shedding light on the best approaches and the limitations of RS to infer PFD.
In particular, we learned that: 1) at the landscape scale, diversity indices should be computed over small windows and averaged rather than computed over large windows; 2) leaf area index (LAI) is a better proxy of species abundance than the surface area covered by each species; 3) optical traits (traits estimated from RS) and hyperspectral reflectance are likely the best estimators of PFD; 4) PFD estimation uncertainty peaks at the phenological minimum (low LAI values); and 5) PFD estimation is strongly affected when RS pixels combine signals from different species, but correlations with PFD are robust if field data are gridded to the pixel size, as long as pixels are less than three times larger than plants. In summary, we show that BOSSE is a valuable tool for testing novel methods for RS monitoring of plant diversity, facilitating advances in this new area of research.
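Finding (1), computing diversity indices over small windows and averaging them rather than using one large window, can be illustrated on a toy trait map; the per-window standard deviation used below is a deliberately simple stand-in for a real diversity index:

```python
import numpy as np

def windowed_diversity(trait_map, win=4):
    """Average of a simple per-window diversity index (here the standard
    deviation of a trait) over non-overlapping win x win windows."""
    h, w = trait_map.shape
    vals = [trait_map[i:i + win, j:j + win].std()
            for i in range(0, h - win + 1, win)
            for j in range(0, w - win + 1, win)]
    return float(np.mean(vals))

rng = np.random.default_rng(42)

# Toy landscape: two homogeneous species patches with distinct trait means
# (1.0 on the left half, 3.0 on the right) plus small within-patch variation.
trait = np.ones((32, 32))
trait[:, 16:] = 3.0
trait += 0.1 * rng.normal(size=(32, 32))

small_win = windowed_diversity(trait, win=4)  # averaged small-window index
large_win = float(trait.std())                # single large-window estimate
```

On this landscape the averaged small-window value reflects only within-patch variation, while the single large-window estimate is dominated by the between-patch contrast, illustrating why the window size changes what the index actually measures.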
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 1.31/1.32)

Session: C.01.17 Creating the Perfect Bouquet of Innovation: Designing the Next EO Technology Demonstration Mission - Part 2

What are the next generation of promising technologies to be demonstrated in space? Are you working on new technologies for space demonstration? How can these developments contribute to strengthening Earth Observation efforts worldwide?

This session is designed to gather ideas for potential technology demonstration missions that could be developed within three years, with an estimated launch in 2030. The session will include a series of activities combining individual and group efforts, applying a design-thinking approach and creative facilitation methods to foster unconventional ideas and maximize innovation.
The goal is to collect a broad range of ideas and refine them into realistic, feasible mission concepts within the given timeline.

What happens after?
The top ideas will be presented on Friday, 27th June, and reviewed by a panel of ESA experts.

Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall F1)

Session: C.03.13 Sentinel-1C Preliminary User Assessment: Early Insights and Feedback from the Community

The Sentinel-1C spacecraft is scheduled for launch in December 2024. Considering the context in which the Sentinel-1 mission is running, ESA intends (if spacecraft commissioning allows) to increase the sensing capacity beyond the commissioning needs and to release the data to users before the end of the In-Orbit Commissioning Phase planned for May 2025.

At the time of the LPS 2025 symposium, users will have had access to three months of pre-qualified Sentinel-1C data. This session will provide an early evaluation of its usability, performance, and added value as experienced by the user community.

Following the conclusion of its in-orbit commissioning (IOC) phase in late May 2025, the mission’s new capabilities and datasets will be assessed by initial users from various application domains, offering valuable insights into its impact on operational and scientific workflows.

This session will highlight the feedback and experiences of pioneering users who have accessed and utilized Sentinel-1C data in the months following its release. Presentations will address key aspects of the mission, including:

- Data Quality and Continuity: Initial observations on the consistency and reliability of Sentinel-1C data compared to earlier mission units, with a focus on calibration, noise characteristics, and cross-mission compatibility.

- Operational Integration: Insights from early adopters on integrating Sentinel-1C into existing processing pipelines, highlighting challenges, lessons learned, and potential improvements.

- Preliminary Use Cases: Demonstrations of how Sentinel-1C data is being applied in fields such as disaster response, agriculture, forest monitoring, urban analysis, and climate studies.

The session will provide a forum for the Earth observation community to share preliminary experiences with Sentinel-1C, identify early successes, and discuss the challenges associated with onboarding a new spacecraft unit within the Sentinel-1 constellation.

Presentations and speakers:


Preliminary AIS-fused satellite ship detection capabilities by Sentinel-1C


  • Carl Torbjorn Stahl - EGEOS

On the validation and assimilation of Sentinel-1C wave data in the operational wave model MFWAM


  • Lotfi Aouf - Meteo-France

Early results of Sentinel-1C one-day radar interferometry for grounding line delineation in polar ice


  • Eric Rignot - Univ. California Irvine

Sentinel-1C boosting Near Real Time Ice Products


  • Keld Quistgaard - DMI

Early data uptake in the agriculture, forestry and Ukraine war context


  • Guido Lemoine - JRC

Tuesday 24 June 11:30 - 13:00 (Room 0.49/0.50)

Session: A.08.07 Ocean Health including marine and coastal biodiversity - PART 2

Ocean Health, defined as the Ocean's capacity to continuously provide services for humans in a sustainable way while preserving its intrinsic well-being and its biodiversity, is under considerable threat. Decades of pollution, overexploitation of resources and damaging use of coastal environments have severely degraded the condition of both coastal and offshore marine ecosystems, compromising the Ocean's capacity to provide its services. This degradation is being further exacerbated by Climate Change, whose effects on the Oceans are numerous. The many sensors on board currently operating satellites (altimeters, radiometers, scatterometers, synthetic aperture radars, spectrometers) are highly relevant to Ocean Health and Biodiversity studies, providing continuous, global and repetitive measurements of many key parameters of the physical (temperature, salinity, sea level, currents, wind, waves) and biogeochemical (Ocean Colour related variables) marine environment, including high-resolution mapping of key marine habitats (coral reefs, kelp forests, seagrass,…). In this context, this session welcomes contributions demonstrating how satellite data can be used to better monitor Ocean Health, including the retrieval of Essential Biodiversity Variables and the estimation of the many different stressors, including marine litter, impacting Ocean Health and marine and coastal biodiversity. The capability of single sensors is amplified when they are used in synergy with other space-based and in-situ measurements, or together with numerical modelling of the physical, biogeochemical and ecological ocean state, so the session encourages multi-sensor and multi-disciplinary studies. The session is also open to contributions demonstrating how EO-derived products can be used to support management actions to restore and preserve Ocean Health and the marine and coastal biodiversity.

Tuesday 24 June 11:30 - 13:00 (Room 0.49/0.50)

Presentation: CIAO: A Machine-Learning Algorithm for Mapping Arctic Ocean Chlorophyll-a from Space

Authors: Maria Laura Zoffoli, Vittorio Brando, Gianluca Volpe, Luis González Vilas, Bede Ffinian Rowe Davies, Robert Frouin, Jaime Pitarch, Simon Oiry, Jing Tan, Dr Simone Colella, Christian Marchese
Affiliations: Consiglio Nazionale delle Ricerche, Istituto di Scienze Marine (CNR-ISMAR), 00133, Institut des Substances et Organismes de la Mer, ISOMer, Nantes Universite, UR 2160, F-44000, Scripps Institution of Oceanography, University of California San Diego, La Jolla
The Arctic Ocean (AO) is warming faster than any other region on Earth, influencing phytoplankton communities and potentially triggering cascading effects throughout the marine trophic web, with global climate repercussions. Despite its critical importance, limited sampling in this vast and challenging oceanic region has hindered understanding of these changes. Ocean color (OC) remote sensing, with over 26 years of continuous daily acquisitions, is a crucial tool to bridge these knowledge gaps, offering insights into long-term trends and seasonal variability in phytoplankton abundance, as indexed by Chlorophyll-a concentration (Chl), at a Pan-Arctic scale. However, current algorithms for retrieving Chl from satellite data in the AO have shown significant limitations, including high levels of uncertainty and inconsistent accuracy across different regions. These inaccuracies in Chl retrievals propagated further, affecting primary production estimates, climate and biogeochemical modeling. In this study, we quantified uncertainties of seven existing algorithms using harmonized, merged multi-sensor satellite remote sensing reflectance (Rrs) data from the ESA Climate Change Initiative (CCI) spanning 1998–2023. These estimations provide environmental modelers with more effective tools for understanding and managing the propagation of uncertainties. The existing algorithms exhibited varying performance, with Mean Absolute Differences (MAD) ranging from 0.756 to 4.209 mg m-3. To improve upon these results, we developed CIAO (Chlorophyll In the Arctic Ocean), a machine learning-based algorithm specifically designed for AO waters and trained with satellite Rrs data. The CIAO algorithm uses Rrs at four spectral bands (443, 490, 510 and 560 nm) and Day-Of-Year (DOY) to account for seasonal variations in the bio-optical relationships. 
CIAO significantly outperformed the seven existing models, achieving a MAD of 0.519 mg m-3, thereby improving Chl retrievals by at least 30% compared to the best-performing existing algorithm. Furthermore, CIAO produced consistent spatial patterns and provided more reliable Chl estimates in coastal waters, where other algorithms tend to overestimate. This enhanced accuracy improves the tracking of seasonal variability at the Pan-Arctic scale. By improving the precision of satellite-derived Chl data, the CIAO algorithm enables more accurate assessments of the ecological impacts of climate change in the AO, contributing to more robust ecological and climate projections.
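The input design described above — Rrs at 443, 490, 510 and 560 nm plus day-of-year — can be sketched as follows. The abstract does not specify CIAO's model family, so the random-forest regressor and the synthetic data below are assumptions for illustration only; the MAD metric matches the one quoted in the abstract.

```python
# Illustrative sketch (not the CIAO implementation): a regressor mapping
# Rrs at four bands plus day-of-year (DOY) to Chl, evaluated with the
# Mean Absolute Difference (MAD) metric used in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Synthetic training data: Rrs(443, 490, 510, 560) in sr^-1 and DOY.
n = 500
rrs = rng.uniform(0.001, 0.02, size=(n, 4))
doy = rng.integers(1, 366, size=(n, 1))
X = np.hstack([rrs, doy])
# Toy "truth": Chl loosely tied to a green-to-blue band ratio (mg m^-3).
chl = 0.5 + 10 * (rrs[:, 3] / rrs[:, 0]) ** 1.5 + rng.normal(0, 0.1, n)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, chl)

pred = model.predict(X)
mad = np.mean(np.abs(pred - chl))  # Mean Absolute Difference, mg m^-3
print(f"MAD = {mad:.3f} mg m^-3")
```

Including DOY as a feature, as the abstract describes, is what lets a single model adapt its bio-optical relationship across the Arctic seasonal cycle.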

Tuesday 24 June 11:30 - 13:00 (Room 0.49/0.50)

Presentation: Advancing Ecosystem-Based Management With Satellite-Based Habitat Mapping and Transfer Learning: Insights From the Horizon Europe EFFECTIVE Project

Authors: Gyde Krüger, Lisbeth Tangaa Nielsen, Marie Lund Larsen, Silvia Huber
Affiliations: DHI A/S
Ocean health faces critical challenges due to pollution, habitat destruction, and climate change. The EU-funded EFFECTIVE-project (https://effective-euproject.eu/) has the primary objective of Enhancing social well-being and economic prosperity by reinforcing the eFFECTIVEness of protection and restoration management in Mediterranean Marine Protected Areas. The four-year project aims to develop a comprehensive scientific knowledge base and practical guidelines combining science, technological nature-based solutions, digitalisation and social impacts for the application of ecosystem-based management to promote large-scale marine protected areas establishment in the European seas. One aspect of the project focuses on satellite-based habitat mapping of shallow coastal areas. These areas, including coral reefs and seagrass meadows, provide numerous benefits to local communities and the global environment, including storm protection, food security, water quality regulation, recreation and supporting rich biodiversity. Protecting and restoring these ecosystems is essential for combating climate change and ensuring healthy coastal environments. Satellite-based habitat mapping is relevant because it provides comprehensive, regular information that is crucial for monitoring and managing marine ecosystems cost-efficiently at large spatial scale, ensuring accurate assessments and informed decision-making for conservation efforts. While advanced machine learning (ML) methods are increasingly used for satellite-based habitat mapping, the diversity and complexity of these ecosystems challenge the performance of generic models for large scale applications across Europe. With experience gained in Scandinavia (1) and Southeast Asia (2), we have further enhanced our approach and developed a transfer learning method for efficient scaling of the satellite-based habitat mapping. 
A Convolutional Neural Network was trained on multi-temporal optical satellite imagery and metocean data for four pilot sites in the Mediterranean and then applied to 10-meter Copernicus Sentinel-2 imagery to map critical shallow habitats across the Mediterranean. The developed approach provides a cost-effective tool for regular monitoring of these critical ecosystems. In this contribution, we will briefly introduce the EFFECTIVE-project and present our marine habitat mapping approach, highlighting ML model development and results, share lessons learned, and provide an outlook on future developments and next steps of the activity. 1) https://setac.onlinelibrary.wiley.com/doi/10.1002/ieam.4493 2) https://oceaninnovationchallenge.org/cohort-3-mpas-area-based-management-and-blue-economy#cbp=/ocean-innovations/mapping-and-monitoring-ecosystems-scale-copernicus-sentinel-2-imagery-tropical

Tuesday 24 June 11:30 - 13:00 (Room 0.49/0.50)

Presentation: Improving the Prediction of Ocean Ecosystem Indicators by Assimilation of Satellite Observations Into a Biogeochemical Model

Authors: Lars Nerger, Sophie Vliegen, Yuchen Sun, Anju Sathyanarayanan
Affiliations: Alfred Wegener Institute, Helmholtz Center for Polar and Marine Research
To improve the prediction of ocean ecosystem indicators related to the biogeochemistry and nutrients, satellite observations of sea surface temperature and chlorophyll are assimilated into an ocean biogeochemical model. We focus on the North Sea and Baltic Sea, utilizing the operational model system of the Monitoring and Forecasting Center for the Baltic Sea of the Copernicus Marine Service (CMEMS), which consists of the ocean model NEMO coupled to the biogeochemical model ERGOM running at a resolution of 1.8 km. To incorporate observational data, the model is coupled to data assimilation functionality provided by the Parallel Data Assimilation Framework (PDAF, https://pdaf.awi.de). We leverage ensemble data assimilation, in which the uncertainty of the model state is estimated by a dynamic ensemble of 30 model state realizations. The uncertainty in the biogeochemical fields is represented by perturbing uncertain process parameters. The satellite observations of sea surface temperature and chlorophyll from Sentinel satellites, provided via CMEMS, are assimilated daily. The data assimilation lets the model learn from the satellite data and directly improves predictions of both observed fields, temperature and chlorophyll. It also influences the other model variables and ecosystem indicators, which are less easily validated due to limited independent observations. The assimilation also reduces the uncertainties of the indicators as estimated by the spread of the ensemble of model states. We assess the impact of the assimilation on the forecast skill with a focus on the biogeochemical variables. For chlorophyll we find an improved forecast skill for up to 14 days, which also relates to an improved representation of the phytoplankton community simulated by ERGOM. In addition, ecosystem indicators, like trophic efficiency, pH, phytoplankton community structure, and oxygen are analyzed.
Here particular changes are visible in the plankton community structure and the relative abundance of zooplankton, i.e. trophic efficiency. In addition, effects on the oxygen and nutrient concentrations are visible. Apart from the scientific results, the program code for the assimilation into the NEMO model, as well as NEMO and PDAF are available as open source software (https://pdaf.awi.de provides links to PDAF and the NEMO-PDAF code). This supports possibilities for further applications and cooperation as well as operationalization.
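The ensemble analysis step underlying this kind of system can be illustrated with a minimal sketch. This is not the PDAF/NEMO-ERGOM implementation: the stochastic (perturbed-observations) update, the scalar state and all numerical values are simplifying assumptions; only the ensemble size of 30 comes from the abstract.

```python
# Minimal sketch of a stochastic ensemble Kalman analysis step for one
# observed scalar (e.g. chlorophyll at a single grid point). It shows
# the general idea behind ensemble assimilation frameworks such as PDAF.
import numpy as np

rng = np.random.default_rng(1)

n_ens = 30                           # ensemble size, as in the abstract
ens = rng.normal(5.0, 1.0, n_ens)    # prior ensemble of model states
obs, obs_err = 6.0, 0.5              # satellite observation and its std

# Scalar Kalman gain from ensemble variance and observation error variance.
var_f = ens.var(ddof=1)
gain = var_f / (var_f + obs_err**2)

# Perturbed-observations update: each member is nudged toward a noisy
# copy of the observation, which also shrinks the ensemble spread.
obs_pert = obs + rng.normal(0.0, obs_err, n_ens)
ens_a = ens + gain * (obs_pert - ens)

print(ens.mean(), ens_a.mean())            # mean moves toward the obs
print(ens.std(ddof=1), ens_a.std(ddof=1))  # spread (uncertainty) shrinks
```

The shrinking ensemble spread after the update is exactly the mechanism by which, in the abstract's words, the assimilation "reduces the uncertainties of the indicators as estimated by the spread of the ensemble".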

Tuesday 24 June 11:30 - 13:00 (Room 0.49/0.50)

Presentation: Towards Operational Monitoring Of Shallow Marine Habitats - Integrating Remote Sensing Into The Danish National Monitoring Program

Authors: Silvia Huber, Rasmus Fenger-Nielsen, Lisbeth Tangaa Nielsen, Troels Lange, Lars Boye Hansen
Affiliations: DHI A/S, The Danish Agency for Green Transition and Aquatic Environment
Shallow marine habitats, like seagrass meadows and rockweed beds, are vital for supporting biodiversity, controlling erosion, enhancing disaster resilience, and offering habitat and food for a variety of marine species. With increasing climate pressures and human impacts related to eutrophication, overfishing and habitat fragmentation, the coverage and health of coastal habitats have rapidly declined. For example, seagrasses alone are being lost at a rate of 1.5% per year and have already lost about 30% of their estimated historical global coverage (https://www.thebluecarboninitiative.org/). A cornerstone for effective management and conservation is access to accurate and timely information about the status and trends of shallow marine habitats. Since the late 1980s, the Danish national marine environmental monitoring has been based on manual monitoring techniques, with measurements taken from ships and by divers. This monitoring includes several hundred transects, mapping and annually reporting the occurrence, density, and depth distribution of primarily seagrasses and macroalgae. Seagrass is one of the key parameters, as its depth distribution is a primary indicator of ecological status. For macroalgae, new indicators reflecting species richness and changes in the accumulated coverage with depth are being further developed based on evaluation at the EU level. These environmental indicators currently assess the ecological status of marine flora and do comply with EU’s Water Framework Directive (WFD) requirements. However, despite the high costs of the current monitoring program, it does not necessarily provide an accurate representation of the environmental state of the shallow coastal zone, as the monitoring is carried out with relatively low temporal and spatial coverage and without assessment of the areal distribution of submerged vegetation. 
Therefore, the Danish Agency for Green Transition and Aquatic Environment has been working to develop a cost-effective and scalable approach to collecting marine environmental data that supports the monitoring program, incorporating remote sensing technology (airborne and spaceborne) for Danish coastal waters. With the Agency's support, DHI has been developing a cloud-based, digital platform to regularly map the distribution of submerged aquatic vegetation (SAV) at nation-wide scale, using systematic Copernicus Sentinel-2 satellite data. We are currently enhancing this system with deep learning and time-series analyses for robust, operational SAV mapping, aiming for integration into the Agency's ongoing operational monitoring program by the end of 2026. Annual calculations of the areal distribution of SAV will support the development of an area-based indicator. Alongside the indicator for seagrass depth distribution, this will help assess the condition of coastal waters according to the WFD, the EU Habitats Directive (HD), and the EU Nature Restoration Law (part of the EU Biodiversity Strategy), which commits member states to revitalise at least 20 percent of their land and sea areas by 2030. All this will support the Agency's ongoing administration of Danish marine water areas to meet conservation and environmental objectives and safeguard our coastal ecosystems, which are key for both climate mitigation and adaptation strategies. In our presentation, we will showcase the cloud-based digital platform for operational mapping of marine underwater vegetation using remote sensing technology. We will discuss the implemented methods and how they support Denmark's marine environmental monitoring program, particularly through the development of an area-based indicator for status reporting.

Tuesday 24 June 11:30 - 13:00 (Room 0.49/0.50)

Presentation: The Wadden Sea - the detection of seagrass & co. in a changing environment

Authors: Kerstin Stelzer, Dr Marcel König, Dr Jörn Kohlus
Affiliations: Brockmann Consult GmbH, Schleswig-Holstein Agency for Coastal Defence, National Park and Marine Conservation, National Park Authority
Seagrass meadows are among the most productive ecosystems in the world and offer a great number of ecosystem services; their systematic monitoring is critical for environmental protection, climate research and coastal management. Seagrass meadows support biodiversity by providing habitat, shelter and food for many marine organisms. Dense seagrass meadows reduce coastal erosion by reducing wave energy and stabilizing sediments. Seagrass is also an efficient carbon sink and plays a pivotal role in the global carbon cycle and in climate protection. Seagrass meadows respond quickly to changing environmental conditions, which makes them an important water-quality bio-indicator for coastal ecosystems, used in many regions including the European Union within the EU Water Framework Directive. Their detection and characterization are therefore part of environmental monitoring programmes. In the Wadden Sea along the North Sea coast of Denmark, Germany and The Netherlands, seagrass is distributed very unevenly. While large and dense seagrass meadows cover the North Frisian part in Germany during the summer months, they hardly occur in other parts. Sentinel-2 is already being used operationally for seagrass mapping in the German Wadden Sea in Schleswig-Holstein, but the limited spectral resolution of MSI complicates distinguishing between different types of aquatic vegetation. Here, multi-temporal and hyperspectral information are currently being added to the classification scheme of the existing services. The multi-temporal approach enables the differentiation of seagrass and microphytobenthos (diatoms) as well as brown macroalgae such as Fucus vesiculosus. After the seagrass perishes in winter, a new lifecycle starts with strong growth in the following May, reaching its maximum in August/September, while the microphytobenthos often reach maximum occurrence in spring. Parts of this work are funded by the EU project FOCCUS - Forecasting and observing the open-to-coastal ocean for Copernicus users.
While multi-temporal approaches use the seasonality of different species, the hyperspectral information is used for separating species, or at least algae groups, based on their characteristic absorption features. We will investigate the inter- and intra-class spectral variability and separability to improve the identification of different intertidal habitats in the Wadden Sea, including seagrass beds, mussel beds, saltmarshes and accumulations of brown and green algae, based on existing field spectroscopy and well-known targets present in the EnMAP imagery. Alternatively, we will explore data-driven machine learning approaches based on extensive field data coming from the operational monitoring. Hyperspectral EnMAP observations offer novel opportunities to improve the operational seagrass mapping service and to prepare for operational satellite missions such as CHIME. This activity is performed within the SEK project starting in early 2025, co-funded by the Federal Ministry for Economic Affairs and Climate Action via the German Space Agency. The Wadden Sea is a challenging environment: very dynamic, water-covered half of the time and characterized by fluid transitions. Seagrass and macroalgae coverage can vary between 5 and 100%, can be partly water-covered, can be covered by thin mud layers after calm weather conditions and can occur in different proportions. In addition to the known species, the Wadden Sea is experiencing the arrival of invasive species such as the red macroalga Gracilaria vermiculophylla. Therefore, a separation of different species becomes more and more important for the assessment of the ecological health of the Wadden Sea. A good concept for using complementary data from ground-based measurements and from Earth Observation techniques is key for consistent monitoring of a sensitive ecosystem between land and ocean.
We will present the status of the work leading to an improved operational service for the monitoring of the Wadden Sea in Schleswig-Holstein.

Tuesday 24 June 11:30 - 13:00 (Room 0.49/0.50)

Presentation: Earth Observation for Advanced Marine Habitat Mapping

Authors: Branimir Radun, Kristina Matika Marić, Luka Raspović, Josipa Židov, Zrinka Mesić, Ivan Tekić, Ivona Žiža, Bruno Ćaleta, Ivan Tomljenović, Ante Žuljević, Ivan Cvitković, Dragan Bukovec
Affiliations: Oikon Ltd. - Institute Of Applied Ecology, Department of Wildlife Management and Nature Conservation, Karlovac University of Applied Sciences, Laboratory for Benthos, Institute of Oceanography and Fisheries
Early 2024 marked the publication of the official map of coastal and benthic marine habitats of Croatia, encompassing the national coastal sea and the Croatian Exclusive Economic Zone (EEZ). Spanning 51% of the Adriatic Sea under Croatian jurisdiction—approximately 30,278 km²—this map represents one of the most extensive and intricate marine habitat mapping efforts in Europe. It was produced over a period of 25 months and provides habitat information at three scales (1:25,000, 1:10,000, and 1:5,000), tailored to the varying protection levels and management needs of different marine areas. The mapping process was built on the foundation of Earth Observation technologies and spatial analytics, integrating Satellite-based Earth Observation, Aerial Photogrammetry, in-situ surveys, and acoustic methods. Remote Sensing was pivotal for mapping habitats up to depths of 20 meters, utilizing multispectral imagery from the Sentinel-2 satellite constellation. Data for deeper regions were captured using acoustic methods, including multibeam and side-scan sonar, while over 4,000 in-situ transects were conducted for ground truthing and validation. Advanced methodologies such as Object-Based Image Analysis (OBIA) and Pixel-Based Image Analysis (PBIA) were employed to achieve high spatial resolution and detailed habitat classification. OBIA was used to process aerial ortho-maps at 0.5 m resolution, enabling precise segmentation and habitat delineation. PBIA leveraged 110 seasonal Sentinel-2 images to analyze temporal dynamics and classify seagrass species such as Cymodocea nodosa and Posidonia oceanica. The integration of these datasets was performed using advanced Geographic Information System (GIS) tools and spatial statistics, resulting in a high-resolution map with up to three habitat types assigned per spatial feature. The cartographic generalization algorithm custom-developed for this project ensured spatial, topological, and thematic accuracy in the final product. 
Notably, key elements of the methodologies applied in this effort were initially developed through ESA-funded activities, highlighting the crucial role of space-based data and technologies in advancing marine conservation and resource management. The resulting map provides a valuable tool for Natura 2000 site management, ecological network planning, marine spatial planning, and sustainable resource management. Furthermore, it establishes a scalable model for marine habitat mapping that can be adapted to other regions, addressing the growing need for robust, data-driven conservation solutions. By bridging Earth Observation, field surveys, and advanced spatial analytics, this project provides critical insights for biodiversity stakeholders, helping to mitigate climate and human-induced pressures on marine ecosystems.
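The Pixel-Based Image Analysis (PBIA) step described above can be illustrated with a minimal sketch: each pixel is classified from a stack of multi-temporal features. The classifier choice (a random forest) and the synthetic feature stack are assumptions for illustration only; the project's actual models, bands and training data are not specified here.

```python
# Illustrative PBIA sketch: classify each pixel from a multi-temporal
# feature stack, in the spirit of the seasonal Sentinel-2 classification
# described above. Data and classifier are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy stack: 200 labelled pixels x 12 features (e.g. one band sampled
# across seasonal acquisitions). Classes: 0 = bare substrate,
# 1 = Cymodocea nodosa, 2 = Posidonia oceanica.
X = rng.normal(size=(200, 12))
y = rng.integers(0, 3, 200)
X += y[:, None] * 0.8            # make classes separable in the toy data

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
labels = clf.predict(X)          # per-pixel habitat labels
print((labels == y).mean())      # training accuracy on the toy data
```

Stacking seasonal acquisitions as features is what allows the temporal dynamics of seagrass species, mentioned in the abstract, to inform a per-pixel classification.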

Tuesday 24 June 11:30 - 13:00 (Hall L1/L2)

Session: F.02.16 GFOI Session on Tropical Forest Monitoring

Countries are working to advance Forest Monitoring for emissions and removals measurement, reporting and verification (MRV). The objective of the MRV is typically to quantify the country’s contribution to meeting the Paris Agreement goals and/or to gain access to climate finance. The GFOI is facilitating a process for country-led planning (CLP) where countries work towards building institutions and strengthening functional aspects of forest monitoring.

The principal objective of the agora is to advance knowledge exchange and joint learning among countries on technical aspects surrounding forest MRV. A broader discussion between science and practical implementation will be fostered.

The agora will be organized together with the GFOI Office and in close collaboration with GFOI partners and represented developing countries.

Moderators:


  • Daniela Requena Suarez - GFZ
  • Frank Martin Seifert - ESA

Panelists:


  • Daniela Requena Suarez - GFZ
  • Frederic Achard - JRC
  • Javier Garcia Perez - FAO
  • Sarah Carter - WRI
  • Natalia Malaga Duran - GFZ
  • Andy Dean - Hatfield

Tuesday 24 June 11:30 - 13:00 (Hall E1)

Session: C.03.07 The Copernicus Sentinels: from first to Next Generation missions - development status and technology challenges

The status of development of ESA missions will be outlined.
Across four sessions of 1.5 hours each (the equivalent of a full day), participants will have a unique opportunity to gain valuable insights into the technology developments and validation approaches used during the project phases of ongoing ESA programmes.
The projects are in different phases (from early Phase A/B1 to launch and operations), and the status of mission-development activities will be presented together with industrial and science partners.

Presentations and speakers:


Sentinel-1: Mission Continuity through Next Generation Enhancements


  • Ramon Torres
  • Malcolm Davidson
  • Dirk Geudtner
  • Tobias Bollian

Sentinel-2: development, technology & Next Generations mission status: evolutions from Sentinel-2


  • Janice Patterson
  • Francisco Reina

Sentinel-3 Optical: development, technology & Next Generations mission status Sentinel-3 (AOLCI and ASLSTR)


  • Nic Mardle
  • Simone Flavio Rafano Carna

Sentinel-6: development, technology mission status: The technology behind the sea level record


  • Alejandro Egido
  • Pierrik Vuilleumier
  • Julia Figa
  • Lieven Bydekerke

Sentinel-3 Topography: development, technology & Next Generations mission status: On the way towards operational swath altimetry


  • Alejandro Egido
  • Pierrik Vuilleumier

Sentinel 6 Next Generation: Status of Mission definition and next steps


  • Bernardo Carnicero Dominguez
  • Agathe Carpentier
  • Robert Cullen
  • Alejandro Egido
  • Valeria Gracheva
  • Marcel Kleinherenbrink
  • Martin Suess

Tuesday 24 June 11:30 - 13:00 (Room 0.94/0.95)

Session: B.03.06 Climate, Environment, and Human Health - PART 2

It is well known that many communicable and non-communicable diseases have a seasonal component. For example, flu and the common cold tend to increase in autumn and winter, whilst vector-borne diseases like Dengue and West Nile Virus tend to peak in late summer when the vectors are at their most abundant. Under monsoon regimes, many diseases peak during the rainy season. Hay fever, spring-time allergies and other respiratory disorders also have seasonality related to the abundance of pollens and other allergens in the air. Environmental conditions in water, air and land have a role in regulating the variability in the presence or absence and abundance of pathogenic organisms or material in the environment, as well as the agents of disease transmission like mosquitoes or birds. For example, air temperature and relative humidity are linked to flu outbreaks. Water quality in coastal and inland water bodies impacts outbreaks of many water-borne diseases, such as cholera and other diarrheal diseases, associated with pathogenic bacteria that occur in water. The seasonality has inter-annual variabilities superimposed on it that are difficult to predict. Furthermore, in the event of natural disasters such as floods or droughts, there are often dramatic increases in environmentally-linked diseases, related to the breakdown of infrastructure and sanitation conditions.

Climate change has exacerbated issues related to human health, with the shifting patterns in environmental conditions, and changes in the frequency and magnitude of extreme events, such as marine heat waves and flooding, and impacts on water quality. Such changes have also led to the geographic shifts of vector-borne diseases as vectors move into areas that become more suitable for them, as they become less cool, or retract from those that become too hot in the summer. The length of the seasons during which diseases may occur can also change as winters become shorter. There are growing reports on the incidence of tropical diseases from higher latitudes as environmental conditions become favourable for the survival and growth of pathogenic organisms.

Climate science has long recognised the need for monitoring Essential Climate Variables (ECVs) in a consistent and sustained manner at the global scale and with high spatial and temporal resolution. Earth observation via satellites has an important role to play in creating long-term time series of satellite-based ECVs over land, ocean, atmosphere and the cryosphere, as demonstrated, for example, through the Climate Change Initiative of the European Space Agency. However, the applications of satellite data for investigating shifting patterns in environmentally-related diseases remain under-exploited. This session is open to contributions on all aspects of investigation into the links between climate and human health, including but not limited to, trends in changing patterns of disease outbreaks associated with climate change; use of artificial intelligence and big data to understand disease outbreaks and spreading; integration of satellite data with epidemiological data to understand disease patterns and outbreaks; and models for predicting and mapping health risks.

This session will also address critical research gaps in the use of Earth Observation (EO) data to study health impacts, recognizing the importance of integrating diverse data sources, ensuring equitable representation of various populations, expanding geographic scope, improving air pollution monitoring, and understanding gaps in healthcare delivery. By addressing these gaps, we aim to enhance the utility of EO data in promoting health equity and improving health outcomes globally.

The United Nations (UN) defines Climate Change as the long-term shift in average temperatures and weather patterns caused by natural and anthropogenic processes. Since the 1800s, human emissions and activities have been the main causes of climate change, mainly due to the release of carbon dioxide and other greenhouse gases into the atmosphere. The United Nations Framework Convention on Climate Change (UNFCCC) is leading international efforts to combat climate change and limit global warming to well below 2 degrees Celsius above pre-industrial levels (1850–1900), as set out in the Paris Agreement. To achieve this objective and to make decisions on climate change mitigation and adaptation, the UNFCCC requires systematic observations of the climate system.

The Intergovernmental Panel on Climate Change (IPCC) was established by the United Nations Environment Programme (UNEP) and the World Meteorological Organization (WMO) in 1988 to provide an objective source of scientific information about climate change. The Synthesis Report, the last document of the Sixth Assessment Report (AR6) by the IPCC, released in early 2023, stated that human activities have unequivocally caused global warming, with global surface temperature reaching 1.1°C above pre-industrial levels in 2011–2020. Additionally, AR6 described Earth Observation (EO) satellite measurement techniques as relevant Earth system observation sources for climate assessments, since they now provide long time series of climate records. Monitoring climate from space is a powerful capability of EO satellites, since they collect global, time-series information on important climate components. Essential Climate Variables (ECVs) are key parameters that describe the Earth's climate state. The measurement of ECVs provides empirical evidence of the evolution of the climate; therefore, they can be used to guide mitigation and adaptation measures, to assess risks and to enable attribution of climate events to underlying causes.

An example of an immediate and direct impact of climate change is human exposure to high outdoor temperatures, which is associated with morbidity and an increased risk of premature death. The World Health Organization (WHO) reports that between 2030 and 2050, climate change is expected to cause approximately 250,000 additional deaths per year from malnutrition, malaria, diarrhoea and heat stress alone. WHO data also show that almost all of the global population (99%) breathe air that exceeds WHO guideline limits. Air quality is closely linked to the Earth's climate and ecosystems globally; therefore, if no adaptation occurs, climate change and air pollution combined will exacerbate the health burden at an increasing pace in the coming decades.
Therefore, this LPS25 session will include presentations that demonstrate how insights from EO satellites can support current climate action and guide the design of climate adaptation and mitigation policies to protect and ensure the health of people, animals, and ecosystems on Earth (e.g., WHO's One Health approach).
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.94/0.95)

Presentation: Signatures of Cholera Outbreak in Long-Term Seasonal Rainfall Trends and Urban Built Patterns in Chandigarh, India

Authors: Dhritiraj Sengupta, Dr Neelam Taneja, Aswin Sachidanandan, Dr Shubha Sathyendranath, Dr Gemma Kulk, Dr Anas Abdulaziz, Dr Nandini Menon, Ranith Rajamohananpillai, Dr Craig Baker Austin, Dr Nicholas Thomson, Elin Meek
Affiliations: Plymouth Marine Laboratory, Plymouth, UK, Post Graduate Institute of Medical Education and Research, National Centre for Earth Observation, Plymouth Marine Laboratory, National Institute of Oceanography, Nansen Environmental Research Centre, Centre for Environment, Fisheries and Aquaculture Science (CEFAS), The Wellcome Trust Sanger Institute, Wellcome Trust Genome Campus
Cholera, caused by Vibrio cholerae, is highly sensitive to environmental factors, particularly rainfall, which influences water contamination and disease transmission. This study examines the role of long-term rainfall patterns in Cholera outbreaks in Chandigarh, India, over 21 years (2002–2023). Chandigarh, located in northern India, sits at the foothills of the Shivalik range of the Himalayas. Geographically, it lies near the Indo-Gangetic plain, with a terrain that transitions between flat fertile plains and low rolling hills. The city’s location gives it a humid subtropical climate, characterized by distinct seasonal variations, including hot summers, a monsoon season, and mild winters. Using satellite-derived rainfall data from CHIRPS: Rainfall Estimates from Rain Gauge and Satellite Observations, the research analyzes the impact of extreme seasonal variations on Cholera incidence. Chandigarh experiences an average annual rainfall of 1,100 mm, with significant peaks during the monsoon (July–September). The study identifies a strong correlation between excessive weekly rainfall, particularly when precipitation exceeds 100 mm, and spikes in Cholera cases. Urban flooding caused by heavy rainfall events contaminates water supplies and creates favorable conditions for V. cholerae proliferation, especially in areas with dense, poorly planned settlements. Such environmental conditions are closely linked to heightened transmission risk. Analysis reveals that weekly rainfall trends during monsoon weeks (22–35) explain Cholera outbreaks more effectively than annual averages. These rainfall events disrupt urban drainage systems, exacerbating waterborne disease risks. The findings underscore that the timing and intensity of rainfall anomalies, rather than cumulative rainfall, are critical predictors of Cholera outbreaks in Chandigarh. This study highlights the interplay between climatic variability and urban infrastructure in driving disease transmission. 
Increasing monsoon extremes, coupled with urban congestion, amplify the risk of Cholera outbreaks. The insights underscore the need for targeted public health interventions, such as improved drainage, access to clean water, and early warning systems. By integrating climatic data with health records, the research provides a framework for understanding and mitigating the impact of extreme weather events and congested urban planning on public health. These findings offer valuable implications for early-warning mechanisms and disease management in similar urban settings across South Asia.
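The rainfall-threshold logic described above (flagging monsoon weeks whose precipitation exceeds roughly 100 mm) can be sketched in a few lines; the weekly series, the threshold value and the week window below are illustrative assumptions, not the study's CHIRPS data:

```python
import numpy as np
import pandas as pd

def monsoon_risk_weeks(weekly_rain_mm: pd.Series, threshold_mm: float = 100.0,
                       monsoon_weeks=range(22, 36)):
    """Flag weeks-of-year whose rainfall exceeds the outbreak-associated
    threshold during the monsoon window (weeks 22-35 in the abstract)."""
    in_monsoon = weekly_rain_mm.index.isin(monsoon_weeks)
    exceeds = (weekly_rain_mm > threshold_mm).to_numpy()
    return weekly_rain_mm.index[in_monsoon & exceeds]

# Illustrative synthetic weekly series (not CHIRPS data)
rng = np.random.default_rng(0)
weeks = pd.Index(range(1, 53), name="week")
rain = pd.Series(rng.gamma(2.0, 10.0, size=52), index=weeks)
rain.loc[27:31] += 120.0  # simulated monsoon burst
print(list(monsoon_risk_weeks(rain)))
```

In practice the flagged weeks would be cross-checked against the epidemiological case record rather than used in isolation.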
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.94/0.95)

Presentation: Leveraging earth observation for understanding forest disturbances and malaria vector ecology in the Malaysian Borneo

Authors: Edgar Manrique, Benny Obrain Manin, Clarissa Balinu, Kamruddin Ahmed, Jason Matthiopoulos, Brian Barret, Kimberly Fornace
Affiliations: School of Biodiversity, One Health and Veterinary Medicine, University of Glasgow, Borneo Medical and Health Research Centre (BMHRC), Universiti Malaysia Sabah, School of Geographical & Earth Sciences, University of Glasgow, Glasgow, Scotland, UK, Saw Swee Hock School of Public Health, National University of Singapore
Earth Observation (EO) is a vital component of One Health, linking human, animal, and environmental health through large-scale monitoring of environmental conditions affecting disease ecosystems. Malaria, a persistent global health issue, is strongly influenced by Land Use and Land Cover (LULC) changes, particularly deforestation, which increases disease risk and vector abundance. Traditional studies often focus on human settlements, neglecting mosquito interactions with diverse environments and leading to gaps in understanding transmission patterns. Effective malaria control requires data collection beyond households to include human activities in forest fringes, where mosquito-host interactions and exophagic mosquito behavior heighten transmission risk. EO technologies provide valuable insights to address these complexities and enhance vector control strategies. The most critical landscape features influencing malaria vector distribution are the availability of breeding sites and forest disturbances, such as deforestation. Optical satellite imagery often fails to detect small aquatic habitats and is limited in cloudy regions, emphasizing the need for finer spatial resolutions and complete time series. Differentiating drivers of forest disturbances, such as selective logging, forest fires, or land-use changes, is essential to understanding their environmental impact. This study integrates spaceborne SAR and optical imagery from Sentinel-1, ALOS-2, and Sentinel-2, along with drone-collected imagery, to overcome these challenges and accurately identify vector habitats and deforestation events. The approach highlights the utility of advanced satellite and drone technologies to improve environmental monitoring and malaria vector control. This study focuses on the region of Sabah in the Malaysian Borneo, where malaria transmission persists in forested regions. 
The aim is to understand how changes in forest structure impact malaria vector dynamics and to develop a novel surveillance system using EO data. In Malaysia, the increase in Plasmodium knowlesi, a zoonotic malaria, is strongly linked to deforestation. The primary vector, Anopheles balabacensis, is found in various land cover types and is mostly exophagic. The study leverages Bayesian geostatistical models and machine learning algorithms to evaluate and predict mosquito abundance. Initial mosquito data from the SAFE repository (2012–2014), monthly collections using Mosquito Magnet traps from 2023 to 2025, and combined EO datasets will inform simulations used to evaluate the surveillance approach and study the spatio-temporal dynamics of Anopheles abundance and its relation to forest disturbances. The study reveals significant differences in mosquito abundance across five land cover types, emphasizing the need for detailed environmental monitoring. Anopheline mosquitoes were more abundant in pulpwood plantations and secondary forests compared to built-up areas, while oil palm plantations and primary forests showed no significant differences. Vector diversity was highest near secondary and primary forests, with pulpwood plantations showing low diversity. Spatial clustering of abundance and diversity near land-use transitions, such as forest edges and plantations, highlights the impact of landscape changes on malaria vector distribution.
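As a toy illustration of testing for abundance differences between land-cover classes, the sketch below runs a numpy-only permutation test on hypothetical trap counts; the counts, class labels and effect sizes are invented, and this is not the study's Bayesian geostatistical model:

```python
import numpy as np

def permutation_test_diff(a, b, n_perm=5000, seed=0):
    """Two-sided permutation test for a difference in mean trap counts
    between two land-cover classes (numpy-only sketch)."""
    rng = np.random.default_rng(seed)
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # relabel samples at random
        diff = pooled[:len(a)].mean() - pooled[len(a):].mean()
        if abs(diff) >= abs(observed):
            count += 1
    return observed, count / n_perm

# Hypothetical nightly Anopheles trap counts (not the study's data)
rng = np.random.default_rng(7)
secondary_forest = rng.poisson(9.0, 40).astype(float)  # higher abundance
built_up = rng.poisson(3.0, 40).astype(float)          # lower abundance
diff, p = permutation_test_diff(secondary_forest, built_up)
print(round(diff, 2), p)
```

A geostatistical model would additionally account for spatial autocorrelation between trap sites, which this simple test ignores.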
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.94/0.95)

Presentation: Temporal modeling of surface water bacteriological quality and diarrheal diseases in West Africa using remote sensing and machine learning methods

Authors: Marc-Antoine MANT, Elodie Robert, Edwige Nikiema, Manuela Grippa, Laurent Kergoat, Moussa Boubacar Moussa, Beatriz Funatsu, Javier Perez-Saez, Emma Rochelle-Newall, Marc Robin
Affiliations: LETG - CNRS - Nantes Université, GET, Université Toulouse III, CNRS, IRD, CNES, IRD - iEES-P, Université Joseph Ki-Zerbo, Hôpitaux universitaires de Genève
In 2021, diarrheal diseases were responsible for around 1.17 million deaths worldwide (GBD, 2024). Sub-Saharan Africa is one of the most impacted regions: in 2024, these diseases caused some 440,000 deaths there. This high mortality rate can be explained by 1) significant bacteriological pollution of surface waters by pathogenic micro-organisms responsible for diarrheal diseases (E. coli, Salmonella spp., Shigella spp., etc.), 2) high concentrations of suspended solids, which provide a substrate and refuge for these pathogens, 3) widespread use of untreated water for domestic, washing and horticultural purposes, and 4) a lack of sanitation and community health infrastructure. In addition, the social and political insecurity and major demographic changes facing sub-Saharan Africa make access to drinking water and healthcare difficult for part of the population. Finally, ongoing climate change is likely to have a negative impact on water resources, both in terms of quantity and quality, and could increase the presence, dissemination and transmission of pathogens. Indeed, climate change is expected to increase the relative risk of diarrheal disease in tropical and subtropical regions by 22% to 29% by 2070–2099 (Kolstad & Johansson, 2011). Tele-epidemiology, i.e. the combination of satellite observations and epidemiology, is a powerful tool for studying climate-environment-health relationships and for understanding and predicting the spatio-temporal distribution of pathogens and diseases through the use of satellite and in-situ data. We aim to use satellite and in-situ data to indirectly monitor water quality and reveal environmental factors conducive to the emergence of critical health situations by modeling the dynamics of E. coli and cases of diarrhea in West Africa. E. 
coli is considered the best indicator of faecal contamination (IFC) in temperate zones, and is recommended by the World Health Organization as a proxy for assessing water contamination and the associated risk of diarrheal disease. In Burkina Faso, Robert et al. (2021) demonstrated a significant correlation between E. coli, intestinal enterococci and cases of diarrhea. E. coli therefore appears to be a good IFC in West Africa and would be relevant for predicting diarrheal diseases. The first objective is to study the links between the E. coli concentration in water and environmental parameters that are 1) measured in-situ in Burkina Faso, specifically in the Bagré reservoir, from 2018 to 2024 (dissolved oxygen, concentration of suspended particulate matter - SPM, water conductivity, water temperature, particulate organic carbon - POC, dissolved organic carbon - DOC, etc.), 2) measurable by satellite (NDVI, or surface water reflectances measured by Sentinel-2 to invert SPM and POC), or 3) estimable by satellite algorithms (precipitation estimated by IMERG; hydrometeorological parameters estimated by GLDAS - specific air humidity, soil moisture, surface runoff, etc.). We also investigate the relationships between these environmental parameters (in-situ, remote sensing and Earth observation data) and the number of cases of diarrheal diseases (data obtained from three health centers in the area). We then use key environmental parameters to model the concentration of E. coli in the Bagré reservoir and diarrheal diseases over several years, first using these key environmental parameters and then using only satellite data, to study the robustness of the models. Random Forest and Gradient Boosting regression trees were used for modeling. These are machine learning algorithms that learn the non-linear relationships between variables and then estimate the variable to be explained (E. coli and cases of diarrhea) from an unseen dataset. 
The best model (Random Forest) revealed that the dynamics of E. coli in the Bagré reservoir depend mainly on POC, SPM and air humidity, which are parameters that can be derived by satellite. The model showed an R² of 0.84 (RMSE 0.4 log10 MPN/100mL) using in-situ and satellite data, and an R² of 0.62 (RMSE 0.62 log10 MPN/100mL) using only satellite data. Concerning the cases of diarrhea, a first PLS model applied to one year of monthly data showed an R² of 0.81 using in-situ and satellite data, and an R² of 0.76 using only satellite data (Robert et al. 2021). The challenge now is to test machine learning methods over 7 years with a more precise time step (daily or weekly) and more types of data to obtain a finer temporal prediction. This work will allow the creation of health hazard indices that can be used by public health actors, first in West Africa without the need to collect data in the field, and then more generally for other sites facing similar public health problems.
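A Random Forest regression of this kind might be sketched as below, assuming scikit-learn is available; the three predictors and the synthetic log10 E. coli response are invented for illustration and do not reproduce the study's Bagré dataset or its reported scores:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 400
# Hypothetical predictors standing in for the satellite-derivable drivers
poc = rng.uniform(0.5, 5.0, n)        # particulate organic carbon (mg/L)
spm = rng.uniform(5, 200, n)          # suspended particulate matter (mg/L)
humidity = rng.uniform(5, 25, n)      # specific air humidity (g/kg)

# Synthetic log10 E. coli response with non-linear structure plus noise
log_ecoli = 1.5 + 0.8 * np.log10(spm) + 0.3 * poc + 0.05 * humidity \
            + rng.normal(0, 0.3, n)

X = np.column_stack([poc, spm, humidity])
X_tr, X_te, y_tr, y_te = train_test_split(X, log_ecoli, random_state=0)

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
r2 = r2_score(y_te, pred)
rmse = mean_squared_error(y_te, pred) ** 0.5
print(f"R2={r2:.2f}  RMSE={rmse:.2f} log10 MPN/100mL")
```

Held-out evaluation, as here, is what makes the satellite-only versus in-situ-plus-satellite comparison in the abstract meaningful.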
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.94/0.95)

Presentation: Survey on sanitation and microbial pollution for assessment of risk from climate change and water-borne diseases - case study from Kerala, India

Authors: Dr Nandini Menon, Ranith Rajamohananpillai, Farzana Harris, Vishal Vasavan, Dr Anas Abdulaziz, Jasmin Chekidhenkuzhiyil, Grinson George, Gemma Kulk, Dr Shubha Sathyendranath
Affiliations: Nansen Environmental Research Centre India, CSIR-National Institute of Oceanography, ICAR-Central Marine Fisheries Research Institute, Earth Observation Science and Applications, Plymouth Marine Laboratory, National Centre of Earth Observation, Plymouth Marine Laboratory
Climate change and associated extreme events pose risks to public health through disruption of sanitation facilities and hygiene practices, especially in low-lying coastal and inland regions that are prone to flooding due to rising sea levels, storm surges and intense precipitation. The impacts of these challenges on sanitation, microbial water quality and hygiene factors were investigated in the districts adjoining a large water body, Vembanad Lake, as well as in selected coastal areas of the Arabian Sea, in the state of Kerala, India. A digital household survey on sanitation, hygiene and disease prevalence was conducted using the mobile application ‘CLEANSE’, whereas the microbial and physical quality of drinking water was assessed using another mobile application ‘Aquadip’ from about 500 households in the study area. The results showed that not only extreme weather events, but even spring tides flood a majority of the houses in the study area, contaminating surface water and sewage-disposal systems. The microbial quality of the water correlated significantly with the prevalence of waterborne diseases such as diarrhoea. The vulnerability of communities to risk of waterborne diseases was assessed using an Analytical Hierarchy Process (AHP), employing nine variables that significantly influenced sanitation and hygiene. The results indicated that open defaecation, source of water used for household activities, proximity of drinking water source to septic tanks, improper management of sewage and wastewater, and disposal of solid wastes were the major factors contributing to water contamination and poor health. Around 5% of surveyed households lacked proper septic tank systems, resulting in the discharge of untreated sewage and pollutants into the drains, backwater and coastal sea. Presence of rodents was high in the areas contaminated with solid waste. 
Though most of the participants in the survey reported minimal occurrences of cholera or leptospirosis, those with drinking water heavily contaminated with faecal indicator bacteria (i.e., too numerous to count with the most-probable-number method) often had disease symptoms, including diarrhoea and vomiting. In the absence of clinical data, it is difficult to pinpoint the source of the symptoms, but such discrepancies reveal a disconnect between the perception and the reality of risk, highlighting the need for regionally targeted awareness campaigns, since there is potential to reduce existing risks by changing the habits and behaviour of the affected population. Spatial risk maps produced using the results of the study identify areas that are vulnerable or at risk of waterborne diseases. Climate change adaptation and mitigation strategies should go hand in hand to limit the complex disease dynamics.
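The AHP weighting step can be illustrated with a small numpy sketch using the principal-eigenvector method; the 3×3 pairwise matrix below is a hypothetical example, not the study's nine-variable comparison:

```python
import numpy as np

def ahp_weights(pairwise: np.ndarray):
    """Priority weights from a reciprocal pairwise-comparison matrix
    (principal eigenvector method), plus Saaty's consistency ratio."""
    vals, vecs = np.linalg.eig(pairwise)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = pairwise.shape[0]
    ci = (vals.real[k] - n) / (n - 1)             # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 9: 1.45}[n]  # Saaty's random index
    return w, ci / ri

# Hypothetical 3-criterion example (not the study's nine variables):
# open defaecation vs. sewage management vs. solid-waste disposal
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, cr = ahp_weights(A)
print(np.round(w, 3), round(cr, 3))
```

A consistency ratio below 0.1 is the conventional check that the expert judgments in the matrix are not self-contradictory.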
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Room 0.94/0.95)

Presentation: A remote sensing assessment of floating macrophyte cover dynamics in Lake Vembanad, India

Authors: Emma Sullivan, Thomas Jackson, Gemma Kulk, Dr Nandini Menon, Jasmin C, R Ranith, Dr Shubha Sathyendranath, Dhriti Sengupta, Varunan Theenathayalan, Victor Martinez Vicente
Affiliations: Plymouth Marine Laboratory, National Centre of Earth Observation, EUMETSAT Climate Services Group, Nansen Environmental Research Centre, Environment and Climate Change Canada, Canada Centre for Inland Waters
Invasive floating vegetation, particularly water hyacinth (Eichhornia crassipes), has become an environmental and socioeconomic challenge in many water bodies in India. These macrophytes form dense interlocking mats that can spread quickly, adversely impacting ecosystems, livelihoods, and human health. In the Vembanad-Kol wetland system in Kerala, southern India, water hyacinth has become a major environmental problem. The Vembanad-Kol wetland system is India’s second-largest Ramsar site, an area of wetland designated to be of international importance. The region’s waterways are an essential part of daily life, serving as transportation routes and supporting livelihoods such as fishing, tourism, and paddy cultivation. However, the spread of water hyacinth can block these waterways, causing disruption to incomes and requiring labour-intensive and expensive removal operations. Water hyacinth may also pose public health risks by degrading water quality and creating conditions favourable for harbouring disease vectors. Understanding the spatial and temporal dynamics of floating vegetation is the foundation for exploring the possible drivers dictating its distribution, links with environmental and human health, and potential management interventions. However, monitoring has traditionally used field-based measurements which are time-consuming and spatially limited by site accessibility and resources. In large aquatic systems such as Lake Vembanad (>2000 km²), regular assessment using field surveys alone is not feasible for logistical and economic reasons. Consequently, little information is available on the distribution and temporal changes in floating vegetation over long periods and at the scale of the entire wetland system. Remote sensing is a uniquely capable tool for mapping invasive plant species as it can provide information over large areas at regular intervals at a lower cost than intensive field surveys. 
In this work, we use satellite data from Sentinel-2 to assess the temporal and spatial dynamics of floating macrophytes over Lake Vembanad from 2016 to mid-2024, demonstrating how Sentinel-2 data can be used to describe these dynamics at an internationally important wetland site. The Floating Algae Index (Hu, 2009) was used to separate water and possible vegetation cover. Next, a scaling approach was used to calculate the percent cover of floating vegetation for each pixel. Estimates of vegetation cover from Sentinel-2 data were validated using in situ observations and high-resolution WorldView satellite data. Monthly composites were used to explore the phenology of, and trends in, the floating vegetation in Lake Vembanad as a whole, and also separately for the northern and southern parts of the lake, which have different hydrodynamic regimes. Climatological patterns in lake vegetation cover were compared with ancillary environmental data to explore potential drivers of floating vegetation proliferation. Initial results suggest there has been a significant increase in floating vegetation cover over the study period. The analysis highlights particular hotspots of vegetation accumulation, and demonstrates a clear seasonality, associated with changes in salinity, in floating macrophyte cover which varies between lake regions. In situ monitoring of water hyacinth indicates that the root system can harbour pathogenic bacteria and larvae of mosquitoes and gastropods, which are vectors of many diseases, pointing to a health implication associated with the spread of water hyacinth in the region. The data and analysis produced from this study can help future work to model floating vegetation distributions, investigate potential connections to human health, and inform management decisions on possible interventions. 
Ongoing efforts to acquire in situ data will continue to support product validation as the record extends into the future. We also hope that future studies might build on this work to explore the feasibility of identifying the dominant species of floating macrophytes with the current sensor technology.
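The Floating Algae Index step can be sketched as below; the band centres follow Sentinel-2 (B4, B8A, B11), while the toy reflectance values and the 0.05 threshold are illustrative assumptions rather than the study's calibration:

```python
import numpy as np

# Sentinel-2 band centres (nm): red (B4), NIR (B8A), SWIR (B11)
L_RED, L_NIR, L_SWIR = 665.0, 865.0, 1610.0

def floating_algae_index(red, nir, swir):
    """FAI (Hu, 2009): NIR reflectance minus the red-SWIR baseline
    linearly interpolated to the NIR wavelength."""
    baseline = red + (swir - red) * (L_NIR - L_RED) / (L_SWIR - L_RED)
    return nir - baseline

# Toy 2x2 reflectance scene: left column water, right column vegetation mat
red  = np.array([[0.03, 0.05], [0.03, 0.05]])
nir  = np.array([[0.02, 0.40], [0.02, 0.38]])
swir = np.array([[0.01, 0.15], [0.01, 0.14]])

fai = floating_algae_index(red, nir, swir)
vegetation_mask = fai > 0.05   # illustrative threshold, tuned per scene
print(vegetation_mask)
```

Because FAI is a baseline-subtraction index rather than a ratio, it is comparatively insensitive to aerosol and sun-glint variations, which is why it suits multi-year composites.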
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall K2)

Session: B.04.01 Satellite based terrain motion mapping for better understanding geohazards. - PART 2

Better understanding geohazards (such as landslides, earthquakes, volcanic unrest and eruptions, coastal lowland hazards and inactive-mine hazards) requires measuring terrain motion in space and time, including at high resolution, with multi-year historical analysis and continuous monitoring. Several EO techniques can contribute, depending on the context and the type of deformation phenomena considered, and some can provide wide-area mapping (e.g. thanks to Sentinel-1). Advanced InSAR or pixel-offset tracking using radar imagery, including newly available missions with different sensing frequencies (e.g. L-band), can help provide relevant geoinformation. This is also the case with optical stereo-viewing and optical correlation techniques, including for wide-area mapping. There is a need to assess new EO techniques for retrieving such geoinformation, both locally and over wide areas, and to characterise their limitations. New processing environments able to access and process large data stacks have increased user awareness, acceptance and adoption of EO, and have created opportunities for collaboration, including co-development and increased combination of data sources and processing chains. With this in mind, it is necessary to understand the agenda of geohazard user communities and the barriers to reaching their goals.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall K2)

Presentation: Enhancing the P-SBAS Processing Chain for L-Band DInSAR Time Series Retrieval: Insights from the SAOCOM-1 Constellation

Authors: Marianna Franzese, Ph.D Claudio De Luca, Yenni Lorena Belen Roa, Ph.D Manuela Bonano, Ph.D Francesco Casu, Ph.D Pablo Euillades, Ph.D Leonardo Euillades, Ph.D Michele Manunta, Ph.D Muhammad Yasir, Ph.D Giovanni Onorato, Ph.D Pasquale Striano, Ph.D Luigi Dini, Dr Deodato Tapete, Ph.D Riccardo Lanari
Affiliations: Istituto per il Rilevamento Elettromagnetico dell'Ambiente (IREA), CNR, Università degli Studi di Napoli Federico II, Istituto per il Rilevamento Elettromagnetico dell'Ambiente (IREA), CNR, Conicet, Instituto CEDIAC, Facultad de Ingenierìa, Universidad Nac de Cuyo, Italian Space Agency (ASI)
In the current Earth Observation scenario, the Differential Synthetic Aperture Radar Interferometry (DInSAR) technique is widely recognized for investigating the surface displacements affecting large areas of the Earth's surface with high accuracy. Such a technique is particularly useful in both natural and anthropogenic hazard scenarios, thanks to its capability to retrieve ground displacements with centimeter (in some cases millimeter) accuracy at rather limited costs. Originally developed to analyze single deformation episodes such as an earthquake or a volcanic unrest event, the DInSAR methods are also capable of investigating the temporal evolution of surface deformations. Indeed, the so-called Advanced DInSAR techniques properly combine the information available from a set of multi-temporal interferograms relevant to an area of interest, in order to compute the corresponding deformation time series. Among several Advanced DInSAR algorithms, a widely used approach is the one referred to as Small BAseline Subset (SBAS) technique and its computationally efficient algorithmic solution is referred to as Parallel Small BAseline Subset (P-SBAS) approach. However, the effectiveness of the DInSAR technique can be limited by the temporal decorrelation phenomena, which arise from changes in the electromagnetic response of the imaged scene over time. In this context, low-frequency SAR sensors, such as those operating at the L-band, characterized by a significantly longer wavelength (~23 cm) compared to the X-band (~3 cm) and C-band (~5.6 cm) ones, are well-suited to mitigate the temporal decorrelation effects, thanks to their capacity of maintaining interferometric coherence for a long period in rather vegetated areas and, in some cases, in snow/ice-covered zones. 
Moreover, the long wavelength makes the L-band highly effective for detecting, assessing, and, in some cases, monitoring rapid or large deformations associated with various geohazards, including landslides, earthquakes, and volcanic unrest. The crucial role of L-band systems in hazard scenarios is pushing space agencies worldwide to invest in the development of this technology. The forthcoming missions ROSE-L (developed by ESA) and NISAR (a joint NASA-ISRO project), as well as the already operational SAOCOM-1, ALOS-2, ALOS-4, and Lutan-1, highlight the growing importance of L-band SAR sensors, particularly for interferometric applications. The presented work focuses on extending the original P-SBAS interferometric processing chain to handle L-band SAR image time series, particularly those acquired by the Argentinean SAOCOM-1 constellation, which consists of two twin, fully polarimetric L-band SAR satellites operating in the same orbits as the COSMO-SkyMed first and second generations (CSK and CSG, respectively). In particular, we present several improvements to the available P-SBAS processing chain to expand its monitoring capability beyond the exploitation of X-band (e.g., CSK-CSG and TerraSAR-X) and C-band (e.g., ERS-1/ERS-2, ENVISAT, RADARSAT-1/2) StripMap SAR images, enabling the analysis of L-band data from the SAOCOM-1 constellation. The algorithmic advancements focus on the merging of adjacent Single Look Complex (SLC) image slices and the improvement of the quality of the orbital information. Furthermore, an advanced method to estimate and correct ionospheric effects, which are particularly pronounced in L-band SAR datasets, has been developed. Regarding the generation of the Area of Interest (AoI) SLC image, it is important to note that SAOCOM-1 images are provided as “slices” with a typical azimuth extension of approximately 70 to 100 km. 
Consequently, particularly for large-scale DInSAR analyses, these slices must be properly merged into a single SLC image corresponding to the AoI. While slice merging is an ordinary and common procedure in DInSAR applications, it presents specific challenges when working with SAOCOM-1 SLC data. To address these aspects, two key steps were included in the P-SBAS processing chain: slice resampling on a common temporal grid, and phase-shift estimation and compensation. As for the low accuracy of the SAOCOM-1 state vector information, this results in an incorrect estimation of the topographic phase component within the DInSAR interferogram generation process and therefore introduces artefacts in the interferometric phase (which, at the first order, can be represented as a sort of phase ramp) that may significantly degrade the quality of the DInSAR products if no appropriate correction is introduced. To address orbital phase artifacts in SAOCOM-1 data, a two-step correction process is applied, which benefits from the redundancy of the generated SBAS interferograms and retrieves an orbit correction for each single SAR acquisition of the exploited dataset. Finally, this work also aims to highlight the importance of detecting and mitigating ionospheric disturbances, which can cause distortions in L-band SAR images, particularly in the Earth's low- and high-latitude regions, due to intense solar radiation and geomagnetic interactions. The presence of ionospheric effects in SAR images can significantly impact both the phase and amplitude of the acquired data, compromising the accuracy of ground displacement measurements obtained through DInSAR, Multi-Aperture Interferometry (MAI), and Pixel Offset Tracking (POT) methods. It is important to emphasize that the proposed algorithmic solutions may play a very relevant role in light of the upcoming availability of new L-band satellite SAR systems for geohazard analysis and monitoring.
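A first-order version of the orbital-ramp idea (fitting and removing a best-fit plane from an unwrapped interferogram) can be sketched in numpy; the synthetic phase field is illustrative, and this is a simplification, not the two-step P-SBAS correction itself:

```python
import numpy as np

def remove_phase_ramp(phase: np.ndarray) -> np.ndarray:
    """Fit and subtract a first-order plane (a + b*x + c*y) from an
    already-unwrapped interferogram, approximating an orbital ramp."""
    ny, nx = phase.shape
    y, x = np.mgrid[0:ny, 0:nx]
    G = np.column_stack([np.ones(phase.size), x.ravel(), y.ravel()])
    coeffs, *_ = np.linalg.lstsq(G, phase.ravel(), rcond=None)
    ramp = (G @ coeffs).reshape(ny, nx)
    return phase - ramp

# Synthetic example: localized deformation signal plus an orbital-like ramp
ny, nx = 64, 64
y, x = np.mgrid[0:ny, 0:nx]
signal = 2.0 * np.exp(-(((x - 32) ** 2 + (y - 32) ** 2) / 100.0))
ramp = 0.03 * x - 0.02 * y + 0.5
corrected = remove_phase_ramp(signal + ramp)
```

In real processing the ramp is constrained jointly across the redundant interferogram network, so that deformation gradients are not mistakenly absorbed into the orbital model, which a single-interferogram plane fit like this cannot guarantee.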
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall K2)

Presentation: Operationalisation of Satellite Interferometry for Geotechnical Monitoring During Subway Construction in Prague

Authors: Jan Kolomaznik, Ph.D. Ivana Hlavacova, Ph.D. Juraj Struhar
Affiliations: Gisat s.r.o.
A new subway line is under construction in Prague, Czechia, traversing from the city’s peripheral regions to its central districts through geologically complex and challenging conditions. The line frequently intersects stratigraphic layers and boundaries of heterogeneous geological units and aquifers of varying ages and mechanical properties. Furthermore, the tunnel tubes and stations are situated at varying depths, resulting in diverse deformation patterns and intensities that influence the surrounding buildings and infrastructure. As part of the comprehensive geotechnical monitoring for the Metro Line D route between Pankrác and Olbrachtova stations, GISAT is conducting operational surface deformation monitoring through satellite radar interferometry (InSAR). Interferometric measurements from TerraSAR-X/PAZ and Sentinel-1 satellites are being analysed using a customised Persistent Scatterer Interferometry (PS-InSAR) algorithm to detect temporal changes in displacement. The monitoring is organised into several stages, beginning with a retrospective “passportization stage,” which leverages archived SAR datasets to establish baseline conditions. Subsequent stages adaptively modify the stage duration and frequency of satellite acquisitions to capture evolving deformation patterns during construction. Each monitoring phase yields thousands of temporally persistent scatterer (t-PS) measurements, complementing conventional geodetic monitoring conducted at over 900 stabilised points. This dual approach enhances spatial coverage and facilitates the identification of deformation anomalies beyond anticipated impact zones. The resulting deformation map detects surface and building movement chronology and patterns with high precision and synoptic spatial coverage without requiring point stabilisation. The observed tunnel-induced displacement patterns exhibit significant spatial heterogeneity and temporal nonlinearity. 
In areas of heightened subsidence risk, subsurface concrete injections frequently induce temporary uplift, reversing surface motion trends. This spatial and temporal variability in displacement subjects structures to complex and differential strain forces, with potential implications for structural integrity. Some of the detected subsidence is attributable to tunnelling activity indirectly; hydrological effects, such as aquifer drainage, contribute to displacement phenomena in areas distant from construction zones. The nonlinear characteristics of displacement trends present challenges for InSAR analysis, particularly in X-band data, where the wavelength is commensurate with the magnitude of deformation, leading to phase unwrapping errors. These errors compromise spatial interpretation, elevate noise levels, and diminish the reliability of results. To address these challenges, GISAT has introduced innovative enhancements to the PS-InSAR methodology, including advanced segmentation of time series data considering statistically significant differences in displacement velocities or noise levels. Segments classified as unreliable due to noise or unwrapping artefacts are excluded from interpretation. Additional insights are derived from complementary C-band sensors to enhance robustness. Validation confirms strong agreement between displacement trends measured through InSAR and conventional geotechnical methods.
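The time-series segmentation described above can be illustrated with a minimal sketch: split a displacement series at the breakpoint that minimises the combined residual error of two linear fits, then accept the split only if the segment velocities differ significantly. This is a simplified stand-in for GISAT's enhanced PS-InSAR methodology, not its actual implementation; the single-breakpoint search, the two-sigma significance rule, and all names are illustrative assumptions.

```python
import numpy as np

def fit_velocity(t, d):
    """Least-squares linear fit of displacement d(t); returns the
    velocity, residual standard deviation, and velocity standard error."""
    A = np.vstack([t, np.ones_like(t)]).T
    (v, c), *_ = np.linalg.lstsq(A, d, rcond=None)
    resid = d - (v * t + c)
    sigma = resid.std(ddof=2)
    se_v = sigma / np.sqrt(((t - t.mean()) ** 2).sum())
    return v, sigma, se_v

def segment_series(t, d, min_pts=8):
    """Find the single breakpoint minimising the combined residual
    error, then test whether the two segment velocities differ
    significantly (|v1 - v2| > 2 * combined standard error)."""
    best = None
    n = len(t)
    for k in range(min_pts, n - min_pts):
        v1, s1, se1 = fit_velocity(t[:k], d[:k])
        v2, s2, se2 = fit_velocity(t[k:], d[k:])
        sse = s1 ** 2 * k + s2 ** 2 * (n - k)
        if best is None or sse < best[0]:
            best = (sse, k, v1, v2, np.hypot(se1, se2))
    _, k, v1, v2, se = best
    return k, v1, v2, abs(v1 - v2) > 2.0 * se
```

A production workflow would search for multiple breakpoints recursively and additionally discard segments whose residual noise exceeds a reliability threshold, mirroring the exclusion of unreliable segments described in the abstract.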

Tuesday 24 June 11:30 - 13:00 (Hall K2)

Presentation: Enhancing the understanding of present-day and future urban subsidence risk in Italy based on multi-scale satellite InSAR workflows and advanced modelling

Authors: Dr Francesca Cigna, Dr Roberta Bonì, Prof Pietro Teatini, Dr Roberta Paranunzio, Dr Claudia Zoccarato
Affiliations: National Research Council (CNR) - Institute of Atmospheric Sciences and Climate (ISAC), University School for Advanced Studies (IUSS) - Department of Science, Technology and Society (STS), University of Padua (UNIPD) - Department of Civil, Environmental and Architectural Engineering (ICEA), National Research Council (CNR) - Institute of Atmospheric Sciences and Climate (ISAC)
Italy is among the countries with the world’s largest estimated groundwater extractions. When groundwater withdrawal and natural discharge exceed recharge rates, aquifer systems are over-exploited, resulting in resource depletion, storage loss and compaction of confining clay beds. The induced land subsidence may cause direct/indirect impacts on urban landscapes (ground depressions, earth fissures, structural damage, increased flood risk, loss of land to water bodies) and economic loss. High to very high subsidence susceptibility and hazard levels characterize many Italian regions, and a number of subsidence hotspots have been observed using satellite Interferometric Synthetic Aperture Radar (InSAR) methods, such as the Po River and Florence-Prato-Pistoia plains. The novel project SubRISK+ (https://www.subrisk.eu) innovates in this field by providing new EO-derived products and tools that aim to enhance the understanding of subsidence risk in major urban areas of Italy, towards sustainable use of groundwater resources and urban development. The project is assessing current and future subsidence risk in Italy using a multi-scale methodology, with implementation spanning from the national to the local scale. Risk is estimated with matrix-based risk assessment approaches, embedding InSAR-derived ground deformation observations (e.g. Copernicus’ European Ground Motion Service, EGMS), hydrogeological, topographic and land use data. Hazard levels are estimated via computation of the angular distortion and horizontal strain induced by differential settlement on urban infrastructure, as derived from InSAR datasets. Exposure and vulnerability are assessed based on the type and height of urban infrastructure, and geospatially combined with hazard information via a risk matrix to derive risk levels, from very low to very high (e.g. R1 to R4). Statistics on the population living within the various risk categories are finally extracted.
At regional scale, accurate detection of hotspots and drivers is enabled by implementing advanced geostatistics and exploiting time series analysis tools, including Independent Component Analysis (ICA) and Optimized HotSpot Analysis. An advanced numerical model coupling 3D transient groundwater flow and geomechanics of heterogeneous aquifer systems is also being developed to quantify the effects of groundwater usage on land deformation and estimate uncertainties at the local scale in a highly vulnerable city. The output from the groundwater flow model serves as input to the deformation model, utilizing the same computational grid and distribution of mechanical parameters to create a consistent flow-deformation model. A calibration procedure is implemented in which the uncertainties associated with available piezometric records, InSAR measurements and the modelling approach are integrated to estimate the parameters of both the groundwater flow and deformation models. A tailored socio-economic impact analysis is being developed to quantify market and non-market direct/indirect losses at national, regional and local scale, based on affected areas’ exposure, vulnerability and resilience. Future subsidence risk by 2050 and 2100 under climate change (RCP4.5/8.5, medium/high emissions), demographic change and urban development is assessed for the metropolitan cities and, locally, by adapting the numerical model to support long-term risk predictions under different scenarios. Predictions of future land use changes using socioeconomic and environmental parameters will contribute to an integrated, indicator-based approach at city scale that will enable assessment of urbanization patterns and identification of potential areas prone to future subsidence. The results for the 15 metropolitan cities of Italy, the whole Emilia-Romagna region and the city of Bologna will showcase the potential of the developed methodology and its benefits to inform water resource management and decision making.
This work is funded by the European Union – Next Generation EU, component M4C2, in the framework of the Research Projects of Significant National Interest (PRIN) 2022 National Recovery and Resilience Plan (PNRR) Call, project SubRISK+ (grant id. P20222NW3E), 2023-2025 (CUP B53D23033400001).
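The matrix-based combination of hazard with exposure/vulnerability described above can be sketched as a simple lookup; the matrix entries and the angular-distortion thresholds below are hypothetical placeholders chosen for illustration, not the calibrated SubRISK+ values.

```python
# Illustrative risk matrix: rows = hazard class H1..H4, columns =
# exposure/vulnerability class V1..V4; entries are risk levels R1..R4.
# The values are invented purely to show the lookup mechanics.
RISK_MATRIX = [
    ["R1", "R1", "R2", "R2"],
    ["R1", "R2", "R2", "R3"],
    ["R2", "R2", "R3", "R4"],
    ["R2", "R3", "R4", "R4"],
]

def hazard_class(diff_settlement_mm, span_m):
    """Bin the angular distortion (differential settlement over the
    horizontal span between two measurement points) into classes
    H1..H4. Thresholds are hypothetical, not the project's limits."""
    beta = (diff_settlement_mm / 1000.0) / span_m
    for level, limit in (("H1", 1 / 2000), ("H2", 1 / 1000), ("H3", 1 / 500)):
        if beta < limit:
            return level
    return "H4"

def classify_risk(hazard, vulnerability):
    """Map 1-based hazard and vulnerability classes to a risk level."""
    return RISK_MATRIX[hazard - 1][vulnerability - 1]
```

In a real workflow the hazard class would come from InSAR-derived differential settlements and the vulnerability class from building type and height, with each ADA polygon classified cell by cell.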

Tuesday 24 June 11:30 - 13:00 (Hall K2)

Presentation: Free and Open-Access OPERA Surface Motion Data over North America: A Geohazard Perspective

Authors: David Bekaert, Scott Staniewicz, Grace Bato, Sara Mirzaee, Simran Sangha, Talib Cabrera, Jin-Woo Kim, Piyush Agram, Marin Govorcin, Geoffrey Gunter, Alexander Handwerger, Matthew Calef, Heresh Fattahi, Zhong Lu, Batuhan Osmanoglu, Elodie Macorps, Steven Chan, Emre Havazli
Affiliations: Jet Propulsion Laboratory/ Caltech, Southern Methodist University, Goddard Space Flight Center, Descartes Lab
Remote sensing satellites provide key data that can be used to better understand the Earth, respond to disaster events, and inform decisions on pressing climate and environmental issues. For decades, many space agencies have provided high-quality remote sensing data free of charge to their end-users. Although these data have been accessible and widely used, the raw remote sensing measurements can be challenging to use and analyze, particularly for non-specialists. Thus, projects such as the European Ground Motion Service (EGMS) and the NASA JPL Observational Products for End Users from Remote Sensing Analysis (OPERA) aim to leverage state-of-the-art processing environments to create Earth observation products over continental scales spanning Europe and North America, respectively. The OPERA project (https://www.jpl.nasa.gov/go/opera/) is processing multiple streams of data from optical (Landsat 8, Sentinel-2 constellation) and SAR-based (Sentinel-1 constellation, NISAR) missions to bring the data to higher-level Analysis Ready Datasets (ARD). These datasets aim to address the remote sensing needs of US Federal agencies under the Satellite Needs Working Group (SNWG). OPERA products relevant to the theme of the session “satellite based terrain motion mapping” include a coregistered and geocoded SLC product suite, a surface displacement product suite, and a vertical land motion product suite from both Sentinel-1 and NISAR. With a geographical focus over North America, a temporal record spanning the complete sensor record, and product updates made as new data are acquired, the products enable broad applicability for mapping and monitoring of geohazards. OPERA’s stakeholder engagement program is used to interact with end-users, ease co-development and, importantly, facilitate adoption by stakeholders through focused one-on-one engagements and workshops.
We will present an overview of the salient product features, with illustrations for geohazards in the context of stakeholder applications such as wildfires, landslides, subsidence and volcanoes, and share experiences from our stakeholder interactions.

Tuesday 24 June 11:30 - 13:00 (Hall K2)

Presentation: Classification of Ground Deformation Phenomena at Continental Scale from European Ground Motion Service Data

Authors: Riccardo Palamà, María Cuevas-González, Anna Barra, José Antonio Navarro, Oriol Monserrat, Michele Crosetto
Affiliations: Centre Tecnologic De Telecomunicacions De Catalunya (CTTC)
This work proposes a supervised classification of ground motion (GM) phenomena using the European Ground Motion Service (EGMS) datasets as main input. The availability of such an extended dataset allows the implementation of wide-area tools to detect and classify GM phenomena, which can be useful for potential users to evaluate hazards and mitigate risks. This work implements a wide-area ground motion classifier (GMC) that categorizes areas affected by GM phenomena into four main classes, i.e. slow-moving slope deformation phenomena (mostly represented by deep-seated gravitational slope deformation phenomena, DSGSD), landslides, subsidence and uplift. The implementation of the classifier is preceded by the identification of active deformation areas (ADAs) through the ADAfinder tool [2], which yields a European map of ADA polygons. Furthermore, the ADAs that are detected in the same area by both ascending and descending Sentinel-1 orbits are merged; otherwise, they are processed separately. In this way, we provide a tool that maximises, where possible, the information associated with the Sentinel-1 data from both orbit trajectories. The training dataset for the GMC is obtained by matching the European ADA map with ground truth/labelling data. The labels for the two landslide-related classes are obtained by matching the unlabelled data with the polygons of the Italian National Landslide Inventory [3], whereas the labels for the subsidence class are given by the subsidence map of the Emilia-Romagna region (Italy) [4]. Finally, the uplift class labels are obtained from known uplift areas across Europe, e.g. the dewatering areas in the United Kingdom [5]. The ADAs that are not labelled at this stage form the dataset whose classification is the main goal of this work, which trains a model and then deploys it to obtain a European map of ground motion phenomena. The supervised ground motion classifier is implemented through the Extreme Gradient Boosting (XGB) technique.
XGB belongs to the ensemble learning family and is used in various applications due to its good performance, versatility, and capability to cope with missing values. In this work, the CatBoost implementation of gradient boosting was chosen due to its better performance [6]. The classifier employs spatio-temporal features extracted, for each ADA polygon, from different data sources, i.e. the EGMS PS-InSAR data (e.g. mean velocity, acceleration, seasonality, temporal coherence of the PS displacement values), the Corine Land Cover map, and the Digital Elevation Model (DEM) with its derived terrain attributes (local slope and aspect). The performance of the implemented model is demonstrated by high performance metric values (accuracy of 92%, precision of 91%, f1-score of 88% and recall of 86%). It should be noted that lower recall values are associated with the DSGSD class (64%), which shares properties with the landslide class. Furthermore, with the aim of exploring the explainability of the classification algorithm, we have evaluated the feature importance (FI) values, revealing that the most relevant features are the DEM (FI ~22%), slope (FI ~20%) and the mean velocity of the PS displacement time series (FI ~18% for both trajectories). The trained models are deployed to classify the unlabelled ADA polygons over the whole territory covered by the EGMS, thus producing a European map of classified ground motion data. Furthermore, a softmax operator is added to the final stage of the deployed classifier to provide the probability that an ADA polygon belongs to each of the considered ground motion classes. This allows the confidence of the classification to be evaluated, since higher probability values are associated with higher confidence in the classification.
On the other hand, lower probability values for the dominant class mean that the classification is less reliable, which may be due to mixed properties between two or more GM classes, or may indicate that the ADA is associated with a deformation phenomenon not considered in this work. It should be noted that the four GM classes employed represent a good portion of known deformation phenomena at local scale, but the presence of deformation phenomena of a different nature (e.g. seasonal deformation phenomena), whose labelling is not always feasible, lowers the classification probability values. The European map of classified ground motion phenomena has been validated using available independent datasets in the north-west of Italy (Piemonte and Valle d’Aosta regions) and in Spain, revealing good agreement with the existing datasets. The obtained map will be published through a Web Mapping Service (WMS), made available online for a wide audience, including end users (e.g. geologists and civil protection entities) and developers willing to train their algorithms on labelled GM data. This work has been funded by the European Space Agency under the Living Planet Fellowship awarded to Riccardo Palamà, with the project titled “Wide-Area Sentinel-1 Deformation Classification for Advanced Data Exploitation”.
References:
[1] M. Crosetto, L. Solari, M. Mróz, et al. (2020). The evolution of wide-area DInSAR: from regional and national services to the European Ground Motion Service. Remote Sensing, 12(12).
[2] A. Barra, L. Solari, M. Béjar-Pizarro, et al. (2017). A methodology to detect and update active deformation areas based on Sentinel-1 SAR images. Remote Sensing, 9, 1002.
[3] A. Trigila, et al. (2010). Quality assessment of the Italian Landslide Inventory using GIS processing. Landslides, 7, 455–470.
[4] G. Bitelli, F. Bonsignore, I. Pellegrino, L. Vittuari (2015). Evolution of the techniques for subsidence monitoring at regional scale: the case of Emilia-Romagna region (Italy). Proceedings of the Ninth International Symposium on Land Subsidence, Nagoya (Japan), November 15–19, IAHS, 92, 1–7.
[5] N. Anantrasirichai, et al. (2021). Detecting ground deformation in the built environment using sparse satellite InSAR data with a convolutional neural network. IEEE Transactions on Geoscience and Remote Sensing, 59(4), 2940–2950. doi: 10.1109/TGRS.2020.3018315.
[6] L. Prokhorenkova, G. Gusev, A. Vorobev, A.V. Dorogush, A. Gulin (2018). CatBoost: unbiased boosting with categorical features. Proceedings of the 32nd International Conference on Neural Information Processing Systems, 6639–6649.
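The final-stage softmax that turns per-class scores into membership probabilities, and the confidence check on the dominant class, can be sketched as follows; the score values, the 0.6 probability threshold, and the function names are illustrative assumptions, not the project's actual values.

```python
import numpy as np

CLASSES = ["DSGSD", "landslide", "subsidence", "uplift"]

def softmax(scores):
    """Numerically stable softmax over per-class scores."""
    z = np.asarray(scores, dtype=float)
    e = np.exp(z - z.max())
    return e / e.sum()

def classify_with_confidence(scores, min_prob=0.6):
    """Return (label, probability, reliable) for one ADA polygon.
    A dominant-class probability below `min_prob` flags the polygon
    as low-confidence (possible mixed or unmodelled phenomena)."""
    p = softmax(scores)
    i = int(np.argmax(p))
    return CLASSES[i], float(p[i]), bool(p[i] >= min_prob)
```

In the deployed classifier the scores would come from the boosted trees; here they are supplied by hand to show the probability and confidence mechanics only.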

Tuesday 24 June 11:30 - 13:00 (Hall K2)

Presentation: Surface Deformation and Micro-Seismic Activity Driven by Groundwater Level Changes at the Gardanne Post-Mining Site

Authors: Michael Foumelis, Dr. Eng. Jose Manuel Delgado Blasco, Dr Elena Papageorgiou, Dominique Pascale, Dr Pavlos Bonatis, Dr Daniel Raucoules, Dr Marcello de Michele, Dr Eleftheria
Affiliations: Aristotle University of Thessaloniki (AUTh), Randstad c/o ESA-ESRIN, French Geological Survey (BRGM)
Post-mining surface deformation and associated microseismicity are critical concerns for former mining sites, with changes in groundwater levels being an important influencing factor. This study investigates these phenomena at the Gardanne coal site in Provence, southern France. Following the closure of the mine in 2003 and the cessation of groundwater pumping, flooding gradually occurred and stabilized in 2010. However, subsequent pumping activities to prevent flooding resulted in fluctuating water levels that triggered periodic seismic swarms and surface deformation. Strong temporal and geographical relationships between seismic activity and changes in groundwater levels have been observed since 2010. An expanded seismic monitoring network recorded almost 2,700 events between 2013 and 2018, most of which occurred in swarms of small-magnitude events. Advanced multi-temporal Interferometric Synthetic Aperture Radar (InSAR) methods were applied using Copernicus Sentinel-1 satellite data from 2015 to 2022. Persistent Scatterer Interferometry (PSI) and Distributed Scatterer (DS) approaches were utilized to analyze surface motion across the study area. The findings showed patterns of surface displacement aligned with seismic clusters, particularly during the 2016–2017 seismically active periods. During this period, cumulative vertical displacements reached up to 26.4 mm. Geometric decomposition of ascending and descending measurements confirmed predominantly vertical motion with negligible horizontal displacement, consistent with the local tectonic stress regime and the focal mechanisms of seismic events. Temporal analysis revealed a strong correlation between surface motion and seismic swarms, with deformation signals often preceding or persisting beyond seismic activity.
Seasonal subsidence patterns unrelated to seismicity were also observed, particularly during the summer months, likely related to groundwater management practices. These findings underline the complex relationship between hydrogeological and tectonic processes influencing surface stability in post-mining environments. This study demonstrates the effectiveness of advanced InSAR techniques for monitoring subtle surface deformation and provides critical insights for managing groundwater levels and mitigating risks in post-mining areas. Future work should focus on refining models to separate seismic and aseismic contributions to surface deformation, while examining seasonal influences on displacement patterns.
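The geometric decomposition of ascending and descending LOS measurements mentioned above is commonly posed as a small linear inversion, neglecting the north-south component that Sentinel-1's near-polar orbit constrains poorly. A minimal sketch follows; the sign convention for the east-west sensitivity is a common one for right-looking geometries and must be verified against the actual processing chain.

```python
import numpy as np

def decompose_vertical_east(d_asc, d_dsc, inc_asc_deg, inc_dsc_deg):
    """Invert ascending and descending LOS displacements for the
    vertical (U, up positive) and east-west (E, east positive)
    components, neglecting north-south motion. LOS is positive
    towards the satellite; ascending (right-looking, looking east)
    senses eastward motion with negative sign, descending (looking
    west) with positive sign -- a common convention, to be checked
    against the processor's definition."""
    ta, td = np.radians(inc_asc_deg), np.radians(inc_dsc_deg)
    # Design matrix rows: [cos(theta), -/+ sin(theta)]
    G = np.array([[np.cos(ta), -np.sin(ta)],
                  [np.cos(td),  np.sin(td)]])
    U, E = np.linalg.solve(G, np.array([d_asc, d_dsc]))
    return U, E
```

A purely vertical signal (as dominated at Gardanne) projects into both geometries as U·cos(theta), so the inversion returns a near-zero east-west component.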

Tuesday 24 June 11:30 - 13:00 (Hall G1)

Session: C.03.08 The European Copernicus Space component: status, future prospects and challenges - PART 2

Copernicus is the European Earth monitoring program, which opened a new era in Earth observation with continuous and accurate monitoring of our planet and continuous improvement to respond to the new challenges of global change.
Since it became operational in 2014 with the launch of the first dedicated satellite, Sentinel-1A, Copernicus has provided a wealth of essential, timely and high-quality information about the state of the environment, allowing borderless environmental and emergency monitoring, and enabling public authorities to take decisions when implementing European Union policies.
The intense use of Copernicus and increased awareness of its potential have also generated great expectations, leading to an evolved Copernicus system that has embraced emerging needs, new user requirements and a new commercial dimension.
This future evolution of the Copernicus program will fill observational gaps and will help monitor the “pulse” of our planet for the decades to come, but to do so, programmatic and budgetary commitments will need to be maintained.

Presentations and speakers:



Unleashing the potential of Copernicus Sentinel Data: Fuelling Europe's Digital Future



  • J. Martin - ESA, CSC Data Access System Architect

The Copernicus Contributing Missions: present and future


  • P. Fischer - ESA, EO Third Party Missions Manager

The Copernicus current Sentinel satellite missions: Sentinel-2


  • F. Gascon - ESA, Sentinel-2 Mission Manager

The Copernicus current Sentinel satellite missions: Sentinel-3


  • J. Bouffard - ESA, Sentinel-3 Mission Manager
  • H. Wilson - EUMETSAT, Sentinel-3 Project Manager

The Copernicus current Sentinel satellite missions: Sentinel-5P



  • C. Zehner - ESA, Sentinel-5P Mission Manager

The Copernicus current Sentinel satellite missions: Sentinel-6



  • B. L. Bydekerke - EUMETSAT, Copernicus Programme Manager

Tuesday 24 June 11:30 - 13:00 (Room 1.61/1.62)

Session: A.03.06 Exploring ground-based, airborne and satellite observations and concepts for the carbon cycle

The remote sensing community is actively developing innovative observation concepts for the carbon cycle, to collect the crucial data at the different spatial and temporal scales required to study and improve understanding of the underlying geophysical processes. Observations from new airborne and ground-based instruments play a vital role in developing new applications that benefit from integrated sensing.

These new concepts need to go hand in hand with the mathematical understanding of the theoretical frameworks including uncertainty estimates. This session invites presentations on:
- innovative observations of geophysical products focussing on the carbon cycle
- innovative applications based on integrated sensing
- feedback and lessons learned from ongoing or planned developments, as well as from first ground-based or airborne campaigns.

Tuesday 24 June 11:30 - 13:00 (Room 1.61/1.62)

Presentation: Towards an automated sea-based ocean lidar network

Authors: Davide Dionisi, Dr. Cédric Jamet, Dr. Kelsey Bisson, Peng Chen, Paolo Di Girolamo, Yongxiang Hu, Dong Liu, X Lu, Sayoob Vadakke-Chanat, Siqi Zhang, Z. Zhang, Yudi Zhou, Iwona Stachlewska
Affiliations: CNR - ISMAR, Univ. Littoral Côte d’Opale, LOG, Ocean Biology and Biogeochemistry program, NASA Headquarters, State Key Laboratory of Satellite Ocean Environment Dynamics, School of Engineering, University of Basilicata, Science Directorate, Lidar Science Branch, NASA Langley Research Center, Ningbo Innovation Center, Zhejiang University, University of Warsaw
Lidar techniques have proven to be a reliable and valuable tool for the study of marine environments [1]. Over the past decade, numerous Ocean Color (OC) research initiatives utilizing space-borne lidar measurements, originally dedicated to atmospheric missions (e.g. CALIPSO), have not only established a solid proof of concept for ocean space-based applications but also yielded significant scientific discoveries [2,3]. These results ushered in a ‘new era of lidar in satellite oceanography’ [4], highlighting the potential of these technologies in understanding oceanographic processes and ecosystem dynamics. Specifically, the lidar technique can overcome some limitations of passive ocean color remote sensing, as it enables nighttime observations and can resolve phytoplankton vertical structure, thus reducing uncertainties in global phytoplankton biomass and net primary production estimates. Internationally established lidar networks for monitoring the atmosphere, such as the European Aerosol Lidar NETwork (EARLINET) and the Network for the Detection of Atmospheric Composition Change (NDACC), are well known. Similarly, networks dedicated to radiometric observations of ocean color, like the Aerosol Robotic Network Ocean Color (AERONET-OC), are also actively operating. The latter can be used for validation of NASA and ESA ocean color space missions by providing the spectrum of the light backscattered by the ocean surface. However, information on the vertical structure of the upper ocean and on the inherent optical properties of seawater (IOPs) cannot be retrieved. Only ship-borne observations provide IOPs, but these are scarce in space and time, as they are time- and labor-consuming. What we propose is a vision of an ocean lidar network: a network of in-situ lidar observations that will be complementary to the current ocean color networks. It will provide parameters currently not observed and, more importantly, these observations will be vertically resolved.
Our presentation explores the potential for developing a network of observations utilizing lidar technology. It will outline the current status and developmental needs of an automated lidar network specifically designed for oceanic applications. Furthermore, we will provide recommendations regarding the requirements for such a network, covering aspects of instrumentation, geographical locations, frequency of observations, ocean variables to be monitored, acceptable levels of uncertainty in data products, and quality assurance procedures. This comprehensive discussion aims to highlight the essential components necessary for establishing an effective lidar network to enhance oceanographic studies. The proposed observations will yield innovative insights into geophysical products related to the ocean carbon cycle, which could be instrumental in validating current and future ocean color space missions. Additionally, they will support next-generation missions based on lidar technology, such as the Cloud Aerosol Lidar for Global Scale Observations of the Ocean-Land-Atmosphere System (CALIGOLA) mission [5,6], currently being developed by the Italian Space Agency in collaboration with NASA.
References:
[1] Jamet, C., et al. (2019). Going beyond standard ocean color observations: lidar and polarimetry. Front. Mar. Sci. 6, 251. https://doi.org/10.3389/fmars.2019.00251
[2] Behrenfeld, et al. (2019). Global satellite-observed daily vertical migrations of ocean animals. Nature 576, 257–261. https://doi.org/10.1038/s41586-019-1796-9
[3] Dionisi, D., et al. (2020). Seasonal distributions of ocean particulate optical properties from spaceborne lidar measurements in Mediterranean and Black Sea. Remote Sens. Environ. 247, 111889. https://doi.org/10.1016/j.rse.2020.111889
[4] Hostetler, et al. (2018). Spaceborne lidar in the study of marine systems. Ann. Rev. Mar. Sci. 10, 121–147. https://doi.org/10.1146/annurev-marine-121916-063335
[5] Behrenfeld, et al. (2023). Satellite lidar measurements as a critical new global ocean climate record. Remote Sens. (Basel) 15, 5567. https://doi.org/10.3390/rs15235567
[6] Di Girolamo, P., et al. (2022). Introducing the Cloud Aerosol Lidar for Global Scale Observations of the Ocean-Land-Atmosphere System: CALIGOLA. Proceedings of the 30th International Laser Radar Conference. Atmospheric Sciences. Springer, Cham, pp. 625–630.

Tuesday 24 June 11:30 - 13:00 (Room 1.61/1.62)

Presentation: Estimating methane fluxes from Arctic-boreal wetlands using observations from the CoMet 2.0 Arctic airborne mission

Authors: Andreas Fix, Sebastian Wolff, Leah Kanzler, Christoph Kiemle, Mathieu Quatrevalet, Christian Fruck, Martin Wirth, Paul Waldmann, Kerstin Hartung, Anna-Leah Nickl, Bastian Kern, Mariano Mertens, Patrick Jöckel, Sven Krautwurst, Heinrich Bovensmann, Michał Gałkowski, Christoph Gerbig
Affiliations: German Aerospace Center (DLR), Institute of Atmospheric Physics, Institute of Environmental Physics (IUP), University of Bremen, Max Planck Institute for Biogeochemistry
CoMet 2.0 Arctic (www.comet2arctic.de) was successfully conducted during a six-week intensive operation period from August 10th to September 16th, 2022, targeting greenhouse gas emissions from boreal wetlands and permafrost areas in the Canadian Arctic, from wildfires, and from oil, gas, and coal extraction sites. Using the German research aircraft HALO with an innovative combination of scientific instruments onboard, a total of 135 flight hours was flown. Thus, a valuable data set was acquired to help understand the methane and carbon dioxide cycles in the Arctic and the emissions from a variety of sources responsible for accelerated climate warming, particularly at high latitudes. CoMet 2.0 Arctic is embedded within the transatlantic AMPAC (Arctic Methane and Permafrost Challenge) initiative of the US and European space agencies, NASA and ESA, which promotes the co-operation of Canadian, US and European research institutes in this research area. A selection of the research flights concentrated on the Hudson Bay Lowlands, a vast region of boreal wetlands and permafrost in northern Canada that stores significant amounts of carbon in the form of peat deposits and frozen organic matter. However, as permafrost thaws due to climate change, natural CH4 emissions from the area are expected to increase substantially, in particular through enhanced ebullition and increasingly frequent wildfires. Monitoring these emissions is crucial for understanding the magnitude and variability of permafrost-carbon feedbacks, which could amplify global warming if left unmanaged. Regular observations of greenhouse gas fluxes in this region are therefore essential to improve predictions of future emissions and inform strategies for mitigating climate change. For this, detecting emission patterns is critical for validating process-based wetland models, which simulate the dynamics of carbon cycling and methane production in these ecosystems.
These models rely on understanding the complex interactions between soil moisture, temperature, and vegetation, as well as the role of microbial communities in producing and consuming greenhouse gases. Here, we report on our attempts to infer methane fluxes from that region and compare our in-situ and remote sensing observations against the results of the online-coupled, one-time nested global and regional chemistry-climate model MECO(n). The regional model domain was set up for CoMet 2.0 Arctic with a 50 km (0.44 x 0.44°) grid over North America. The simulated tracers see methane emissions from different process-based wetland models, as well as from specific methane sources (e.g. anthropogenic, biomass burning, wetlands). In the future, improved observation capabilities from satellites are foreseen, including the German-French MEthane Remote sensing Lidar missioN (MERLIN), which offers a promising tool for monitoring CH4 gradients over large spatial scales, particularly at high latitudes, since it does not depend on sun illumination and is much less affected by clouds and aerosols compared to passive remote sensing. The results shown here are therefore a step forward for the utilization of upcoming satellite data to understand the greenhouse gas cycles, e.g. at high northern latitudes.

Tuesday 24 June 11:30 - 13:00 (Room 1.61/1.62)

Presentation: Evaluating the impact of phenological shifts on gross primary productivity across Europe

Authors: Getachew Mehabie Mulualem, Dr. Jadu Dash
Affiliations: School of Geography and Environment, University of Southampton
Phenology, the study of the timing of biological events, plays a critical role in understanding ecosystem productivity and carbon fluxes. Changes in climatic variables such as temperature and precipitation impact the timing of these phenological events, affecting overall productivity during the growing season. In the northern high latitudes, increased temperatures have led to earlier spring and delayed autumn events. However, the impact of these phenological shifts on vegetation carbon dynamics remains less understood. Therefore, this study utilizes data from 57 flux towers within the ICOS network from 2017 to 2023 (7 years) to provide detailed insights into the relationship between the length of the growing season and Gross Primary Productivity (GPP). Discrete Fourier Transform analysis was applied to smooth the data and minimize noise. The start of the growing season (SOS) was identified at the 25th percentile slope on the greening curve, while the end of the growing season (EOS) was determined at the 75th percentile slope on the senescing curve for each site. GPP values across the growing season were integrated to provide annual GPP for each site. The analysis showed a significant positive correlation (r=0.54) between annual GPP and the length of the growing season. However, this relationship varies by land cover type: among natural vegetation types, Evergreen Needleleaf Forests (ENF) showed the strongest positive correlation (r=0.74), while Deciduous Broadleaf Forests (DBF) had the weakest (r=0.60). Inter-annual variations in phenology metrics also differed across land cover types. Over the study period, the SOS for Grassland (GRA) stations advanced by 2.9 days per decade, while the EOS advanced by 12.8 days per decade. In DBF, the SOS was delayed by 17.6 days and the EOS by 20.8 days per decade. For ENF, the SOS advanced by 11.5 days, whereas the EOS advanced by only 0.8 days per decade.
All trends were statistically significant at the 0.05 level. Annual GPP in GRA and ENF decreased significantly, which is consistent with the shortening of the growing season. Conversely, in DBF, the annual GPP increased, aligning with the lengthening of the growing season. Across all sites, annual GPP was found to be negatively correlated with SOS and positively correlated with EOS and the length of the growing season (LOS). However, the inter-annual variation of GPP was inconsistent across different land covers, with an overall significant increase of 8.77% in GPP observed. While advanced SOS, delayed EOS, and extended LOS are often attributed to climate warming, the relationship between phenology shift and GPP is complex and influenced by several environmental factors. Future work will explore these relationships at a regional scale and incorporate additional environmental variables to better understand the intricate interplay between phenology and carbon dynamics.
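The smoothing and season-extraction steps described in the abstract can be sketched as follows. This is an illustrative reading only, not the authors' implementation: the percentile thresholds are interpreted here as symmetric amplitude fractions, and the GPP curve is synthetic.

```python
import numpy as np

def smooth_dft(series, n_harmonics=3):
    """Low-pass smooth a regular time series by keeping only the mean
    and the first few Fourier harmonics (a simple DFT noise filter)."""
    coeffs = np.fft.rfft(series)
    coeffs[n_harmonics + 1:] = 0          # zero out high-frequency terms
    return np.fft.irfft(coeffs, n=len(series))

def season_bounds(gpp, frac=0.25):
    """Return (sos, eos) day indices from a smoothed seasonal curve:
    SOS = first day the curve rises above `frac` of the amplitude,
    EOS = last day it is still above that level (illustrative rule)."""
    thr = gpp.min() + frac * (gpp.max() - gpp.min())
    above = gpp >= thr
    sos = int(np.argmax(above))
    eos = int(len(gpp) - 1 - np.argmax(above[::-1]))
    return sos, eos

# Synthetic daily GPP curve peaking mid-year, with noise
t = np.arange(365)
rng = np.random.default_rng(0)
gpp = 10 * np.exp(-((t - 180) / 40) ** 2) + rng.normal(0, 0.5, 365)

smoothed = smooth_dft(gpp)
sos, eos = season_bounds(smoothed)
annual_gpp = smoothed[sos:eos + 1].sum()   # daily integral over the season
```

With daily sampling, summing the smoothed values over [SOS, EOS] approximates the seasonal integral used for annual GPP.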

Tuesday 24 June 11:30 - 13:00 (Room 1.61/1.62)

Presentation: CarboCatch: assessing tree biomass carbon using remote sensing and machine learning in an interactive platform

Authors: Liv Toonen, Ramadhan -, Waas Thissen
Affiliations: Space4good, Louis Bolk Instituut
Keywords: agroforestry monitoring, aboveground biomass, carbon sequestration, remote sensing, satellite monitoring, LiDAR, fieldwork, data integration, MRV, artificial intelligence, random-forest, allometric equation, tree detection, suitability map, online platform

The high costs of monitoring, reporting, and verification (MRV) pose a significant challenge to the profitability of carbon credits in agroforestry systems. Traditional MRV methods rely heavily on extensive manual tree measurements, creating a substantial burden for agroforestry farmers and project developers. Given that carbon farming in agroforestry is a relatively new practice, with monitoring methodologies differing from forestry-based carbon farming, research was necessary to assess the value of remote sensing-based models and determine the most effective approaches. CarboCatch was developed to address this challenge, leveraging remote sensing technologies and artificial intelligence (AI) trained on ground-truth data to automate biomass and carbon stock estimation, with the potential to reduce or replace costly fieldwork. CarboCatch integrates multiple datasets to improve the accuracy of its machine learning (ML) models. Among these, freely available LiDAR-derived canopy height models (CHM) from the Actueel Hoogtebestand Nederland (AHN) provide detailed structural information about tree heights and biomass distribution. High-resolution multispectral imagery, capturing spectral bands such as red, green, red edge, and near-infrared (NIR), adds critical insights into vegetation characteristics. Ground-truth measurements collected on five walnut agroforestry farms in the Netherlands by the Louis Bolk Institute were used to label the remote sensing data with biomass values. These measurements included tree diameter at breast height (DBH), tree height, age, and species, ensuring the integration of accurate field data with remote sensing datasets. 
The development of CarboCatch involved an iterative modeling process to identify the most effective configurations for agroforestry applications. Various combinations of model features were tested, such as CHM and multispectral image bands. The performance of different machine learning algorithms, including random forests and other regression models, was evaluated. Experiments with different train/test splits optimized model validation, while diverse tree species, soil types, and regions were included to improve scalability. Techniques for filtering outliers ensured data quality and model reliability. Comparisons between original and pan-sharpened resolutions revealed that higher-resolution imagery often introduced noise, adversely affecting model performance. This systematic approach allowed the team to refine models and identify the best-performing configurations. The final models were validated using testing datasets, with predictions compared to field-based above-ground biomass (AGB) measurements. Metrics such as R² and mean absolute error (MAE) demonstrated the reliability of the models. Models integrating LiDAR-derived CHM and multispectral bands performed particularly well, achieving R² values exceeding 0.90 for plot-specific datasets. However, soil-specific models, such as those distinguishing clayey versus sandy soils, showed better generalization across landscapes. These soil-specific models achieved R² values of 0.80 (sandy) and 0.88 (clayey), with MAE values of 3.68 t/ha and 1.53 t/ha, respectively. Interestingly, models using 1.2-meter resolution imagery outperformed those with 0.3-meter pan-sharpened imagery, which was prone to introducing reflectance distortions and noise. The results underline the potential of integrating diverse datasets, including LiDAR, multispectral imagery, and field data, for scalable and cost-effective MRV in agroforestry systems. However, further work is necessary to enhance the applicability and usability of CarboCatch. 
Expanding training datasets to include additional field plots from diverse regions and soil types across the Netherlands is a priority. Certification of the methodology by SNK will ensure alignment with official standards, making the platform more accessible to project developers. Planned platform enhancements include personalized dashboards that restrict access to managed plots, protecting user privacy while providing tailored insights. Advanced modeling features, such as survival rate visualizations and species recognition algorithms, will further improve usability. CarboCatch demonstrates the transformative potential of AI-driven solutions for reducing MRV costs while maintaining high accuracy. By integrating cutting-edge remote sensing technologies with robust field data, CarboCatch is positioned to revolutionize carbon sequestration monitoring in agroforestry systems, paving the way for scalable and profitable carbon farming.
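A random-forest biomass regression of the kind described can be sketched with synthetic stand data. The feature set (CHM height plus a multispectral index) and the allometric form generating the synthetic AGB are illustrative assumptions, not CarboCatch's actual model:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error

rng = np.random.default_rng(0)
n = 500
# Synthetic per-plot features: LiDAR-derived canopy height (m) and a
# stand-in multispectral vegetation signal (e.g. red/NIR-based)
chm = rng.uniform(2, 25, n)
veg = rng.uniform(0.2, 0.9, n)
X = np.column_stack([chm, veg])
# Synthetic AGB (t/ha), loosely increasing with canopy height, plus noise
y = 0.9 * chm ** 1.4 + 5 * veg + rng.normal(0, 2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
r2 = r2_score(y_te, pred)
mae = mean_absolute_error(y_te, pred)
print(f"R2={r2:.2f}  MAE={mae:.2f} t/ha")
```

The abstract's R² and MAE figures come from exactly this kind of held-out evaluation, with the train/test split protecting against overfitting.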

Tuesday 24 June 11:30 - 13:00 (Room 1.61/1.62)

Presentation: The Greenhouse gas Emissions Monitoring network to Inform Net-zero Initiatives for the UK (GEMINI-UK): a new national capability for ground-based remote sensing of greenhouse gases

Authors: Neil Humpage, Paul Palmer, Alex Kurganskiy, Liang Feng, Jerome Woodwark, Will Morrison, Douglas Finch, Stamatia Doniki, Damien Weidmann, Robbie Ramsay
Affiliations: National Centre for Earth Observation, University Of Leicester, National Centre for Earth Observation, University of Edinburgh, University of Edinburgh, RAL Space, Rutherford Appleton Laboratory, NERC Field Spectroscopy Facility, University of Edinburgh
The UK has a long-term goal in place to achieve net-zero greenhouse gas (GHG) emissions by 2050. As part of the UK Greenhouse gas Emissions Measurement Modelling Advancement programme (GEMMA), which aims to provide timely, frequent, and open emissions data to inform progress towards this target, the National Centre for Earth Observation has set up a network of ground-based shortwave infrared spectrometers around the UK. This new network, called GEMINI-UK (Greenhouse gas Emissions Monitoring network to Inform Net-zero Initiatives for the UK), will provide continuous observations of the column concentrations of carbon dioxide and methane during cloud-free conditions. The motivation for this network is to provide data that will, along with in-situ measurements collected by the UK's existing tall tower network, help quantify regional net GHG emissions across the country. Together, these data will form the backbone of a pre-operational GHG emissions monitoring framework for the UK. Through the GEMMA programme, data from GEMINI-UK will be used in a Bayesian inversion framework to constrain regional flux estimates of carbon dioxide and methane. We have designed the measurement network to deliver the biggest uncertainty reductions in carbon dioxide flux estimates, working closely with host partners that include UK universities, a research institute and a secondary school to promote the open access and transparency of the collected data. The network comprises ten new Bruker EM27/SUN spectrometers, which we operate in automated weatherproof enclosures using a design developed by University of Edinburgh researchers, allowing year-round autonomous observations across multiple sites. 
A further two EM27/SUNs located in London, operated by the NERC Field Spectroscopy Facility, are also contributing data to GEMINI-UK. In this presentation we describe the status, network design, first data, and longer-term goals of GEMINI-UK, including an ongoing evaluation of the GEMINI-UK station located alongside a Bruker IFS 120/5 HR TCCON (Total Carbon Column Observing Network) spectrometer at the Rutherford Appleton Laboratory in Harwell, Oxfordshire. We also highlight the opportunities that GEMINI-UK will provide for validation of existing and future greenhouse gas observing satellite missions including S5P TROPOMI, MicroCarb, and CO2M.
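In the linear-Gaussian case, the Bayesian inversion mentioned above reduces to the standard optimal-estimation update. The sketch below shows that generic update, not the GEMMA system itself; the flux vector, observation operator and covariances are toy values:

```python
import numpy as np

def linear_gaussian_inversion(x_prior, B, H, y, R):
    """Posterior flux estimate and covariance for a linear observation
    operator H, prior covariance B and observation error covariance R:
        x_post = x_prior + K (y - H x_prior),
        K      = B H^T (H B H^T + R)^-1  (Kalman-style gain)."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    x_post = x_prior + K @ (y - H @ x_prior)
    B_post = (np.eye(len(x_prior)) - K @ H) @ B
    return x_post, B_post

# Toy example: two regional fluxes constrained by three column observations
x_prior = np.array([1.0, 2.0])          # prior flux estimates
B = np.diag([0.5, 0.5])                 # prior uncertainty
H = np.array([[1.0, 0.2],
              [0.3, 1.0],
              [0.5, 0.5]])              # sensitivity of columns to fluxes
y = np.array([1.4, 2.1, 1.9])           # observed column enhancements
R = 0.1 * np.eye(3)                     # observation error covariance
x_post, B_post = linear_gaussian_inversion(x_prior, B, H, y, R)
```

The shrinking of the posterior covariance relative to the prior is exactly the "uncertainty reduction" the network design optimizes for.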

Tuesday 24 June 11:30 - 13:00 (Room 1.61/1.62)

Presentation: Resilience of forests across species: recovery curves for tree cover and biomass in France

Authors: Agnès Pellissier-Tanon, Ibrahim Fayad, Fajwel Fogel, Martin Schwartz, Yidi Xu, François Ritter, Dr. Philippe Ciais
Affiliations: Laboratoire des Sciences du Climat et de l’Environnement, LSCE/IPSL, CEA-CNRS-UVSQ, Department of Geosciences and Natural Resource Management, University of Copenhagen
Forests face a growing number of natural and human-induced perturbations, including wildfires, storms, pests, deforestation, and land-use changes, all of which are exacerbated by climate change [1,2]. These disturbances threaten forest health and their ability to act as critical carbon sinks, making regeneration efforts essential [3]. Ensuring the recovery and resilience of forests is not only vital for maintaining biodiversity and ecosystem services but also for mitigating climate change by restoring their capacity to sequester carbon effectively [4]. Accurately quantifying forest contributions to the carbon cycle requires detailed monitoring of growth, recovery, and carbon sequestration dynamics [5,6]. However, the best available global estimates of carbon removal from natural forest regrowth do not sufficiently capture the spatial and temporal variability [7]. The Intergovernmental Panel on Climate Change (IPCC) provides standard carbon removal rates for only two time periods - young (< 20 years) and older (20 to 100 years) secondary forests - and only at the continental and ecozone scales [8]. Some studies have concentrated on the impact of forest functional type on growth [9,10]. However, these studies utilise field data, which only partially represents the full range of forest age, species and environment (climate and soil) [7,10,11]. To address this limitation, satellite data provides maps of forest metrics such as height, biomass [12–15] and forest history dating back to the 1980s [16]. Consequently, these datasets offer a substantial source of continuous spatial information to study secondary forest growth [17,18]. This study presents secondary growth curves for three key forest variables: tree cover, height, and above-ground biomass (AGB) in France. These curves, differentiated by species, reveal distinct growth rates for various age classes and tree species, emphasizing the varied dynamics of forest regrowth. 
By analyzing a variety of forest metrics, we identify the extent to which the time required to close the canopy differs from the time required for biomass generation. This allows for the extraction of several measures of resilience according to the level of recovery under study. In order to elucidate the sources of variability in recovery curves, an investigation is conducted into the potential influence of environmental factors, including NDVI, soil characteristics, and climate. This approach establishes a link between forest growth dynamics and the underlying ecological drivers, thereby enhancing the interpretability and predictive power of recovery models. Furthermore, the study estimates the maximum potential AGB for different regions and species, mapping the deviation of current forest states from this theoretical maximum. This gridded recovery analysis provides a spatially explicit understanding of forest recovery potential, enabling more effective management strategies, particularly with regard to tree species for enhancing carbon sequestration. Finally, using a bookkeeping approach, we estimate C losses and gains following disturbances in the last decades and compare the results with those of the French NFI. The findings provide a comprehensive framework for the refinement of carbon budgeting and the enhancement of understanding with regard to forest resilience at the species and regional levels. This, in turn, facilitates the development of more informed and sustainable forest management practices. [1] Forzieri G, Dakos V, McDowell NG, Ramdane A, Cescatti A. Emerging signals of declining forest resilience under climate change. Nature 2022;608:534–9. https://doi.org/10.1038/s41586-022-04959-9. [2] Cerioni M, Brabec M, Bače R, Bāders E, Bončina A, Brůna J, et al. Recovery and resilience of European temperate forests after large and severe disturbances. Glob Change Biol 2024;30:e17159. https://doi.org/10.1111/gcb.17159. 
[3] Puhlick JJ, Weiskittel AR, Kenefic LS, Woodall CW, Fernandez IJ. Strategies for enhancing long-term carbon sequestration in mixed-species, naturally regenerated Northern temperate forests. Carbon Manag 2020;11:381–97. https://doi.org/10.1080/17583004.2020.1795599. [4] Lewis SL, Wheeler CE, Mitchard ETA, Koch A. Restoring natural forests is the best way to remove atmospheric carbon. Nature 2019;568:25–8. https://doi.org/10.1038/d41586-019-01026-8. [5] Zhu K, Zhang J, Niu S, Chu C, Luo Y. Limits to growth of forest biomass carbon sink under climate change. Nat Commun 2018;9:2709. https://doi.org/10.1038/s41467-018-05132-5. [6] Bukoski JJ, Cook-Patton SC, Melikov C, Ban H, Chen JL, Goldman ED, et al. Rates and drivers of aboveground carbon accumulation in global monoculture plantation forests. Nat Commun 2022;13:4206. https://doi.org/10.1038/s41467-022-31380-7. [7] Cook-Patton SC, Leavitt SM, Gibbs D, Harris NL, Lister K, Anderson-Teixeira KJ, et al. Mapping carbon accumulation potential from global natural forest regrowth. Nature 2020;585:545–50. https://doi.org/10.1038/s41586-020-2686-x. [8] Requena Suarez D, Rozendaal DM, De Sy V, Phillips OL, Alvarez‐Dávila E, Anderson‐Teixeira K, et al. Estimating aboveground net biomass change for tropical and subtropical forests: Refinement of IPCC default rates using forest plot data. Glob Change Biol 2019;25:3609–24. [9] Robinson N, Drever R, Gibbs D, Lister K, Esquivel-Muelbert A, Heinrich V, et al. Protect young secondary forests for optimum carbon removal 2024. https://doi.org/10.21203/rs.3.rs-4659226/v1. [10] Chen X, Luo M, Larjavaara M. Effects of climate and plant functional types on forest above-ground biomass accumulation. Carbon Balance Manag 2023;18:5. https://doi.org/10.1186/s13021-023-00225-1. [11] Heinrich VHA, Dalagnol R, Cassol HLG, Rosan TM, de Almeida CT, Silva Junior CHL, et al. Large carbon sink potential of secondary forests in the Brazilian Amazon to mitigate climate change. Nat Commun 2021;12:1785. 
https://doi.org/10.1038/s41467-021-22050-1. [12] Schwartz M, Ciais P, De Truchis A, Chave J, Ottlé C, Vega C, et al. FORMS: Forest Multiple Source height, wood volume, and biomass maps in France at 10 to 30 m resolution based on Sentinel-1, Sentinel-2, and GEDI data with a deep learning approach. Earth Syst Sci Data Discuss 2023:1–28. https://doi.org/10.5194/essd-2023-196. [13] Liu S, Brandt M, Nord-Larsen T, Chave J, Reiner F, Lang N, et al. The overlooked contribution of trees outside forests to tree cover and woody biomass across Europe. Sci Adv 2023;9:eadh4097. https://doi.org/10.1126/sciadv.adh4097. [14] Lang N, Jetz W, Schindler K, Wegner JD. A high-resolution canopy height model of the Earth. Nat Ecol Evol 2023:1–12. https://doi.org/10.1038/s41559-023-02206-6. [15] Potapov P, Li X, Hernandez-Serna A, Tyukavina A, Hansen MC, Kommareddy A, et al. Mapping global forest canopy height through integration of GEDI and Landsat data. Remote Sens Environ 2021;253:112165. https://doi.org/10.1016/j.rse.2020.112165. [16] Viana-Soto A, Senf C. The European Forest Disturbance Atlas: a forest disturbance monitoring system using the Landsat archive. Earth Syst Sci Data Discuss 2024:1–42. https://doi.org/10.5194/essd-2024-361. [17] Xu Y, Ciais P, Li W, Saatchi S, Santoro M, Cescatti A, et al. Biomass recovery after fires dominates the carbon sink of boreal forests over the last three decades 2023:EGU-9695. https://doi.org/10.5194/egusphere-egu23-9695. [18] Pellissier-Tanon A, Ciais P, Schwartz M, Fayad I, Xu Y, Ritter F, et al. Combining satellite images with national forest inventory measurements for monitoring post-disturbance forest height growth. Front Remote Sens 2024;5. https://doi.org/10.3389/frsen.2024.1432577.
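Recovery curves of the kind described - biomass as a function of stand age, with an asymptote interpretable as the maximum potential AGB - can be fitted per species or region. A minimal sketch with synthetic data, assuming a Chapman-Richards growth form (the abstract does not name the functional form used):

```python
import numpy as np
from scipy.optimize import curve_fit

def chapman_richards(age, A, k, p):
    """Saturating growth curve: AGB approaches asymptote A (t/ha)
    with rate k (1/yr) and shape parameter p."""
    return A * (1 - np.exp(-k * age)) ** p

# Synthetic post-disturbance stands: age (years) vs noisy AGB (t/ha)
rng = np.random.default_rng(1)
age = rng.uniform(1, 80, 300)
agb = chapman_richards(age, 180, 0.05, 1.6) + rng.normal(0, 8, 300)

popt, _ = curve_fit(chapman_richards, age, agb, p0=[150, 0.04, 1.5], maxfev=5000)
A_hat, k_hat, p_hat = popt
print(f"asymptotic AGB ~ {A_hat:.0f} t/ha, recovery rate k ~ {k_hat:.3f}/yr")
```

The fitted asymptote gives the "maximum potential AGB" of the abstract, and the deviation of observed stands from the curve gives the gridded recovery deficit.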

Tuesday 24 June 11:30 - 13:00 (Room 0.14)

Session: F.05.09 Case Studies on the Economic Impacts of Earth Observation

Earth observation (EO) data and solutions provide benefits for commercial and governmental end-users across a wide range of key sectors such as insurance, finance, energy, climate, forestry, agriculture, mining etc. Several studies have been conducted to identify the economic, strategic and environmental impacts of EO. However, due to the open nature of data (e.g. from the Copernicus programme), the economic impacts of EO are most likely underestimated.

This session will feature interactive panel discussions with stakeholders across both sides of the EO value chain - EO data and solution providers as well as EO end-users who will share case studies on the current value of EO for their organisations, along with an outlook on how EO is set to transform their businesses in the future.

Speakers:


  • Aravind Ravichandran - founder of Terrawatchspace
  • Geoff Sawyer - Strategic Advisor to the EARSC Board
  • Grinson George Padinjakara ARS - Director ICAR-Central Marine Fisheries Research Institute
  • Gopal Erinjippurath - CTO at SustGlobal
  • David Fernandes - Head of Geospatial Unit at EDP

Tuesday 24 June 11:30 - 13:00 (Hall F2)

Session: A.02.03 EO for Agriculture Under Pressure - PART 4

The human impact on the biosphere is steadily increasing. One of the main human activities contributing to this is agriculture. Agricultural crops, managed grasslands and livestock are all part of the biosphere and our understanding of their dynamics and their impacts on other parts of the biosphere, as well as on the wider environment and on the climate is insufficient.
On the other hand, today's agriculture is under pressure to produce more food to meet the needs of a growing population with changing diets, and this despite a changing climate with more extreme weather. It is required to make sustainable use of resources (e.g. water and soils) while reducing its carbon footprint and its negative impact on the environment, and to deliver accessible, affordable and healthy food.
Proposals are welcome from activities aiming at increasing our understanding of agriculture dynamics and at developing and implementing solutions to the above-mentioned challenges of agriculture, or supporting the implementation and monitoring of policies addressing these challenges. Studies on how these challenges can be addressed at local to global scales through cross site research and benchmarking studies, such as through the Joint Experiment for Crop Assessment and Monitoring (JECAM) are welcome.

The session will hence cover topics such as:
- Impact on climate and environment
- Crop stressors and climate adaptation
- Food security and sustainable agricultural systems
- New technologies and infrastructure

Tuesday 24 June 11:30 - 13:00 (Hall F2)

Presentation: Mapping Sahelian Agricultural Landscapes

Authors: Altaaf Mechiche Alami
Affiliations: Center for Sustainability Studies, Lund University
Transforming agricultural systems to produce more affordable and nutritious foods in a sustainable manner while being climate resilient is one of the main challenges the world is facing today. Concepts of sustainable intensification, climate smart agriculture, Sustainable Land Management (SLM) - including agroecology - are increasingly being promoted in order to achieve this goal. However, the way such strategies and policies are being implemented, as well as their broader socio-ecological impacts on the ground, have been hard to track globally. Agricultural monitoring and yield estimation become particularly challenging in data-scarce regions such as the Sahel, where agriculture forms complex systems with high spatial heterogeneity and strong dependence on, and vulnerability to, climatic variability. Senegal placed agricultural transformation at the heart of its economic growth strategy (The Emerging Senegal Plan since 2013) by establishing agricultural growth poles (agropoles), promoting private sector investments and supporting smallholders in accessing organic inputs, improved seeds and irrigation to reform selected value chains and improve nutritional intake, achieve self-sufficiency and increase incomes. The country has also implemented SLM practices in the context of the Great Green Wall initiative since its inception in 2008, with a strong emphasis on land restoration and agroforestry projects. This research aims to inventory the resulting changes in Senegal's agricultural landscapes over the past decade. It considers changing management practices (rotations, fallows, irrigation, fertilizer use and SLM) and their potential environmental and livelihood impacts. This is done by building on approaches used in Agro-Ecological, Livelihood and Socio-Ecological Land System zoning, and utilizing various data sources from satellite imagery, climate products, national statistics and agricultural census to complement a small in-situ dataset (JECAM). 
The project experiments with the use of various Machine Learning algorithms (supervised and unsupervised) and satellite-based features in an attempt to characterize agricultural landscape composition, vegetation-climate interactions and management practices. The resulting annual maps inform on the location, scale and pace of agricultural transformations from intensification, expansion and use of SLM. Such information is not only necessary for agricultural monitoring but also serves as a basis for analyzing food and nutrition security and evaluating farming systems' response and resilience to climatic changes. It also has the potential to improve the representation of farming systems in Dynamic Global Vegetation Models, enabling the evaluation of management practices as well as climate impacts on yields and ecosystem health (carbon and nitrous oxide emissions and nitrogen leaching).

Tuesday 24 June 11:30 - 13:00 (Hall F2)

Presentation: EO4Nutri: Remote Sensing for nutrient estimation and sustainable crop monitoring

Authors: Mariana Belgiu, Associate Professor Michael Marshall, Dr. Gabriele Candiani, Dr. Francesco Nutini, Dr. Monica Pepe, Dr. Mirco Boschetti, Associate Professor Micol Rossini, Dr. Luigi Vignali, Chiara Ferrè, Dr. Cinzia Panigada, Prof. Tobias Hank, Dr. Stefanie Steinhauser, Dr. Stephan Haefele, Professor Murray Lark, Dr. Alice Milne, Dr. Grace Kangara, Kwasi Appiah-Gyimah Ofori-Karikari, Professor Alfred Stein, Dr. Raian Vargas Maretto, Associate Professor Chris Hecker, Prof Andy Nelson
Affiliations: Faculty of Geo-information Science and Earth Observation (ITC), University Of Twente, Institute for Electromagnetic Sensing of the Environment (IREA), National Research Council of Italy (CNR), Department of Earth and Environmental Sciences (DISAT), University of Milano-Bicocca, Ludwig Maximilian University of Munich, Rothamsted Research, University of Nottingham
The EAT-Lancet Commission on Food, Planet, and Health developed the so-called planetary health "plate" in an attempt to address the question: "Can we provide a healthy diet for a future population of 10 billion people within planetary boundaries?". The commission emphasizes the critical link between dietary choices and climate change, advocating for sustainable food systems to mitigate environmental impacts. The planetary health plate is divided evenly between vegetables and fruits on one side and grains, plant protein sources, unsaturated plant oils, and limited amounts of animal protein sources on the other. This proposed diet aims to meet the global population's required calorie and nutrient intake. The quantity of agricultural yield is regularly measured and monitored at various geographic and temporal scales. Unfortunately, the quality of agricultural yield has received less attention. To realize the vision of the planetary health plate, it is essential to develop, test, and implement effective solutions for evaluating crop nutrient levels. Information on crop nutrients is critical for identifying potential deficiencies that may lead to micronutrient deficiency, also known as hidden hunger. This form of malnutrition is associated with serious health issues, including impaired physical and mental development, premature death, immune dysfunction, and reduced learning capacity. It affects more than 3 billion people around the world. Conventional methods for measuring nutrient levels typically consist of collecting grains at the maturity phase and performing a wet chemical analysis in the laboratory. Unfortunately, this method is time-consuming, destructive, and cost-prohibitive and, consequently, is not suitable for consistent quantification of nutrients across large spatial extents and across time. The EO4Nutri project focuses on addressing this challenge. 
Specifically, it aims at advancing our understanding of the lifecycle of nutrients from the soil to crop canopy and further to crop grains with data-driven and remotely sensed data. Target nutrients are those with high relevance to human nutrition and plant growth, namely Calcium (Ca), Iron (Fe), Magnesium (Mg), Nitrogen (N), Phosphorus (P), Potassium (K), Selenium (Se), Sulphur (S), and Zinc (Zn). The target crops are maize, rice and wheat. The EO4Nutri team conducted extensive measurements at the Jolanda di Savoia farm, operated by Bonifiche Ferraresi S.p.A. in the Po Valley (Emilia-Romagna region), over two growing seasons (2022/2023 and 2023/2024). The collected data included soil samples, pre-sowing soil and plant spectral measurements, plant biophysical parameters, and plant and grain samples. Proximal spectral measurements were taken with a handheld spectrometer during the vegetative, reproductive, and maturity stages of each crop in both growing seasons. In addition to these measurements, PRISMA and EnMAP satellite images were acquired at the three key growth stages. Important biophysical parameters, such as Leaf Area Index (LAI) and Leaf Chlorophyll Content (LCC), were measured during the vegetative and reproductive stages, and biomass samples were collected and weighed fresh. At the maturity stage, total fresh and dry biomass, along with yield, were recorded. Laboratory analysis was performed to determine nutrient content in plants and grains. We developed Partial Least Squares Regression (PLSR) models for each nutrient, crop, and growth stage using proximal spectral measurements. The models yielded promising results (R² > 0.5) for Mg, Zn, P, S, and N across all growth stages, whereas Ca, Fe, K, and Se were accurately predicted only during the vegetative stage. Two-band vegetation indices (TBVIs) were also utilized to explore the relationship between plant and grain nutrients and various vegetation indices derived from PRISMA, EnMAP, and Sentinel-2. 
Consistent with proximal spectral measurements, promising predictions were achieved for Mg, Zn, P, S, N, K, and Ca across all growth stages, whereas Fe and Se predictions were less accurate. The most relevant spectral bands for estimating target nutrients were in the shortwave infrared (SWIR) and near-infrared (NIR) ranges, with the optimal band combinations varying by growth stage, crop, and nutrient. The EO4Nutri project demonstrates the potential of integrating spectral analysis and remote sensing to accurately predict critical crop nutrients, providing a scalable solution to support sustainable agriculture.

Tuesday 24 June 11:30 - 13:00 (Hall F2)

Presentation: Agricultural Drought Monitoring in the Marchfeld Region Using Sentinel-2 Imagery and Deep Learning

Authors: Omid Ghorbanzadeh, Francesco
Affiliations: BOKU
Agricultural drought is defined by water deficits relative to crop needs and is worsened by climate change, posing severe challenges to agriculture and water resources and, consequently, to the economy and the environment. A drought can lead to yield losses of up to 90% in crops like maize, and irrigation is necessary in most agricultural fields to mitigate its adverse impacts. The Marchfeld region in Lower Austria exemplifies these challenges: it has a semi-arid climate with unique pedo-climatic conditions and is an important agricultural area for food production in Austria. From May to September, rainfall averages only 250–300 mm. This often results in water shortages in summer and requires increased irrigation. However, groundwater in this region is shared with the urban and industrial sectors, which complicates water management. Earth Observation (EO) data, such as Sentinel-2 imagery, are used for drought monitoring by assessing crop development. Integration of Sentinel-2 imaging spectrometry with advanced techniques offers solutions to assess plant water status and monitor drought conditions. However, traditional models for tracking drought often rely on single indicators like soil moisture or vegetation, which limits their ability to capture drought complexity. To address this, models integrate multiple indices, such as NDVI, NDWI, and EVI, alongside approaches like the Scaled Drought Condition Index (SDCI), to enhance drought monitoring. Recent advances in machine/deep learning (ML/DL) algorithms have improved drought monitoring by incorporating more indicators, such as precipitation and vegetation, offering sophisticated analysis of EO data. In this study, Sentinel-2 satellite data and ancillary datasets are combined with ML/DL techniques to enhance drought intensity mapping in maize fields in the Marchfeld region. 
We developed two DL algorithms, namely a Deep Neural Network (DNN) and a One-dimensional Convolutional Neural Network (1D CNN). Their performance was compared to that of common ML algorithms such as Random Forest (RF), optimized for this task. The research integrates complementary datasets, such as climate and soil data, with nearly 30 indices derived from time-series Sentinel-2 imagery as input data for the supervised algorithms. The study applies and compares these models to predict drought, evaluating their accuracy and performance for agricultural drought monitoring in the Marchfeld region. Key indices and hyperparameters were identified based on their effectiveness in drought prediction for maize fields. Finally, the research generates detailed drought maps for the growing season, providing pixel-level drought probabilities as accurate assessments of drought conditions. The findings reinforce the effectiveness of Sentinel-2 imagery for detecting and assessing drought stress, supporting its applicability in drought monitoring. Among the tested models, the DNN achieved the highest accuracy, followed by the 1D CNN and RF.
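The spectral indices feeding such models follow directly from Sentinel-2 reflectance bands. A minimal sketch; the band arrays here are hypothetical per-pixel reflectance values in [0, 1], not the study's actual feature pipeline:

```python
import numpy as np

def drought_indices(blue, green, red, nir, eps=1e-9):
    """NDVI, NDWI (McFeeters) and EVI from Sentinel-2 reflectance
    (B02=blue, B03=green, B04=red, B08=NIR); eps avoids division by zero."""
    ndvi = (nir - red) / (nir + red + eps)
    ndwi = (green - nir) / (green + nir + eps)
    evi = 2.5 * (nir - red) / (nir + 6 * red - 7.5 * blue + 1 + eps)
    return ndvi, ndwi, evi

# Example pixels: healthy vegetation vs drought-stressed vegetation
healthy = drought_indices(0.03, 0.06, 0.05, 0.45)
stressed = drought_indices(0.06, 0.10, 0.20, 0.28)
```

Applied to full band rasters, the same function yields per-pixel index maps; stacking ~30 such indices over a time series gives the supervised models their input features.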
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall F2)

Presentation: From Field Samples to Production Estimates: Evaluating Yield Estimation Models for Sub-National Statistics

Authors: Pierre Houdmont, Sophie Bontemps, Pierre Defourny
Affiliations: Université Catholique de Louvain - Earth And Life Institute
Estimating agricultural production is crucial for decision-makers and policymakers, particularly in managing stock importation, exportation, and international aid. This is especially important for ensuring food security in developing countries, where agricultural statistics are often unreliable. The era of satellite imagery has significantly improved the accuracy of crop acreage estimation, and numerous studies indicate that it has also enhanced yield estimation at subnational levels, enabling a global improvement in the estimation of agricultural production. Sen4Stat is an open-source toolbox offering various modules to support National Statistical Offices (NSOs) in improving their agricultural statistics using models that incorporate satellite Earth observation data. Two specific modules have been developed to ensure the most accurate yield estimation based on the data collected by the NSOs. The first module (parcel-level model) relies on georeferenced yield measurement data for model calibration. Obtaining such data requires costly field measurement campaigns. The second module (regional model) is designed for NSOs that are unable to carry out these campaigns. It uses the NSOs' historical estimates at sub-national levels, such as district or region, to calibrate the models. To compare and evaluate the two yield modules proposed in Sen4Stat, a study was conducted on wheat yield estimation in France across 41 departments over 5 years (2017-2021). Calibration datasets for the parcel-level module were derived from yield data at the field and farm levels, provided by farmers as part of surveys for French national agricultural statistics. For the regional-level module, the calibration datasets were directly obtained from French national agricultural statistics. Both modules rely on machine learning regressors to link the variables of interest with the measured yield as a reference. 
Yield explanatory variables were defined based on meteorological data, soil moisture, and vegetation proxy variables derived from Earth observation (Sentinel-2). Parcel-level and departmental leaf area index (LAI) time series were smoothed using a Savitzky-Golay filter. These time series enabled the identification of three phenological periods (vegetative, reproductive, and senescence), which were then used to calculate the variables of interest. We compared the performance of three regression algorithms proposed in the Sen4Stat yield modules: Random Forest, Support Vector Machine, and multiple linear regression. An ongoing evaluation of gradient boosting is being conducted, as it is expected to outperform Random Forest with a smaller calibration set. Leave-one-year-out validation was performed to assess each year individually. Field-level estimations were aggregated to the department scale for comparison with the regional module's estimations. Preliminary results showed that Random Forest outperformed Support Vector Machine and linear regression for estimating yield. Both modules demonstrated good temporal transferability. At the departmental level, aggregated field-level estimations outperformed the regional module’s estimations, highlighting the potential of using large datasets at a small scale to improve sub-national yield statistics. However, the regional module demonstrated that accurate estimations can still be achieved despite the lack of detailed data. For each year, models from both modules explained more than 75% of the yield variability during the season and achieved high scores using only variables computed before senescence, making them suitable for forecasting.
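The smoothing and validation steps described above can be sketched as follows (a minimal illustration: the Savitzky-Golay window and polynomial order are arbitrary choices, and the LAI series and year labels are synthetic, not the study's data):

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic noisy LAI time series for one parcel (e.g. dekadal time steps)
rng = np.random.default_rng(0)
t = np.arange(20)
lai = np.clip(4.0 * np.exp(-0.5 * ((t - 10) / 4.0) ** 2)
              + rng.normal(0, 0.2, t.size), 0, None)
# Savitzky-Golay smoothing: local quadratic fit over a 7-sample window
lai_smooth = savgol_filter(lai, window_length=7, polyorder=2)

def leave_one_year_out(years):
    """Yield (year, train_idx, test_idx) triples, holding out one year at a time."""
    years = np.asarray(years)
    for y in np.unique(years):
        yield y, np.where(years != y)[0], np.where(years == y)[0]

# Hypothetical year label per calibration sample
sample_years = [2017, 2017, 2018, 2019, 2019, 2020, 2021]
folds = list(leave_one_year_out(sample_years))
print(len(folds))  # one fold per year, 2017-2021
```

Each fold would train a regressor (Random Forest, SVM, or linear) on the remaining years and evaluate on the held-out year, which is what makes the temporal-transferability claim testable.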
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall F2)

Presentation: Enhancing Sustainable Agriculture Through Earth Observation: Insights From the CRISP (Consistent Rice Information for Sustainable Policy) Initiative

Authors: Giaime Origgi, Francesco Holecz, Massimo Barbieri, Luca Gatti, Renaud Mathieu, Emma Quicho, Sushree Sagarika Satapathy, Alessandro Marin, Gaetano Pace, Roberto Di
Affiliations: sarmap sa, IRRI-Africa, CGI Italia, IRRI-Philippines
Consistent Rice Information for Sustainable Policy (CRISP) is a two-year ESA-funded project designed to address Indicator SDG 2.4.1, which measures the proportion of agricultural land under productive and sustainable agriculture. CRISP aims to contribute to achieving sustainable food production systems and resilient agricultural practices by 2030. The project collaborates with key Early Adopters, including AfricaRice, GEOGLAM, GIZ, the Syngenta Foundation, IFAD, WFP, and SRP, to ensure its solutions align with diverse stakeholder needs. The initiative focuses on scaling up advanced and cost-effective Earth Observation (EO) solutions to deliver vital information on seasonal rice planted areas, growing conditions, yield forecasts, and production at harvest. CRISP adopts a user-oriented approach, emphasizing the importance of active involvement from Early Adopters. This collaborative process helps users understand the capabilities and limitations of the proposed solutions, reduces the risk of setting unrealistic expectations, and ensures successful endorsement of the services. User requirements, carefully identified during the needs assessment phase, have been translated into algorithms and workflows tailored to address diverse and complex demands. Central to this effort is the adoption of a multi-mission EO strategy, leveraging existing operational rice area-yield services such as RIICE (Remote Sensing-based Information and Insurance for Crops in Emerging Economies). By integrating data from multi-mission EO systems, including Sentinel-1 and Sentinel-2, the solution provides flexibility and adaptability to meet a variety of user demands. Its robustness was tested across five distinct sites in South-East Asia, India, and Africa, demonstrating its ability to cater to the heterogeneous needs of stakeholders in different contexts. 
To validate the proposed solution, CRISP carried out use-case demonstrations across various test sites, each representing different scenarios and challenges. Specifically, CRISP addressed the following:
- Evaluating the impact of drought on yield in the Luzon region (Philippines).
- Generating yield maps and Start-of-Season (SoS) information for irrigated areas in the Senegal River Valley (Senegal).
- Estimating yield loss following a flood event in Andhra Pradesh (India).
- Providing yield maps and SoS information in the largest irrigated area of Mwea (Kenya).
- Estimating yield in rainfed systems in the Kano region (Nigeria).
In its final phase, CRISP focuses on addressing the need for an operational service by integrating its solutions into an end-to-end processing chain deployed on a cloud computing infrastructure. This infrastructure is optimized for scalability and equipped with advanced analytical tools. Additionally, the project includes knowledge transfer activities through the organization of dedicated Living Labs. These labs aim to demonstrate the platform's potential while enabling users to familiarize themselves with the technology and its usability, fostering long-term adoption and impact.
Add to Google Calendar

Tuesday 24 June 11:30 - 13:00 (Hall F2)

Presentation: Early detection of soil salinization by means of EnMAP hyperspectral imagery and laboratory spectroscopy

Authors: Giacomo Lazzeri, Dr. Robert Milewsky, Dr. Saskia Foerster, Prof. Sandro Moretti, Prof. Sabine Chabrillat
Affiliations: University of Florence, Department of Earth Sciences, Helmholtz Center Potsdam GFZ German Research Centre for Geosciences, Umweltbundesamt (UBA) - German Environment Agency, Institute of Soil Science, Leibniz University Hannover
Soil salinization is the build-up of soluble salts in the topsoil, measured as Electrical Conductivity (EC, dS/m). Progressively increasing concentrations of salt in soil lead to decreasing crop productivity and, ultimately, soil sterility. From a global perspective, food production is predicted to increase by 62% by 2050, while soil salinization increased by 16.8% over the period 1986-2016, posing a serious threat to the future of soil health and food production. Salt-affected soils present complex spectral characteristics, with limited absorption features and strong modifications of surface reflectance. The extent and magnitude of these spectral modifications are a function of salt concentration. The available literature shows that successful salinity detection applications rely on very high salt concentrations (9.80 dS/m) to maximize salt spectral evidence and detection capabilities. With EnMAP's deployment, its unprecedented radiometric and spectral characteristics have opened new possibilities for the detection of salt-related spectral modifications. We therefore investigated EnMAP's detection capabilities for low levels of salinization, corresponding to the early stages of the phenomenon. To compare the prediction performance of spaceborne-derived models, we adopted laboratory-derived spectral modelling results as a benchmark. The study area is located in central Italy, in the Tuscany region, province of Grosseto. Extensive agriculture combined with an evapotranspiration-to-precipitation deficit of -400 mm has resulted in overexploitation of the groundwater reservoir. The area's proximity to the coast and the numerous channels allow for seawater intrusion during storm surges, contributing, together with other geological sources, to the total cation and anion budget. We conducted field acquisitions at the apex of the dry season, in September 2023, to maximize the probability of surface salt efflorescence. 
Field samples were collected within 3 days of the EnMAP acquisition, with no rainfall occurrence. Soil samples were processed according to FAO salinity assessment guidelines (FAO. 2021. Standard operating procedure for soil electrical conductivity, soil/water, 1:5. Rome.) and EC was measured. For the same field samples, we acquired laboratory spectra according to the procedure described by Gholizadeh (Gholizadeh, A., Neumann, C., Chabrillat, S., van Wesemael, B., Castaldi, F., Borůvka, L., Sanderman, J., Klement, A., & Hohmann, C. (2021). Soil organic carbon estimation using VNIR–SWIR spectroscopy: The effect of multiple sensors and scanning conditions. Soil and Tillage Research, 211, 105017). Concomitantly, EnMAP image spectra were extracted at the locations of the collected field samples. Both laboratory- and EnMAP-derived spectra were tested to define the best preprocessing and regression algorithm combination for salinization detection. The preprocessing methods tested included Savitzky-Golay filters, continuum removal, PCA, and Norris gap derivatives. The regression models used were PLSR, 2D correlograms, and a hyperparameter-tuned Random Forest Regressor. Model results for laboratory-derived spectra were taken as the reference for maximum model prediction capability, allowing us to assess the satellite-derived model predictions. Among the models tested, the correlogram-derived best band-index combination resulted in an R2 of 0.88 for laboratory data and 0.63 for EnMAP data. PLSR showed the worst performance on both datasets. The Random Forest Regressor proved its capability in detecting complex spectral features, with R2 scores of 0.72 for laboratory data and 0.60 for EnMAP. Considering only the EnMAP-derived spectra, the best correlogram-derived index, when applied to the whole spaceborne image, resulted in poor generalization of the salinity spatial extent and concentration. 
In contrast, the trained Random Forest Regressor, upon deployment on the whole image, was able to capture the spatial variability of the phenomenon, with concentration predictions in accordance with field observations and expert knowledge. Overall, the results attest to the quality of EnMAP data. The similar statistical performance of the models tested validates our hypothesis on the feasibility of spaceborne early detection of topsoil salinization. Looking ahead, the high spatial and temporal variability of salinity requires extensive field sampling efforts to capture the state of the phenomenon. Consequently, validating the predictions against new field evidence is a challenge we will tackle in future research. In addition, the acquisition of new and numerous salt-affected soil spectra could increase the models' prediction and generalization capabilities, potentially allowing a transition from a site-calibrated model to a model capable of generalizing across previously unseen areas.
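The 2D-correlogram idea mentioned above, exhaustively testing band pairs of a normalized-difference index against measured EC, can be sketched with synthetic data (band count, sample size, and coefficients are invented for illustration and do not correspond to EnMAP bands or the study's values):

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples, n_bands = 40, 20
spectra = rng.uniform(0.1, 0.6, (n_samples, n_bands))
# Synthetic EC driven by the normalized difference of bands 3 and 12
ndi_true = (spectra[:, 3] - spectra[:, 12]) / (spectra[:, 3] + spectra[:, 12])
ec = 2.0 + 5.0 * ndi_true + rng.normal(0, 0.05, n_samples)  # dS/m

# 2D correlogram: R^2 of every band-pair normalized-difference index vs EC
best = (None, -1.0)
for i in range(n_bands):
    for j in range(i + 1, n_bands):
        ndi = (spectra[:, i] - spectra[:, j]) / (spectra[:, i] + spectra[:, j])
        r2 = np.corrcoef(ndi, ec)[0, 1] ** 2
        if r2 > best[1]:
            best = ((i, j), r2)
print(best)  # expected to recover the planted pair (3, 12) with a high R^2
```

The winning band pair defines an index that can then be applied per pixel to the full image, which is exactly where the abstract reports the correlogram index generalizing poorly compared with the Random Forest.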
Add to Google Calendar

Tuesday 24 June 11:37 - 11:57 (EO Arena)

Demo: D.04.31 DEMO - NoR Updates and Road Map - session 2

This session will showcase the current status, major updates, and road map of the ESA Network of Resources initiative.
Add to Google Calendar

Tuesday 24 June 12:00 - 12:20 (EO Arena)

Demo: D.04.15 DEMO - Dunia: an all-in-one processing and dissemination platform for EO data over Africa

Dunia stands as a comprehensive platform designed for Africans, welcoming both beginners and experts in Earth observation. Focused exclusively on the African continent, it empowers users to discover, build, and exchange valuable geospatial insights across Africa. Those interested can start with a free trial, diving into a web map browser and an innovative streaming solution that highlights the immense potential of Earth observation data. As users engage with the platform, they find a dynamic development environment where they can create tailored solutions for large-scale data processing. This capability fosters creativity and innovation, allowing individuals to transform data into impactful applications. At the same time, Dunia encourages collaboration within the African Earth observation community, offering a vibrant marketplace for data and applications. Discover, build and exchange yourself at https://dunia.esa.int.
During the session we will dive into all three core elements of Dunia. We will discover streamable datasets, look at example Jupyter notebooks in the Dunia Sandbox, build our own workflows in the Dunia Application Hub, and offer them to the African EO community in the Dunia Marketplace.

Speakers:


  • Johannes Schmid - IT Service and Operations Manager, GeoVille Information Systems and Data Processing GmbH
Add to Google Calendar

Tuesday 24 June 12:22 - 12:42 (EO Arena)

Demo: C.03.21 DEMO - SentiBoard: Your Real-Time Window into Copernicus Operations

This session will showcase the Copernicus Operations Dashboard, an online platform designed to provide real-time visibility into the operational status of the CSC-EOF (Copernicus Space Component - Earth Observation Framework). Known as Sentiboard, the dashboard integrates information from across the data acquisition, processing, and dissemination chain, offering users a unified and intuitive view of mission operations.
Through a guided live demonstration, we will explore the main features and navigation structure of the platform, highlighting how it supports monitoring activities and situational awareness. The session will include an overview of the different sections of the dashboard—such as planned versus actual acquisitions, publication statistics, and dissemination status—and demonstrate how to access mission-specific insights and performance indicators.
The goal is to show how Sentiboard translates complex operational data into accessible and actionable information. Whether you're involved in satellite operations, mission planning, performance analysis, or simply interested in the infrastructure behind Copernicus data delivery, this session will offer a clear and engaging introduction to the tool.
Attendees will leave with a practical understanding of how to:
• Navigate the dashboard efficiently
• Interpret key visual indicators and metrics
• Access up-to-date information about mission activities
Join us to discover how the Copernicus Operations Dashboard enhances transparency and supports informed decision-making across the EO community.

Speakers:


  • Salvatore Tarchini - Serco
  • Daniele Rotella - Serco
  • Alessandra Paciucci - Serco
  • Rosa Fontana - Serco
Add to Google Calendar

Tuesday 24 June 12:45 - 13:05 (EO Arena)

Demo: D.01.15 DEMO - TourismSquare, monitor and anticipate the practicability of tourist activities according to environmental conditions and climate projections

The demonstration session will showcase how TourismSquare integrates satellite Earth observation data to help tourism stakeholders monitor environmental conditions, assess the feasibility of tourist activities, and anticipate climate impacts.
Attendees will explore the user-friendly web interface, which provides key indicators—Human Activity, Air, Biodiversity, Climate, Land, and Water—and a digital-twin approach with simulation capabilities powered by predictive analytics, enabling data-driven tourism planning and supporting territorial management.
The live demonstrations will illustrate how the tool calculates practicability scores for different tourism activities, optimizes seasonal travel planning, and supports strategic decision-making for local authorities and businesses.
The session will conclude with a Q&A segment, offering participants the opportunity to discuss specific use cases and explore how TourismSquare can be tailored to their region’s tourism needs.
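As a purely hypothetical illustration of how an activity practicability score might combine normalized environmental indicators (this is an invented sketch, not TourismSquare's actual scoring method, indicator names, or weights):

```python
# Hypothetical practicability score: a weighted mean of normalized indicator
# values, where 0 means unfavourable conditions and 1 means favourable.
def practicability(indicators: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted mean of indicator values in [0, 1] for one activity."""
    total_w = sum(weights[k] for k in indicators)
    return sum(indicators[k] * weights[k] for k in indicators) / total_w

# Example: a hiking activity that is most sensitive to climate and terrain
hiking = practicability(
    {"air": 0.9, "climate": 0.6, "land": 0.8, "water": 0.7},
    {"air": 1.0, "climate": 2.0, "land": 1.5, "water": 0.5},
)
print(round(hiking, 3))
```

A real system would derive the indicator values from EO time series and climate projections, and expose per-activity weight profiles; the scoring step itself reduces to this kind of aggregation.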

Link to a presentation video: https://youtu.be/sokdaEf2mSE

Speakers:


  • Fabien Castel
Add to Google Calendar

Tuesday 24 June 13:00 - 13:45 (Frontiers Agora)

Session: E.03.05 Shaping the Future of EO: Digital Systems & Disruptive Public-Private Models

This Agorà session invites participants to coalesce around the challenge of designing a future Earth Observation (EO) mission architecture that embraces new space paradigms: fresh perspectives and frameworks that redefine how space exploration, utilization, and governance are approached, together with innovative models that leverage technology, creativity, and collaboration to create a sustainable and accessible space ecosystem.
The goal is to promote a dynamic exchange of ideas to identify the key capabilities and synergies needed to tackle global scientific challenges, environmental sustainability, and cost-effectiveness.
A key theme for the Agorà is the role of digital infrastructure as an enabler of the next-generation EO ecosystem. Ideas around digital twins, cloud-based platforms, and blockchain-driven data traceability will be explored, focusing on their potential to enhance data sharing, transparency, and societal value.
The session will also delve into the transformative potential of Public-Private Partnerships (PPPs) in the realm of EO Missions. By exploring innovative collaboration models, stakeholders can envision how governments and private sector actors might co-create and operationalize EO systems, sharing risks and accelerating the deployment of cutting-edge technologies. Discussions will focus on the mutual benefits of these partnerships, from cost reduction to the rapid adaptation of services to emerging needs.
Through this participatory session, the Agorà seeks to align diverse perspectives, identifying priorities and innovative approaches to realize a collaborative, forward-looking EO architecture that meets both scientific and societal needs.

Moderators:


  • Emmanuel Pajot - EARS

Speakers:


  • Giovanni Sylos Labini - Planetek
  • Dominique Gillieron - ESA
  • Pierre Philippe Mathieu - ESA
  • Francesco Longo - Italian Space Agency (ASI)
  • Maria Santos - University of Zurich

Add to Google Calendar

Tuesday 24 June 13:00 - 13:45 (Nexus Agora)

Session: B.01.01 Amplifying impact through EO integration in international development finance mechanisms

Discover how Earth Observation (EO) technology is inducing and amplifying impact at scale in international development finance mechanisms. For over 15 years, the European Space Agency (ESA) has cultivated strategic partnerships with International Financial Institutions (IFIs) – such as the World Bank and regional development banks – to embed EO as a key contributor in development assistance operations. These collaborations have been bolstered by ESA-funded initiatives, most recently the Global Development Assistance (GDA) programme, which mobilises Europe's collective EO expertise to support global development and climate action – as well as complementary IFI activities, following jointly agreed cooperation principles to align efforts and resources.
Considering the potential of space-based applications to contribute to climate action and sustainable development, this agora will explore experiences and success stories on how integrating EO into financing mechanisms enhances decision-making, drives innovation, and accelerates impact across development activities. Join us to learn how ESA’s and its partners’ efforts are paving the way for scalable, sustainable EO adoption within global development cooperation frameworks.
This agora will highlight impact stories resulting from cooperation activities under ESA’s GDA programme and discuss with partner IFIs how they take ownership of integrating those EO services to inform their operations and transfer them to their client countries. The discussion will focus on the steps required to further foster wide-scale adoption and integration at the country level, in order to maximise socio-economic impact and stimulate growth of local digital economies.

Speakers:


Opening


  • Christoph Aubrecht – ESA, Programme Coordinator Global Development Assistance
  • Rune Floberghagen – ESA, Head of Climate Action, Sustainability and Science Department

Panel


  • Olivier Dupriez - World Bank
  • Eric Quincieu – ADB, Principal Water Resources Specialist
  • Fani Kallianou de Jong - European Bank for Reconstruction and Development
  • Rafael Anta - Interamerican Development Bank 
  • Gladys Morales Guevara - International Fund for Agricultural Development
Add to Google Calendar

Tuesday 24 June 13:00 - 14:30 (ESA Agora)

Session: D.02.14 AI and Earth observation - where to now?

Artificial Intelligence (AI) has entered all areas of society, and Earth Observation (EO) is no exception. The insights contained in multi-sensor (multimodal) data can complement and empower traditional physical models and work in unison towards solutions that are accurate, explainable, and able to enhance scientific discovery. The rise of language models in EO also opens new opportunities for interacting with users and for mining the massive data archive with semantics. In this session, professors invited by the Phi-Lab will discuss the latest advances and engage critically (and provocatively) with the audience in a forward-looking discussion about the future at the interface of AI and remote sensing.

Speakers:


  • Konrad Schindler (ETH, Switzerland) and XiaoXiang Zhu (TUM, Germany) : foundation models
  • Gustau Camps-Valls (Universitat de València, Spain) and Mihai Datcu (University POLITEHNICA of Bucharest, Romania) : interpretable AI and causality
  • Fabio del Frate (Università di Tor Vergata, Italy) and Bertrand Le Saux (DG Connect, European Commission) : physics-driven models
  • Devis Tuia (EPFL, Switzerland), Jan van Rijn (Leiden University, the Netherlands) and Nicolas Longepe (ESA) : user-centric AI
Add to Google Calendar

Tuesday 24 June 13:07 - 13:27 (EO Arena)

Demo: D.03.31 DEMO - SNAP in Action - Various Application Examples throughout the week demonstrating the power of SNAP for EO data visualisation, analysis and processing - session 2

SNAP is the ESA toolbox for visualising, analysing and processing optical and microwave EO data. SNAP supports a large number of current and past satellite sensors as well as generic data formats. SNAP addresses all kinds of users, from early-stage students, through experienced researchers, to production managers responsible for public and commercial EO processing services.

In a series of demonstrations we showcase this breadth of possibilities with various real-life land and water applications. Demonstrations will be repeated multiple times to allow as many participants as possible to join a specific demonstration. We will tailor the daily programme from a set of prepared demonstrations according to the themes of the day, and to user needs expressed during the conference.

The following list gives a glimpse of the demonstrations from which we can select:
1. Sentinel-1 ETAD processing with SNAP
2. Change Detection Monitoring
3. Supporting new SAR missions with SNAP
4. “Live” fire evolution in Los Angeles using Sentinel-2 imagery
5. Burned Areas Detection – Mehedinti, Romania
6. Monitoring Drought Evolution – Dobrogea, Romania
7. Water Quality in urban areas at the example of the city of Hamburg
8. Interpreting Hyperspectral Data for coastal habitat mapping

Speakers:


  • Diana Harosa - CS Romania
  • Cosmin Cara - CS Romania
Add to Google Calendar

Tuesday 24 June 13:30 - 13:50 (EO Arena)

Demo: D.04.24 DEMO - Streamlining Snow monitoring with openEO and CDSE

Snow monitoring plays a crucial role in water resource management. The increasing availability of remote sensing data offers significant advantages but also introduces challenges related to data accessibility, processing, and storage. For operational use, scalable workflows are essential to ensure global applicability.
Leveraging a cloud-based platform such as the Copernicus Data Space Ecosystem (CDSE) enables efficient data processing directly where the data are stored, without data download. Our workflows are built using the openEO API, which provides a standardized interface for accessing and processing large Earth observation datasets worldwide.
In this demonstration, we will showcase key applications for snow monitoring. Specifically, we will explore snow and ice cover classification, snow cover fraction downscaling, wet snow detection, and snow albedo estimation. The session will illustrate how different sensors and methodologies can be leveraged to achieve reliable outputs while demonstrating the power and scalability of cloud computing platforms. A particular focus will be placed on how our workflow leverages cloud scalability to reconstruct long-term time series at high spatial resolution—crucial for monitoring snow over large areas and extended periods.
This demo is suited for researchers, practitioners, and decision-makers interested in snow monitoring, as well as those looking to integrate openEO-based workflows into their environmental data processing pipelines. Participants will gain insights into how cloud-based infrastructures streamline large-scale Earth observation analysis.
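As an offline illustration of the snow-cover classification step, the widely used NDSI threshold test can be sketched with numpy (the 0.4 threshold is a common heuristic, not necessarily the rule used in these workflows; in an openEO job the same band arithmetic would run server-side on CDSE):

```python
import numpy as np

def ndsi(green, swir):
    """Normalized Difference Snow Index from Sentinel-2 B03 (green) and B11 (SWIR)."""
    return (green - swir) / (green + swir)

def snow_mask(green, swir, threshold=0.4):
    """Classify pixels as snow where NDSI exceeds the threshold (0.4 is a common default)."""
    return ndsi(green, swir) > threshold

# Hypothetical reflectances: bright snow, bare soil, and a mixed pixel
green = np.array([0.80, 0.30, 0.60])
swir = np.array([0.10, 0.25, 0.30])
print(snow_mask(green, swir))  # only the first pixel is classified as snow
```

Expressed as an openEO process graph, this becomes a `reduce_dimension`/band-math step on a Sentinel-2 data cube, which is what lets the workflow scale to long time series over large areas without downloading the data.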

Valentina Premier1, Riccardo Barella1, Stefaan Lippens2, Emile Sonneveld2, Carlo Marin1, Michele Claus1, Alexander Jacob1, Jeroen Dries2
1Eurac research, Institute for Earth Observation, Bolzano (Italy)
2VITO Remote Sensing, Mol (Belgium)

Speakers:


  • Valentina Premier - EURAC
  • Riccardo Barella - EURAC
Add to Google Calendar

Tuesday 24 June 13:52 - 14:12 (EO Arena)

Demo: D.04.17 DEMO - Interactively visualise your project results in Copernicus Browser in no time

In this demo, we will demonstrate how to interactively visualize and explore your project results using Copernicus Browser. Copernicus Browser is a frontend application within the Copernicus Data Space Ecosystem, designed to explore, visualize, analyze, and download Earth Observation data.

We will guide you through the necessary steps to prepare your data for ingestion, introduce various services within the Ecosystem, including one that supports data ingestion (the Bring Your Own COG API), and show you how to configure your data for interactive visualization. This includes setting up a configuration file, writing an Evalscript, and creating a legend.

Finally, we will demonstrate how to visualize and analyze results within Copernicus Browser.

Speakers:


  • Daniel Thiex - Sinergise
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.11/0.12)

Session: A.01.08 Planetary Boundary Layer from Space

The planetary boundary layer (PBL) plays an essential role in weather and climate, which are critical to human activities. While much information about the temperature and water vapor structure of the atmosphere above the PBL is available from space observations, EO satellites have been less successful in accurately observing PBL temperature and water vapor profiles and in constraining PBL modelling and data assimilation. Improved PBL models and parameterizations would lead to significantly better weather and climate prediction, with large societal benefits.

In the latest US National Academies’ Earth Science Decadal Survey, the PBL was recommended as an incubation targeted observable. In 2021, the NASA PBL Incubation Study Team published a report highlighting the need for a global PBL observing system with a PBL space mission at its core. To solve several of the critical weather and climate PBL science challenges, there is an urgent need for high-resolution and more accurate global observations of PBL water vapor and temperature profiles, and PBL height. These observations are not yet available from space but are within our grasp in the next decade. This can be achieved by investing in optimal combinations of different approaches and technologies. This session welcomes presentations focused on the PBL, from the observational, modeling and data assimilation perspectives. In particular, this session welcomes presentations focused on future EO PBL remote sensing missions and concepts, diverse observational approaches (e.g., active sensing, constellation of passive sensors, hyperspectral measurements, high-altitude pseudo satellites) and potential combinations of techniques to optimally depict the 3D structure of PBL temperature and water vapor.

Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.11/0.12)

Presentation: Synergistic Use of Satellite Data at EUMETSAT for improved Planetary Boundary Layer Detection

Authors: Axel Von Engeln
Affiliations: EUMETSAT
EUMETSAT has been operating satellites since 1977, initially only in geostationary orbit and, with the addition of the EUMETSAT Polar System (EPS) in 2006, also in low Earth orbit. The EPS satellites in particular provide a rich data source, with several instruments providing collocated measurements over a swath of up to 2900 km. The EPS instruments cover cloud information (e.g., the AVHRR-3 instrument), temperature and water vapour profile information at microwave, infrared, and GPS frequencies (e.g., the IASI, MHS, AMSU-A, and GRAS instruments), as well as several instruments operating in the UV, visible, and near infrared (e.g., the GOME-2 and AVHRR-3 instruments). This dataset spans more than 15 years; for several of those years, 2-3 EPS satellites were in orbit. Regular reprocessing activities ensure that earlier datasets are consistently processed with the latest available processor, so long-term data assessments are possible. Following the identification of the Planetary Boundary Layer (PBL) as an incubation targeted observable in the latest US Decadal Survey, EUMETSAT has started to assess its data records and has identified possible synergistic uses for improved PBL detection. These include the combination of microwave, infrared, and radio occultation instruments for temperature and water vapour profiling (where AVHRR-3 provides cloud coverage information), as well as the use of radio occultations for PBL height detection (working together with the Radio Occultation Meteorology Satellite Application Facility, ROM SAF). Additionally, the use of the UV, visible, and near-infrared instrument fleet will be assessed to provide aerosol, cloud, total column water vapour, and other information. 
The presentation will give an overview of the EUMETSAT data sets that can already be exploited for PBL retrievals, present selected results obtained from single instruments, provide an overview of the planned next steps, and also discuss use of the next-generation satellites in geostationary (MTG-S) and polar (EPS-SG) orbit.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.11/0.12)

Presentation: Profiling the Planetary Atmospheric Boundary Layer From Space: the Perspective of “Space It Up!”

Authors: Domenico Cimini, Maria Pia De Natale, Francesco Di Paola, Donatello Gallucci, Sabrina Gentile, Edoardo Geraldi, Salvatore Larosa, Saverio T. Nilo, Elisabetta Ricciardelli, Filomena Romano, Mariassunta Viggiano, Dr Thomas August, Axel Von Engeln
Affiliations: CNR-IMAA, ESA, EUMETSAT
"Space It Up!" is a multidisciplinary project funded by the Italian Ministry of University and Research (MUR) and the Italian Space Agency (ASI), aiming at developing space-borne solutions with breakthrough potential, ranging from Earth observations (EO) to human extraterrestrial exploration. "Space It Up!" is organised in thematic Spokes, among which Spoke 7 ("Space for the sustainable development of the planet") aims to increase the technology readiness level of EO solutions to improve current capabilities in process observation and prediction and fostering the achievement of heterogeneous sustainable development goals (SDGs). Activities have started in August 2024 and will last for three years. The importance of Planetary Atmospheric Boundary Layer (PABL) profiling is recognized for several SDGs and societal needs, such as climate action, severe weather hazards, renewable energy, air pollution, and food production. ABL profiling from ground-based remote sensing has increased in the last decade, leading to the establishment of international networks, such as the European EPROFILE program (Rüfenacht et al., 2021). However, ABL profiling from ground is limited to instrumented sites and lacks global coverage. In this framework, a review of the current technologies available for PABL profiling from space is being performed, looking at advantages, limitations, and future perspectives. In particular, a study is being performed to investigate the potential of combined microwave and infrared (MW-IR) satellite observations to detect PABL height and retrieve PABL temperature and humidity profiles. A machine learning approach is applied to simulated observations from MW-IR sensors on current EUMETSAT Polar System (EPS) MetOp series, namely the Advanced Microwave Sounding (AMSU), the Microwave Humidity Sounder (MHS), and the Infrared Atmospheric Sounding Interferometer (IASI). 
In addition, the same approach is applied to instruments to be launched on the EPS Second Generation (EPS-SG) series, i.e., the Microwave Sounder (MWS) and IASI Next Generation (IASI-NG), which provide enhanced spatial and spectral resolution, as well as lower instrumental noise, with respect to their predecessors, offering increased potential for PABL profiling. This presentation will provide the perspective of "Space It Up!" on PABL profiling from space. The opportunity to validate the space-borne retrievals against nearly continuous ground-based observations will also be discussed, presenting the available products from the E-PROFILE networks of hundreds of ceilometers and Doppler wind lidars (delivering PABL height) and tens of microwave radiometers (delivering PABL temperature and humidity profiles) in Europe. Rüfenacht, R., Haefele, A., Pospichal, B., Cimini, D., Bircher-Adrot, S., Turp, M., Sugier, J.: EUMETNET opens to microwave radiometers for operational thermodynamical profiling in Europe. Bull. of Atmos. Sci. & Technol. 2, 4, https://doi.org/10.1007/s42865-021-00033-w, 2021.
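The profile-retrieval step described above can be illustrated with a toy sketch: learn a mapping from simulated brightness temperatures to atmospheric profiles, then apply it to new observations. Everything below is synthetic and invented for illustration (channel counts, the `forward` operator, the noise level), and a closed-form ridge regression stands in for the machine learning approach the abstract refers to without specifying.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for simulated MW-IR observations: 500 scenes,
# 20 "channels" of brightness temperature, linearly related to a
# 10-level temperature profile plus instrument noise.
n_scenes, n_channels, n_levels = 500, 20, 10
true_profiles = 280 + 10 * rng.standard_normal((n_scenes, n_levels))
forward = rng.standard_normal((n_levels, n_channels))  # toy forward operator
obs = true_profiles @ forward + 0.5 * rng.standard_normal((n_scenes, n_channels))

# Ridge-regression retrieval (closed form): solve (X'X + lam*I) W = X'Y
# on centered data, a minimal stand-in for a trained ML regressor.
lam = 1.0
X = obs - obs.mean(axis=0)
Y = true_profiles - true_profiles.mean(axis=0)
W = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ Y)

retrieved = X @ W + true_profiles.mean(axis=0)
rmse = np.sqrt(np.mean((retrieved - true_profiles) ** 2))
print(f"retrieval RMSE: {rmse:.2f} K")
```

Because the toy problem is linear and over-determined (20 channels for 10 levels), the retrieval error stays close to the noise floor; real MW-IR retrievals are nonlinear and far harder.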
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.11/0.12)

Presentation: Hyperspectral PBL Thermodynamic Structure Observations from Photonic Integrated Circuit Microwave Radiometers

Authors: Patrick Stegmann, Narges Shahroudi, Alexander Kotsakis, Stephen Nicholls, Fabrizio Gambini, Antonia Gambacorta
Affiliations: NASA GSFC, UMD ESSIC, SSAI, UMBC
Remote sensing of the full three-dimensional (3D) thermodynamic structure of the Planetary Boundary Layer (PBL) segment of the terrestrial atmosphere on the basis of passive satellite-based radiometers remains a significant challenge for the scientific community. The 2017 NASA Decadal Survey (DS, NASEM, 2018a) and the NASA PBL Incubation Study Team Report (STR) (Teixeira et al., 2021) have identified retrievals of PBL 3D thermodynamic structure with enhanced horizontal and vertical resolution, and of PBL height with enhanced fidelity, as cornerstones for future advances in Earth System Science (ESS). Passive hyperspectral microwave (MW) radiometers offer a solution for 3D PBL thermodynamics retrievals under all-sky conditions, i.e. not only for a clear-sky atmospheric state without clouds or aerosols present in the scene. However, conventional technologies do not permit the construction of such instruments that are both compact enough for deployment in orbit and have a sufficient signal-to-noise ratio. Photonic Integrated Circuit technology employed at NASA Goddard Space Flight Center (GSFC) offers a means to construct such instruments while simultaneously reaching an unprecedented Size, Weight, Power and Cost (SWaPC) minimum for a potential smallsat deployment. Hyperspectral MW instruments currently in testing and development at NASA GSFC are CoSMIR-H, HyMPI, and AURORA Pathfinder. The first CoSMIR-H observation data were recently collected during the WH2yMSIE field campaign over the US West Coast and Rocky Mountains. At the same time, algorithmic infrastructure to process these hyperspectral MW observations and retrieve PBL thermodynamic profiles is under development, based on the Community Radiative Transfer Model (CRTM). Following this approach ensures operational readiness of the instrument data pipeline and compatibility with the infrastructure of NASA partners, such as NOAA.
The CRTM is integrated into the Microwave Integrated Retrieval System (MIRS), used operationally at NOAA for retrievals from conventional MW instruments such as ATMS. However, several sources of uncertainty remain in the interpretation of the PBL observations, and the sheer volume of hyperspectral data will pose a significant challenge for any conventional operational retrieval algorithm.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.11/0.12)

Presentation: Profiling Arctic Tropospheric Water Vapor Using the Differential Absorption G-band Radar GRaWAC

Authors: Sabrina Schnitt, Mario Mech, Jens Goliasch, Thomas Rose, Linnu Bühler, Nils Risse, Susanne Crewell
Affiliations: Institute for Geophysics and Meteorology, University of Cologne, Radiometer Physics GmbH
Low-tropospheric water vapor is a central component of multiple feedback processes known to contribute to amplified warming in the Arctic. Continuous, highly resolved, all-weather profiling observations are key to advancing the understanding of the Arctic water cycle in a rapidly changing Arctic climate and to improving the representation of PBL mixed-phase clouds in modeling. However, current state-of-the-art measurement techniques are limited by the occurrence of clouds, precipitation, and polar night, or lack the needed temporal or vertical resolution. The newly emerging Differential Absorption Radar (DAR) technique can overcome some of these challenges, as in-cloud water vapor profiles can be derived continuously. We illustrate the advantages of this novel technique for the Arctic PBL based on recent measurements obtained with the unique G-band Radar for Water vapor and Arctic Clouds (GRaWAC). GRaWAC is a Doppler-capable, FMCW G-band radar with simultaneous dual-frequency operation at 167 and 175 GHz. Our recent measurement suite includes observations from the AWIPEV station in Ny-Alesund, from the central Arctic aboard an RV Polarstern cruise, and along the Norwegian coast aboard AWI's Polar-6 research aircraft. We apply the DAR technique to our measurements to derive temporally continuous in-cloud profiles in cloudy and precipitating conditions. When deployed from aircraft, we additionally retrieve the column amount in clear-air conditions. We investigate the advantages and limitations of water vapor profiles derived from the stand-alone DAR technique, including cloud properties, retrieval resolution, and accuracy. Additionally, we illustrate alternative water vapor retrieval methods that make use of the synergy with passive microwave radiometer or conventional cloud radar measurements.
By embedding GRaWAC measurements in a multi-frequency cloud radar synergy, we find fingerprints of precipitation-forming processes, and highlight the potential of our measurements for future model evaluation studies.
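The differential-absorption idea behind the DAR technique can be sketched numerically: because the channel closer to the 183 GHz water-vapour line is attenuated more strongly than the channel further from it, the range derivative of the log ratio of the two reflectivity profiles isolates the vapour density between range gates, while the (unknown) intrinsic reflectivity cancels. The absorption coefficients, vapour profile, and gate spacing below are invented for illustration and are not GRaWAC calibration values.

```python
import numpy as np

# Range gates and a synthetic "true" water vapour density profile.
ranges = np.arange(0.0, 3000.0, 100.0)           # gate positions [m]
rho_v = 4e-3 * np.exp(-ranges / 2000.0)          # vapour density [kg/m^3], toy
kappa_on, kappa_off = 0.06, 0.02                 # mass absorption [m^2/kg], assumed

# Two-way attenuated reflectivities; the flat intrinsic reflectivity Z0
# cancels when the ratio of the two channels is taken.
dr = np.diff(ranges)
tau_on = 2 * np.cumsum(kappa_on * rho_v[:-1] * dr)    # optical depth, on-line
tau_off = 2 * np.cumsum(kappa_off * rho_v[:-1] * dr)  # optical depth, off-line
Z0 = 10.0
Z_on = Z0 * np.exp(-tau_on)
Z_off = Z0 * np.exp(-tau_off)

# DAR retrieval: gate-to-gate derivative of the log reflectivity ratio,
# scaled by the differential absorption coefficient.
log_ratio = np.log(Z_off / Z_on)
rho_retrieved = np.diff(log_ratio) / (2 * (kappa_on - kappa_off) * dr[1:])

print(rho_retrieved[:3])
```

In this noise-free sketch the retrieval reproduces the input profile exactly; with real radar data, noise in the reflectivity ratio and the validity range of the gas-absorption model limit the achievable resolution and accuracy, as discussed in the abstract.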
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.11/0.12)

Presentation: Daytime convective development over land: The role of surface forcing

Authors: Wojciech Grabowski
Affiliations: NCAR
Water availability at the Earth's surface determines the partitioning of the surface heat flux into its sensible and latent components, that is, the surface heat flux Bowen ratio. The two components affect the surface buoyancy flux differently and thus the development and growth of the convective boundary layer. As a result, the Bowen ratio has a critical impact on daytime dry and moist convection development over land. We use two canonical modeling test cases, one for shallow convection and one for the shallow-to-deep convection transition, to document the impact of the surface Bowen ratio on daytime convection development. A simple approach is used in which results from simulations featuring the original setup are contrasted with simulations in which the surface sensible heat flux takes on the values of the latent heat flux and vice versa. Such a change illustrates the key impact of surface water availability without changing the total surface heat flux, which itself affects the convective boundary layer development. Because of the larger surface buoyancy flux, simulations with the reversed surface heat fluxes feature faster deepening of the convective boundary layer and wider clouds once moist convection develops. The mean cloud-base width of cumulus clouds increases as the boundary layer deepens. A simple explanation is provided of why a deeper well-mixed convective subcloud layer results in wider clouds. The key is the larger width of boundary-layer coherent updraft structures when the convective subcloud layer is deeper. We also document an important role of the lower-tropospheric horizontal flow, which affects the cloud-base width of convective clouds. This is because of the contrasting organization of boundary layer eddies in purely convective versus mixed convective/shear-driven boundary layers.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.94/0.95)

Session: B.03.06 Climate, Environment, and Human Health - PART 3

It is well-known that many communicable and non-communicable diseases have a seasonal component. For example, flu and the common cold tend to increase in autumn and winter, whilst vector-borne diseases like Dengue and West Nile Virus tend to peak in late summer, when the vectors are at their most abundant. Under monsoon regimes, many diseases peak during the rainy season. Hay fever, spring-time allergies and other respiratory disorders also have a seasonality related to the abundance of pollens and other allergens in the air. Environmental conditions in water, air and land have a role in regulating the presence and abundance of pathogenic organisms or material in the environment, as well as of the agents of disease transmission, like mosquitoes or birds. For example, air temperature and relative humidity are linked to flu outbreaks. Water quality in coastal and inland water bodies impacts outbreaks of many water-borne diseases, such as cholera and other diarrheal diseases, associated with pathogenic bacteria that occur in water. Superimposed on this seasonality are inter-annual variabilities that are difficult to predict. Furthermore, in the event of natural disasters such as floods or droughts, there are often dramatic increases in environmentally-linked diseases, related to the breakdown of infrastructure and sanitation conditions.

Climate change has exacerbated issues related to human health, with shifting patterns in environmental conditions, changes in the frequency and magnitude of extreme events, such as marine heat waves and flooding, and impacts on water quality. Such changes have also led to geographic shifts of vector-borne diseases, as vectors move into areas that become more suitable for them as these warm, or retreat from areas that become too hot in the summer. The length of the seasons during which diseases may occur can also change as winters become shorter. There are growing reports of the incidence of tropical diseases at higher latitudes as environmental conditions become favourable for the survival and growth of pathogenic organisms.

Climate science has long recognised the need for monitoring Essential Climate Variables (ECVs) in a consistent and sustained manner at the global scale and with high spatial and temporal resolution. Earth observation via satellites has an important role to play in creating long-term time series of satellite-based ECVs over land, ocean, atmosphere and the cryosphere, as demonstrated, for example, through the Climate Change Initiative of the European Space Agency. However, the applications of satellite data for investigating shifting patterns in environmentally-related diseases remain under-exploited. This session is open to contributions on all aspects of investigation into the links between climate and human health, including but not limited to, trends in changing patterns of disease outbreaks associated with climate change; use of artificial intelligence and big data to understand disease outbreaks and spreading; integration of satellite data with epidemiological data to understand disease patterns and outbreaks; and models for predicting and mapping health risks.

This session will also address critical research gaps in the use of Earth Observation (EO) data to study health impacts, recognizing the importance of integrating diverse data sources, ensuring equitable representation of various populations, expanding geographic scope, improving air pollution monitoring, and understanding gaps in healthcare delivery. By addressing these gaps, we aim to enhance the utility of EO data in promoting health equity and improving health outcomes globally.

The United Nations (UN) defines climate change as the long-term shift in average temperatures and weather patterns caused by natural and anthropogenic processes. Since the 1800s, human activities have been the main cause of climate change, mainly through the release of carbon dioxide and other greenhouse gases into the atmosphere. The United Nations Framework Convention on Climate Change (UNFCCC) is leading international efforts to combat climate change and limit global warming to well below 2 degrees Celsius above pre-industrial levels (1850–1900), as set out in the Paris Agreement. To achieve this objective and to make decisions on climate change mitigation and adaptation, the UNFCCC requires systematic observations of the climate system.

The Intergovernmental Panel on Climate Change (IPCC) was established by the United Nations Environment Programme (UNEP) and the World Meteorological Organization (WMO) in 1988 to provide an objective source of scientific information about climate change. The Synthesis Report, the final part of the IPCC's Sixth Assessment Report (AR6), released in early 2023, stated that human activities have unequivocally caused global warming, with global surface temperature reaching 1.1°C above pre-industrial levels in 2011–2020. Additionally, AR6 described Earth Observation (EO) satellite measurement techniques as relevant Earth system observation sources for climate assessments, since they now provide long time series of climate records. Monitoring climate from space is a powerful role for EO satellites, since they collect global, time-series information on important climate components. Essential Climate Variables (ECVs) are key parameters that characterise the state of the Earth's climate. The measurement of ECVs provides empirical evidence of the evolution of the climate; therefore, they can be used to guide mitigation and adaptation measures, to assess risks, and to enable attribution of climate events to underlying causes.

An example of an immediate and direct impact of climate change is on human exposure to high outdoor temperatures, which is associated with morbidity and an increased risk of premature death. The World Health Organisation (WHO) reports that between 2030 and 2050, climate change is expected to cause approximately 250,000 additional deaths per year from malnutrition, malaria, diarrhoea and heat stress alone. WHO data also show that almost all of the global population (99%) breathe air that exceeds WHO guideline limits. Air quality is closely linked to the Earth's climate and ecosystems globally; therefore, if no adaptation occurs, climate change and air pollution combined will exacerbate the health burden at an increasing pace in the coming decades.
Therefore, this LPS25 session will include presentations that demonstrate how EO satellite insights can support current climate actions and guide the design of climate adaptation and mitigation policies to protect and ensure the health of people, animals, and ecosystems on Earth (e.g., WHO's One Health approach).
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.94/0.95)

Presentation: Integrating Hydrological Simulations and High-resolution Water Quality Parameters to Characterize the Influence of River Plumes on Aquaculture Sites in the Coastal Waters of Abruzzo Region, Italy

Authors: Carla Ippoliti, Susanna Tora, Federico Filipponi, Romolo Salini, Barbara Tomassetti, Federica Di Giacinto, Alessio Di Lorenzo, Annalina Lombardi, Carla Giansante, Annamaria Conte
Affiliations: Istituto Zooprofilattico Sperimentale "G. Caporale" - Teramo, National Research Council (CNR), Center of Excellence in Telesensing of Environment and Model Prediction of Severe Events (CETEMPS), University of L’Aquila
River plumes result from the dispersion of sediments, nutrients, and other materials carried by rivers into the sea; they can be identified by their distinct salinity, temperature, turbidity, and optical properties, which are measurable via Satellite Earth Observation (SEO). These plumes can contain high concentrations of nutrients for phytoplankton (nitrogenous and phosphorous substances), but also organic discharges, and can therefore influence the productivity and health of aquaculture systems. Monitoring the spatial distribution of suspended solids and the concentration of phytoplankton near aquaculture sites, through the estimation of the chlorophyll-a parameter, is essential to assess the growth potential of molluscs and to mitigate potential health risks. Such knowledge can guide the development of targeted prevention measures and adaptive management strategies, ensuring the sustainability and resilience of aquaculture operations in coastal regions under climate change conditions. In addition, it can also contribute to identifying the most suitable production areas. Notably, aquaculture is becoming increasingly important as a source of high-quality food, and hence for supporting and maintaining a healthy population. In this study, we integrated river discharge data, modelled through the Cetemps Hydrological Model (CHyM) for the main rivers in the Abruzzo region, with high-resolution water quality parameters estimated from Copernicus Sentinel-2 imagery over the coastal area of the central Adriatic Sea. This approach allowed us to characterize, at 10 m spatial resolution, the distribution of turbidity and chlorophyll-a concentrations reaching mollusc farms located along the coast in the period 2016-2024.
To improve the accuracy of satellite-based mapping of parameters estimated from water inherent optical properties, the Case-2 Regional Coast Colour (C2RCC) algorithm was regionally calibrated using a set of in situ acquisitions along the Abruzzo coast, collected during 12 boat campaigns (2019-2020) at 20 sampling points distributed between the Pescara river mouth and a mussel farm. We generated time series of turbidity and chlorophyll-a concentration at 10 m spatial resolution, using all available cloud-free Sentinel-2 MSI satellite acquisitions in the period 01 July 2016 - 31 December 2024. The CHyM model estimates hourly discharge rates (m³/h) at the estuaries of Abruzzo's main rivers. These estimates are obtained through hydrological simulations forced by high-resolution rain gauge and temperature observations. The fine spatial and temporal resolution of these observations allows for a realistic representation of precipitation patterns and their impact on river flow. For each aquaculture site, the distributions of the two parameters were summarised to highlight anomalies. These anomalies were identified across the time series and correlated with river discharge quantities. Typically, according to the in situ data, turbidity decreases and salinity increases when moving away from the coast. This general trend, expected when the fresher and colder waters of the river mix with the saltier and warmer marine waters, is well captured by SEO imagery: turbidity values in the coastal waters (0-3 nautical miles, NM) of the Abruzzo region have a mean of 4.48 FNU and a standard deviation of 1.58. The chlorophyll-a mean is 0.20, with a standard deviation of 0.04, within 0-3 NM of the coast, indicating oligotrophic waters. On specific dates, river discharge significantly influenced turbidity and chlorophyll-a distribution, as detected through combined analysis of Sentinel-2 and CHyM outputs.
This integration of hydrological and SEO data highlights the value of multi-source approaches for monitoring coastal ecosystems. This approach would help identify and quantify deviations during extreme events, when the plume may extend further or shift direction, posing potential risks to aquaculture sites and mollusc production.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.94/0.95)

Presentation: Earth Observation Insights on Climate-Induced Shifts in Culicoides imicola Distribution: A Vector-Borne Disease Perspective in Europe and the Mediterranean

Authors: Lara Savini, Annamaria Conte, Maria Goffredo, Tommaso Orusa, Michela Quaglia, Miguel Ángel Miranda, Thomas Balenghien, Luca Candeloro
Affiliations: Istituto Zooprofilattico Sperimentale "G. Caporale" - Teramo, Applied Zoology and Animal Conservation research group, University of the Balearic Islands, Palma, CIRAD, UMR ASTRE, F-34398 Montpellier
Vector-borne diseases (VBDs) represent an increasing global health threat, with climate change significantly altering the habitats of key insect vector species. Among these, the biting midge Culicoides imicola stands out as a major field transmitter of several viral diseases affecting livestock, such as Bluetongue, Epizootic Hemorrhagic Disease, and African Horse Sickness, and has been involved in Schmallenberg virus transmission. Climatic factors are crucial in driving the global distribution of C. imicola, which is expected to shift significantly under climate change scenarios. This study focuses on modeling the climatic and environmental suitability of C. imicola as part of a broader effort within the Horizon Europe WiLiMan-ID project, which aims to integrate pathogen, host, and climatic-environmental data to address high-priority animal diseases, contributing to global food security, economic stability, and public health. We modeled the climatic and environmental suitability of C. imicola across Europe and the Mediterranean Basin, producing publicly accessible raster datasets with a spatio-temporal resolution of 1 km and an 8-day interval. These datasets span more than six decades, covering the period from 1960 to the present. Our approach integrates Earth Observation (EO) data with machine learning (ML) techniques. Livestock density data, derived from FAO's global livestock distribution maps, were combined with key climatic-environmental predictors (temperature, precipitation, vegetation indices, soil moisture, solar radiation, surface vapor pressure deficit, and wind speed) identified through an extensive literature review and sourced from reliable EO datasets such as MODIS, ERA5, CHIRPS, and VIIRS. The 8-day temporal resolution captured critical seasonal dynamics influencing C. imicola suitability, allowing us to identify favorable periods for vector occurrence.
Presence-absence data were sourced from the Italian entomological surveillance plan (2000–present) and enriched with records from other countries within the study area. ML algorithms were trained over the past two decades of observations to predict the probability of C. imicola occurrence, and predictions were extrapolated backward to assess changes since 1960. This framework enabled the analysis of long-term trends and the evaluation of climate change impacts on the vector’s distribution. Preliminary results highlight a significant expansion in C. imicola climatic and environmental suitability over time, accompanied by an extended seasonal activity window. These trends demonstrate the strong correlation between climate change and vector adaptation, with critical implications for VBD transmission in both newly colonized and historically affected areas. In these established areas, climate change extends the favorable conditions for vector survival and reproduction over longer periods of the year, leading to significantly larger vector populations and an elevated risk of disease transmission throughout an expanded timeframe when conditions are favorable for virus replication. The open-access raster datasets generated in this study provide resources for epidemiological modeling and proactive vector surveillance. By elucidating the interplay between climate dynamics and vector ecology, this work supports the development of targeted control strategies and policies to mitigate the impact of emerging VBD threats in a changing world.
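The presence/absence modelling step described above can be sketched with a minimal stand-in: a logistic regression trained by gradient descent on synthetic "EO covariates", producing an occurrence probability per location. The features, weights, and suitability rule below are all invented for illustration; the study's actual ML algorithms are not specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy presence/absence data: three standardized predictors mimicking
# EO covariates (e.g., temperature, NDVI, soil moisture); the "true"
# suitability rule is synthetic and purely illustrative.
n = 1000
X = rng.standard_normal((n, 3))                      # [temp, ndvi, soil_moisture]
logit = 1.5 * X[:, 0] + 1.0 * X[:, 1] - 0.5 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Logistic regression via full-batch gradient descent, a minimal
# stand-in for the ML algorithms used in the study.
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))          # predicted occurrence probability
    w -= 0.1 * X.T @ (p - y) / n          # gradient step on log-loss

prob = 1 / (1 + np.exp(-X @ w))
accuracy = np.mean((prob > 0.5) == (y == 1))
print(f"training accuracy: {accuracy:.2f}")
```

Once fitted to recent observations, such a model can be applied to covariates from earlier periods, which is the mechanism behind the backward extrapolation to 1960 mentioned above.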
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.94/0.95)

Presentation: The impact of extreme weather on the spread of water-associated diseases in a tropical wetland region

Authors: Gemma Kulk, Dr Anas Abdulaziz, Dr Shubha Sathyendranath, Dr Nandini Menon, Ranith Rajamohananpillai, Jasmin Chekidhenkuzhiyik, Grinson George
Affiliations: Earth Observation Science and Applications, Plymouth Marine Laboratory, National Centre of Earth Observation, Plymouth Marine Laboratory, CSIR-National Institute of Oceanography, Nansen Environmental Research Centre India, ICAR-Central Marine Fisheries Research Institute
Water is an essential natural resource, but increasingly water also forms a threat to the human population. Global warming, shifts in precipitation patterns and extreme weather conditions lead to water stress, including natural disasters such as floods or droughts that can cause severe damage to the environment, property and human life. A less studied aspect of such events is the impact on human health through water-associated diseases and on wellbeing through mental health problems. Action to reduce the risk is urgently needed, with more frequent floods and droughts already leading to climate refugees. Earth Observation has the potential for developing cost-effective methods to monitor risks to human health from water stress, with free and open data available at the global scale. In this study, we present the application of remote sensing observations to map flooded areas, using the tropical Vembanad-Kol-Wetland System in the southwest of India as a case study. In August 2018, this region experienced an extremely heavy monsoon season, which caused once-in-a-century floods that led to nearly 500 deaths and the displacement of over a million people. We investigate the use of different satellite sensors to increase temporal coverage of flood maps and combine this information with field measurements of human pathogens, such as Vibrio cholerae, Escherichia coli and Leptospira, and information on disease outbreaks to further study the contamination of natural water bodies during the course of the year in 2018. Further analysis of the satellite data record from 2016 to 2024 showed increased flood risk in the region surrounding Lake Vembanad during this period, with potential consequences for the spread of water-associated diseases and impact on human health. 
The results indicate the need to improve sewage treatment facilities and city planning in flood-prone areas, to avoid the mixing of septic sewage with natural waters during extreme climate events, and to address water stagnation and waterlogging even during the normal monsoon season.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.94/0.95)

Presentation: Earth Observation for risk-based Vector-Borne Disease surveillance under a changing climate

Authors: Luca Candeloro, Carla Ippoliti, Laura Amato, Francesco Valentini, Susanna Tora, Valentina Zenobio, Chadia Wannous, Rachid Bouguedour, Paolo Calistri, Annamaria Conte, Alessandro Ripani
Affiliations: Istituto Zooprofilattico Sperimentale "G. Caporale", World Organisation for Animal Health, Sub-Regional Representation for North Africa, World Organisation for Animal Health
Climate change is increasingly reshaping environmental conditions, profoundly impacting the seasonality, distribution, and frequency of vector-borne diseases (VBDs) and their associated vector species. Leveraging Earth Observation (EO) data to map climate, water, and landscape features offers invaluable insights for developing epidemiological models and tools to support targeted surveillance and enable Early Warning Systems for VBDs. In this context, the World Organization for Animal Health (WOAH) funded the PROVNA project, an initiative to support North African Veterinary Services by creating an innovative tool to optimize the surveillance and control of vector-borne, climate-sensitive diseases. The study area, encompassing Algeria, Egypt, Libya, Mauritania, Morocco, and Tunisia, was classified into ecoregions, defined as zones with homogeneous ecological and climatic conditions that are therefore potentially suitable for hosting the same vectors responsible for viral transmission. Rift Valley fever (RVF), a zoonotic disease of significant concern, was selected as the primary VBD of interest. EO data products from 2018 to 2022 (e.g., MODIS Land Surface Temperature, MODIS Normalized Difference Vegetation Index, SMAP Soil Moisture, TAMSAT Rainfall, MODIS Normalized Difference Water Index) at 250 m/16-day resolution were processed, aggregated and standardized at seasonal and annual levels. An unsupervised neural network clustering method, the Super Self-Organizing Map (Super-SOM), was employed to create an interpretable, topology-preserving map of North Africa. Initially, a detailed 40x40 neuron grid comprising 1600 nodes was trained to identify 1600 distinct ecoregions. Subsequently, an affinity propagation clustering algorithm was applied to the 1600 nodes, reducing them to 55 ecoregions per year. The results, shared with national authorities through webinars, bilateral discussions, and an in-person workshop, included the delineation of ecoregions and temporal analyses across countries.
These analyses identified areas more susceptible to interannual variation, allowing for the prioritization of specific surveillance strategies based on the unique characteristics of each ecoregion. By integrating advanced EO data analytics with epidemiological insights, the tool supports Veterinary Services in implementing targeted, risk-based surveillance, optimizing both financial and human resources through strategic planning. Climate change exacerbates the geographic shifts and seasonal dynamics of VBDs, underscoring the importance of long-term monitoring of essential climate variables via satellite. This project exemplifies the potential of EO-based approaches to adapt surveillance strategies to the challenges posed by climate variability and extreme events, aligning with WOAH’s regional strategy for controlling vector-borne and transboundary animal diseases.
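The two-stage clustering described above (a topology-preserving SOM trained on standardized EO features, followed by affinity propagation over the trained nodes) can be sketched as follows. This is an illustrative miniature, not the PROVNA implementation: it uses a small hand-rolled SOM on synthetic features and a far smaller grid than the 40x40 Super-SOM, and all array names are hypothetical.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)

# Hypothetical stand-in for standardized per-pixel EO features
# (e.g. seasonal LST, NDVI, soil moisture, rainfall, NDWI).
X = rng.normal(size=(2000, 5))

# --- Minimal Self-Organizing Map (illustrative only) ---
grid_w, grid_h = 8, 8                       # the paper uses 40x40; small grid here
codebook = rng.normal(size=(grid_w * grid_h, X.shape[1]))
coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)], float)

n_iter, sigma0, lr0 = 3000, 3.0, 0.5
for t in range(n_iter):
    x = X[rng.integers(len(X))]
    bmu = np.argmin(((codebook - x) ** 2).sum(axis=1))    # best-matching unit
    frac = t / n_iter
    sigma = sigma0 * (1 - frac) + 0.5 * frac              # shrinking neighbourhood
    lr = lr0 * (1 - frac) + 0.01 * frac                   # decaying learning rate
    d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
    h = np.exp(-d2 / (2 * sigma ** 2))                    # neighbourhood function
    codebook += lr * h[:, None] * (x - codebook)          # topology-preserving update

# --- Second stage: affinity propagation merges SOM nodes into ecoregions ---
ap = AffinityPropagation(random_state=0).fit(codebook)
node_to_ecoregion = ap.labels_                            # one label per SOM node

# Map each pixel to an ecoregion via its best-matching node
bmus = np.argmin(((X[:, None, :] - codebook[None]) ** 2).sum(-1), axis=1)
pixel_ecoregions = node_to_ecoregion[bmus]
print(len(np.unique(node_to_ecoregion)), "ecoregions from", len(codebook), "nodes")
```

Affinity propagation is attractive here because it chooses the number of clusters itself, which matches the reduction from 1600 nodes to 55 ecoregions described above.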
LPS Website link: Earth Observation for risk-based Vector-Borne Disease surveillance under a changing climate
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.94/0.95)

Presentation: Leveraging Earth Observation Data and Explainable AI for Predicting Mosquito-Borne Disease Outbreaks

Authors: Dimitrios Sainidis, Konstantinos Tsaprailis, Lazaros Stamogiorgos, Dr. Charalampos
Affiliations: National Observatory Of Athens
In the early 21st century, rapid urbanization, the expansion of international trade, and increased human mobility, coupled with the changing climate, have significantly changed the distribution of disease vectors such as mosquitoes, ticks, and fleas. These changes have allowed vectors to expand into new regions, increasing the spread of the pathogens they carry and the diseases they transmit to humans. Vector-borne diseases (VBDs) are a major global public health concern, accounting for 17% of all infectious diseases worldwide and causing approximately 700,000 deaths annually. In recent years, these diseases have begun to affect previously unaffected countries, many of them in Europe. Among VBDs, the majority of deaths are caused by mosquitoes, which transmit diseases such as West Nile Virus, Malaria, Zika, Dengue, and Chikungunya. This makes mosquitoes the deadliest animals on the planet. Consequently, much of the VBD research community's efforts are focused on mosquitoes and mosquito-borne diseases (MBDs). One of the most critical and challenging problems in controlling MBDs is accurately predicting the risk of future outbreaks. Timely and accurate risk maps enable health authorities to implement appropriate mitigation strategies to prevent outbreaks. Recent research has focused extensively on predicting mosquito abundance and estimating the likelihood of human virus outbreaks using machine learning models, as these two factors are closely correlated. Another significant challenge is making sense of these risk maps, especially when they are created using complex machine learning models. Recent advances in explainable AI (XAI) provide valuable tools to interpret and analyze these predictions. XAI helps uncover the key factors influencing the model's outputs, making the results more transparent and easier to understand.
This clarity allows public health officials and policymakers to trust the models' findings and make more informed decisions, ultimately improving the effectiveness of strategies to prevent and control disease outbreaks. This work addresses both challenges of accurate outbreak prediction and model interpretability by developing an interpretable Early Warning System (EWS) for predicting the risk of West Nile Virus (WNV) outbreaks. The EWS is designed to operate at an exceptionally fine spatial resolution of 2 x 2 km grid cells and a temporal prediction window of one month. It integrates diverse data sources, including big Earth Observation (EO) data, statistical census data, in-situ mosquito population observations, predicted mosquito abundance data, and historical WNV case records from Greece. By leveraging this comprehensive dataset, machine learning models are trained to predict WNV outbreak risk. To ensure the model's outputs are interpretable and actionable, the SHAP (SHapley Additive exPlanations) methodology is applied to analyze the predictions at both local and global levels. This analysis identifies the most influential factors contributing to WNV outbreaks, offering insights into the environmental, demographic, and biological drivers of transmission. The resulting system not only provides accurate, high-resolution risk maps but also enhances transparency and trust in the predictions, enabling public health officials to make informed decisions. The predictive model was trained using historical WNV outbreak data spanning 2010 to 2021, with a spatial resolution of 2 x 2 km across the Greek territory. The model generates a risk score ranging from 0 to 1 for each grid cell, representing the likelihood of a WNV outbreak. To evaluate the model's performance, binary classification metrics were employed.
Using a classification threshold of 0.5, the model achieved a recall score of 0.87, demonstrating its effectiveness in identifying areas at risk of WNV outbreaks. Additionally, the area under the precision-recall curve (PR-AUC) was 0.7, which is noteworthy given the significant class imbalance in the dataset, where the ratio of non-case to case instances was approximately 500:1. These results indicate the model's robustness in accurately predicting outbreak risks even in the presence of highly imbalanced data, highlighting its potential as a reliable tool for early warning and public health interventions. The explainability analysis revealed several key environmental factors influencing the risk of WNV outbreaks. One of the most significant factors identified by the model is the nightly surface temperature of the previous month: there is a strong positive correlation between higher nighttime temperatures and increased WNV risk, suggesting that warmer conditions may enhance mosquito activity or virus transmission dynamics. Another critical factor is elevation, where a clear inverse relationship is observed: areas at higher elevations are associated with lower WNV risk. Rainfall patterns also exhibit complex relationships with WNV transmission. Accumulated rainfall since January shows a nonlinear effect, where rainfall below 500 mm has a negative influence on WNV risk, while amounts between 500 mm and 1100 mm increase the risk. However, rainfall exceeding 1100 mm again reduces the risk, indicating that both very low and very high rainfall levels can suppress WNV transmission. This aligns with the known effect of rainfall on the mosquito breeding cycle, where the formation of stagnant water increases the number of potential breeding sites, while heavy rainfall tends to flush larvae from those sites.
Additionally, annual precipitation from previous years plays an important role, with accumulated rainfall below 1100 mm having a slight positive effect on risk, whereas rainfall above this threshold shows a slight negative influence. These insights highlight the intricate interactions between environmental factors and WNV transmission, providing valuable guidance for targeted surveillance and intervention strategies. In conclusion, this study presents a robust and interpretable Early Warning System (EWS) for predicting West Nile Virus (WNV) outbreaks, leveraging high-resolution data and advanced machine learning techniques. By integrating diverse datasets and employing SHAP analysis for explainability, the model not only achieves good predictive capability but also provides actionable insights into the environmental and geographical factors driving WNV transmission. The system’s ability to generate transparent, fine-grained risk maps enhances trust and usability for public health officials, enabling proactive and targeted interventions. Because it depends only on Earth Observation and MBD case data, the solution can be applied anywhere, and to a plethora of mosquito-borne diseases, with few modifications. These advancements underscore the potential of combining machine learning and explainable AI to address critical challenges in controlling mosquito-borne diseases and safeguarding public health.
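A minimal sketch of the modelling setup described above: a classifier trained on heavily imbalanced data, evaluated with recall and PR-AUC, followed by a global feature attribution. The data here are synthetic, and permutation importance is used as a self-contained stand-in for SHAP (which requires the separate shap package); the model choice and all names are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import average_precision_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for gridded features (temperature, elevation, rainfall, ...)
# with a strong class imbalance, as in the real non-case:case ratio.
X, y = make_classification(n_samples=20000, n_features=8, n_informative=5,
                           weights=[0.98], random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(class_weight="balanced", random_state=0).fit(Xtr, ytr)
risk = clf.predict_proba(Xte)[:, 1]          # risk score in [0, 1] per grid cell

print("recall @0.5:", recall_score(yte, risk >= 0.5))
print("PR-AUC     :", average_precision_score(yte, risk))

# Global feature attribution (permutation importance as a SHAP stand-in)
imp = permutation_importance(clf, Xte, yte, n_repeats=5, random_state=0)
ranking = np.argsort(imp.importances_mean)[::-1]
print("most influential features:", ranking[:3])
```

PR-AUC is reported alongside recall for the same reason as in the abstract: with ~500:1 imbalance, accuracy and ROC-AUC can look deceptively good, while the precision-recall curve reflects performance on the rare outbreak class.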
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.94/0.95)

Presentation: Extreme weather events, changing land use patterns and microbial pollution escalate outbreaks of leptospirosis in coastal regions along the southwest coast of India

Authors: Anas Abdulaziz, P Sreelakshmi, Nizam Ashraf, Ranith Rajamohananpillai, Dhritiraj Sengupta, Gemma Kulk, Dr Nandini Menon, Grinson George, Jasmin Chekidhenkuzhiyil, Dr Shubha Sathyendranath
Affiliations: CSIR-National Institute of Oceanography, Nansen Environmental Research Centre India, Earth Observation Science and Applications, Plymouth Marine Laboratory, National Centre of Earth Observation, Plymouth Marine Laboratory, ICAR- Central Marine Fisheries Research Institute
Environment plays a major role as an intermediary in the spread of zoonotic diseases, with pathogens derived from a reservoir host being released into the environment and then infecting a new host or population. Leptospirosis is a zoonotic water-associated disease that is becoming increasingly prevalent in regions susceptible to extreme weather events. An analysis of decadal datasets on the incidence of leptospirosis in the areas surrounding Vembanad Lake, a Ramsar site along the southwest coast of India, indicates that the disease is endemic to this region. The Vembanad Lake and its surrounding areas experience significant rainfall during the monsoon season (June to September) and often suffer from flash floods during this period. The established risk factors for leptospirosis are contact with water contaminated by Leptospira, which usually occurs in flood situations in places with the presence of rodents and other zoonotic animals associated with poor sanitation. We monitored 13 stations along Vembanad Lake at 20-day intervals over a one-year period spanning 2018 to 2019, during which the region experienced a once-in-a-century flood in August 2018. Molecular surveillance of Leptospira in the water column showed that the pathogen was present in the lake year-round. Interestingly, the distribution of these pathogens was notably higher during the warm season (November to April), while the incidence of the disease peaks during the rainy season. In 2018, around 50% of the reported leptospirosis cases occurred following the flood, confirming that during the rainy season, humans are frequently exposed to water contaminated with Leptospira, leading to infection and disease outbreaks. Changes in land use patterns and inadequate solid waste management may exacerbate the prevalence of Leptospira in the region. Additionally, extreme weather events facilitate greater contact between pathogens and humans, resulting in increased disease incidence.
This study highlights the urgent need to address environmental factors that influence host-microbe interactions in order to curb the rise of zoonotic water-associated diseases. Tackling climate emergencies requires accounting for environmental influences on issues of medical and ecological importance.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.31/1.32)

Session: A.09.04 Glaciers - the other pole - PART 1

Glaciers are distributed around the world in mountainous areas from the Tropics to the mid-latitudes and up to the polar regions, and number approximately 250,000. Glaciers are currently the largest contributors to sea level rise and have direct impacts on run-off and water availability for a large proportion of the global population.

This session aims to report on the latest research using EO and in situ observations to understand and quantify change in glacier presence, dynamics and behaviour, including responses to changes in climate both over the long term (since the Little Ice Age) and in the recent satellite period. EO observations of glaciers come from a large variety of sources (SAR, altimetry, gravimetry, optical) and are used to derive estimates of ice velocity, surface mass balance, area, extent and the dynamics of both accumulation and ablation; characteristics such as surging, glacier failure and downwasting; as well as associated observations of snowpack development and duration, lake formation, glacier lake outburst floods (GLOFs) and slope stability.

Presentations will be sought covering all aspects of glacier observations but in particular efforts to derive consistent global databases e.g. GlaMBIE, ice velocity and area (Randolph Glacier Inventory) as well as variation in run-off and water availability and interfaces between these observations and glacier modelling to forecast possible future glacier changes and their impact on hydrology and sea-level rise.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.31/1.32)

Presentation: Regional Glacier Elevation Changes Assessment from Optical DEM Time Series

Authors: Livia Piermattei, Francesco Ioli, Clare Webster, Lucas Kugler, Désirée Treichler, Enrico Mattea, Robert McNabb
Affiliations: Department of Geography, University of Zurich, Department of Geosciences, University of Oslo, Department of Geosciences, University of Fribourg, School of Geography and Environmental Sciences, Ulster University
This study is part of the Glacier Mass Balance Intercomparison Exercise (GlaMBIE), which aims to collect, homogenise, and estimate global and regional assessments of glacier mass balance using the main observation methods. Here, we present our assessment of glacier elevation change using the geodetic method (DEM differencing) based on spaceborne optical data. We exploited the potential of the SPOT-5 satellite, operational from 2002 to 2015, which provided global coverage. Since 2021, the SPOT 1-5 image archive has been freely available as part of the SPOT World Heritage program run by CNES. However, observation periods vary across regions, limiting temporal coverage to less than five years in some areas. Iceland was selected as a pilot study area due to its extensive SPOT-5 temporal coverage, further complemented by ArcticDEM data. Our methodology is also applied to other regions with sufficient temporal coverage, and historical aerial images are incorporated to extend the analysis back to the last century. The workflow starts with generating DEM time series at a regional scale and homogenising the data, including DEM co-registration, selection, noise filtering and void filling. To address challenges posed by sparse DEM time series, especially when including historical DEMs, we developed a method to extrapolate elevation changes over 10-year intervals using the combined DEM time series. This method relies on the assumption that a relationship exists between elevation change and elevation; therefore, an elevation trend can be derived for elevation bands. We extract median elevations for fixed elevation bands (i.e., 100 m bins) from the DEM time series and interpolate these values over time using linear regression. Elevation data are then extrapolated for each band and pre-defined period, and area-weighted mean elevation changes are calculated for each glacier using RGI7.0.
For comparability, we also applied our approach to derive elevation changes from time series of ASTER DEMs and compared our results with the pixel-based multi-temporal approach of Hugonnet et al. (2021) over a common observation period. Regional and individual glacier estimates from both methods are evaluated. This work discusses key challenges in using spaceborne optical data for regional glacier elevation change assessments, including limitations in the temporal coverage of SPOT-5, issues with DEM generation, co-registration, noise filtering, void filling, and methods for estimating mean elevation changes. Our findings contribute to improving regional assessments of glacier mass balance and advancing geodetic approaches using optical DEM time series.
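The elevation-band extrapolation step can be illustrated as follows, under strong simplifying assumptions: a synthetic three-epoch DEM time series for one glacier, fixed 100 m bands, a linear trend per band, and pixel counts as an area proxy in place of RGI7.0 outlines. All numbers and names are hypothetical.

```python
import numpy as np

# Hypothetical sparse DEM time series (a few acquisition years, as with SPOT-5
# and historical DEMs). elev[i, j] = elevation of pixel j in year i.
years = np.array([2004.0, 2008.0, 2015.0])
rng = np.random.default_rng(1)
base = rng.uniform(400, 1500, size=500)              # reference elevations (m)
thin = 0.8 * (1500 - base) / 1100                    # stronger thinning at low bands
elev = base[None, :] - thin[None, :] * (years[:, None] - years[0])

bins = np.arange(400, 1600, 100)                     # fixed 100 m elevation bands
band = np.digitize(base, bins)

dh_decade = {}
for b in np.unique(band):
    sel = band == b
    med = np.median(elev[:, sel], axis=1)            # median band elevation per year
    slope = np.polyfit(years, med, 1)[0]             # linear trend (m / yr)
    dh_decade[b] = slope * 10.0                      # extrapolate to a 10-yr interval

# Area-weighted mean elevation change for the glacier
weights = {b: (band == b).sum() for b in dh_decade}  # pixel count as area proxy
dh_glacier = sum(dh_decade[b] * weights[b] for b in dh_decade) / sum(weights.values())
print(f"area-weighted 10-yr elevation change: {dh_glacier:.2f} m")
```

Fitting a trend per elevation band rather than per pixel is what makes sparse, irregular DEM time series usable: each regression pools many pixels, so a band needs only a few epochs of coverage to yield a stable rate.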
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.31/1.32)

Presentation: Monitoring glaciers with CryoSat-2 altimetry – opportunities and challenges

Authors: Sophie Dubber, Livia Jakob, Morag Fotheringham, Noel Gourmelen, Carolyn Michael, Andrea Incatasciato, Julia Bizon, Jérôme Bouffard, Alessandro Di Bella, Tommaso Parinello
Affiliations: Earthwave, University of Edinburgh, ESA
The large footprint of radar altimeters has traditionally limited their use to ice sheets; however, the launch of CryoSat-2 – the first altimetry mission to carry a synthetic aperture radar interferometer – enabled the monitoring of environments beyond the two ice sheets. By using Swath processing applied to CryoSat-2 data, we can measure changes in elevation over rough terrain such as ice caps and mountain glaciers, and do so at high spatial and temporal resolution. This approach provides unique opportunities to better understand the behaviour of these regions. Additionally, constantly advancing processing techniques continue to add to the expanding toolbox of global glacier mass balance measurements, by enabling monitoring of extremely challenging terrains. As a result of these advances, it is now possible to perform assessments of global glacier volume and mass changes, as presented in Jakob & Gourmelen (2023). Here we present an updated version of this assessment, which provides global glacier changes between 2010 and 2023. This new study uses updated glacier outlines from v7.0 of the Randolph Glacier Inventory, includes 5 additional regions and utilises improved algorithms. The expansion of this work to new regions is possible due to additional coverage recently added to the CryoTEMPO EOLIS (Elevation Over Land Ice from Swath) products. This Swath-processed CryoSat-2 dataset now includes additional coverage of mountain glacier regions with challenging terrain conditions: Scandinavia, Western Canada & US, Central Europe, Low Latitudes and New Zealand. We will show how the addition of these regions in this updated assessment enables further insights into the global picture of glacier mass change. We will also give an introduction to the suite of CryoTEMPO EOLIS products, including case studies of how they can be utilised over small mountain glaciers and rapidly changing ice caps.
We will discuss the opportunities and challenges of radar altimetry as a tool to measure glacier changes, as well as looking forward to even more detailed monitoring of glaciers’ health using the upcoming CRISTAL mission.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.31/1.32)

Presentation: Advancing reconciled regional & global glacier mass changes with the second Glacier Mass Balance Intercomparison Exercise (GlaMBIE-II)

Authors: Livia Jakob, Michael Zemp, Inés Dussaillant, Samuel Nussbaumer, Sophie Dubber, Noel Gourmelen
Affiliations: Earthwave, University of Zurich, University of Edinburgh
Glacier changes are a sign of climate change and have an impact on local hazard situations, regional runoff, and global sea level. In previous reports of the Intergovernmental Panel on Climate Change (IPCC), the assessment of glacier mass changes was hampered by spatial and temporal limitations as well as by the restricted comparability of different observing methods. The Glacier Mass Balance Intercomparison Exercise (GlaMBIE; https://glambie.org) aims to overcome these challenges in a community effort to reconcile in-situ and remotely sensed observations of glacier mass changes at regional to global scales. GlaMBIE is now entering its second phase (GlaMBIE-II), which aims to improve upon the approach and results of the first phase's data-driven, reconciled estimation of regional and global mass changes from glaciological, DEM-differencing, altimetric, and gravimetric methods. This presentation will highlight GlaMBIE’s findings, emphasising its implications for regional glacier mass loss and global sea-level rise. It will also explore lessons learned from the first phase, discuss persistent differences among observational methods, and identify other ongoing challenges. Additionally, preliminary results from a pilot study aiming to enhance the spatial and temporal resolution of glacier mass balance data will be presented.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.31/1.32)

Presentation: Towards a Flexible, Data Assimilation Framework for Global Glacier Modelling

Authors: Patrick Schmitt, Fabien Maussion, Lea Hartl, Dan Goldberg
Affiliations: University Of Innsbruck, University of Bristol, Austrian Academy of Sciences, University of Edinburgh
Mountain glaciers are crucial to the Earth's water systems. As they shrink and lose ice globally, they contribute to rising sea levels and pose challenges for water supply, hydropower, agriculture, and natural disaster management. To address these challenges effectively, dynamic glacier models are essential. Recent advances in Earth observation (EO) products, including geodetic glacier mass balance, glacier outlines and ice velocity, offer new opportunities to improve global glacier models. Assimilating heterogeneous datasets in a dynamically consistent modelling framework remains very challenging. This contribution highlights findings from a recent study (preprint: [https://doi.org/10.5194/egusphere-2024-3146]) focusing on the data-rich Ötztal and Stubai ranges in western Austria. By adapting the Open Global Glacier Model (OGGM) to include these high-resolution, multitemporal observational datasets, the model's performance significantly improved compared to using global, lower-resolution data. For the first time, the model simultaneously matched observed area and volume changes on a regional scale, boosting confidence in regional projections. Projections for the region show that only 2.7% of the 2017 glacier volume will remain by 2100 under a +1.5 °C global warming scenario, a more pessimistic outlook than previous studies. Under a +2 °C scenario, this volume is reached roughly 30 years earlier, with near-total deglaciation by 2100 (0.4% of the 2017 volume remaining). The presented approach represents a significant step forward compared to earlier regional assimilation methods. However, it is tailored to specific observations and lacks flexibility to accommodate additional or alternative datasets. To overcome this limitation, we are developing the Open Global Glacier Data Assimilation Framework (AGILE). This framework iteratively adjusts control variables to minimize discrepancies with observations using a cost function. 
AGILE leverages automatic differentiation through the machine learning framework PyTorch, enabling efficient computation of control variable sensitivities. Its flexibility allows it to integrate temporally and spatially diverse observational datasets and control variables, such as glacier bed heights, mass-balance parameters, and initial ice thickness. While AGILE's capabilities are currently being demonstrated in idealized experiments, the ultimate goal is for it to serve as the assimilation engine for a potential Digital Twin Component for Glaciers, part of ESA's Digital Twin Earth program.
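The core idea behind AGILE, iteratively adjusting control variables to minimise a misfit cost function with sensitivities obtained by automatic differentiation, can be sketched in PyTorch with a toy forward model. The glacier "physics" below is deliberately simplistic and every variable name is hypothetical; only the assimilation pattern (controls with gradients enabled, cost, backward pass, optimiser step) reflects the described approach.

```python
import torch

torch.manual_seed(0)

# Toy differentiable forward model of a 1-D glacier: surface = bed + thickness,
# with thickness evolving under a simple elevation-dependent mass balance.
# Control variables (in AGILE's sense): bed heights and a mass-balance parameter.
n = 50
x = torch.linspace(0.0, 1.0, n)
true_bed = 1000.0 - 400.0 * x                       # m a.s.l.

def forward(bed, mb_grad, n_steps=20):
    h = torch.full((n,), 50.0)                      # initial ice thickness (m)
    for _ in range(n_steps):
        smb = mb_grad * (bed + h - 800.0)           # mass balance vs. surface elevation
        h = torch.clamp(h + smb, min=0.0)           # thickness cannot go negative
    return bed + h                                  # modelled surface elevation

obs = forward(true_bed, torch.tensor(0.008))        # synthetic "observed" surface

# Assimilation loop: autodiff gives the sensitivity of the misfit cost
# function to each control variable.
bed = (true_bed + 30.0 * torch.randn(n)).requires_grad_()
mb_grad = torch.tensor(0.004, requires_grad=True)
opt = torch.optim.Adam([{"params": [bed], "lr": 1.0},
                        {"params": [mb_grad], "lr": 1e-4}])

cost0 = None
for _ in range(200):
    opt.zero_grad()
    cost = ((forward(bed, mb_grad) - obs) ** 2).mean()
    if cost0 is None:
        cost0 = cost.item()
    cost.backward()
    opt.step()

print(f"cost: {cost0:.1f} -> {cost.item():.1f}")
```

The same pattern scales to the heterogeneous observations mentioned above: each dataset (surface elevation, velocity, geodetic mass balance) simply contributes an extra weighted term to the cost function, and autodiff propagates sensitivities through the shared forward model.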
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.31/1.32)

Presentation: Rapid response of Svalbard glaciers to ocean warming

Authors: Geir Moholdt, Josephine Maton, Jack Kohler, Øyvind Foss, Adrian Luckman, Marta Majerska, Alex S. Gardner, Johannes Fürst
Affiliations: Norwegian Polar Institute, Swansea University, Institute of Geophysics Polish Academy of Sciences, NASA Jet Propulsion Laboratory, University of Erlangen-Nuremberg
About one third of the glacier area of the Arctic drains towards ocean-terminating fronts that ablate by calving and melting above and below the waterline. This frontal ablation is a significant but poorly quantified part of the overall mass budget of Arctic glaciers, as well as an important source of freshwater and calved ice for marine ecosystems. We present a detailed analysis of frontal ablation for all Svalbard’s ~200 tidewater glaciers for 2013-2024, a period with abundant availability of satellite imagery. We account for changes in frontal position, surface velocity and ice thickness at time scales from monthly to yearly, and we separate the results into components of glacier retreat and ice discharge. Although the ice discharge can be high year-round, especially for surging glaciers, we find that almost all frontal ablation occurs from late summer to autumn when the ocean is warmer. This represents a delayed freshwater flux to the fjords and open ocean compared to surface meltwater runoff which is more confined to the peak of the atmospheric summer season. Annual frontal ablation was exceptionally high during 2016-2018 and 2022-2024, which coincides with periods of high inflow of Atlantic water and warmer temperatures in the upper ocean. Links with air temperature and meltwater runoff are less clear. The observed variability in frontal ablation demonstrates how reactive these glaciers are to ocean warming and that this should be considered in studies of marine environments and future glacier retreat.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.31/1.32)

Presentation: Multi-mission Investigation of a Recent Giant Glacier Collapse and Ice Avalanche in Tibet

Authors: Andreas Kääb, Luc Girod, Juditha Aga, Désirée Treichler
Affiliations: Department of Geosciences, University of Oslo
The most extreme type of glacier instability is the large-volume detachment of an entire low-angle tongue, usually no steeper than 20°. Internationally, this process was first documented for the 2002 Kolka Glacier, Caucasus, whose tongue suddenly collapsed and sent a 130 × 10⁶ m³ ice-rock avalanche, travelling at up to 300 km/h, downvalley; after 18 km it reached the village of Karmadon and transformed into a 15 km long mudflow, in total claiming around 120 lives. This event was long believed to be a unique disaster, until large parts of a low-angle glacier in the remote Aru range, western Tibet, suddenly detached on 17 July 2016. The consequent 68 × 10⁶ m³ ice avalanche killed nine herders and hundreds of their livestock on its ~6 km long runout into Lake Aru. While the investigation of this event had only just started, the neighbouring glacier detached in a very similar way to the first, causing an 83 × 10⁶ m³ ice avalanche reaching ~5 km. The Aru twin glacier collapses drew attention to these surprising detachments of low-angle glaciers and triggered closer research, which found that in total a dozen comparable events, including the above three, had happened worldwide, in Eurasia and North and South America, during recent decades. The most recent of these events known to date was the 130 × 10⁶ m³ detachment of the Sedongpu Glacier, Nyenchen Tanglha Mountains, south-east Tibet, which dammed the Yarlung Tsangpo / Brahmaputra river for several days in late 2018. Comparison of this dozen or so events revealed a number of individual differences, but also suggested similarities. Among the most prominent commonalities is the connection between catastrophic low-angle glacier detachments and glacier surges. Most detached glaciers showed signs of surge-like flow behaviour prior to their failure and/or have surge-type glaciers in their vicinity.
Second, at several of the detachment sites, particularly fine and soft sediments at the glacier bed have been found or suggested, or appear possible given the lithological setting. Here, we describe and analyse a previously unnoticed glacier collapse of around 40 × 10⁶ m³ in eastern Tibet that happened in 2022 and provides important new insights into the processes involved in detachments of low-angle glaciers. We highlight how the synergistic use of data and products from multiple satellite remote sensing missions enabled a close investigation of the event, despite it having happened in one of the most remote regions on Earth. Sentinel-1 data were key to detecting the event at all. Data from Sentinel-2, ASTER, and low-resolution sensors such as Sentinel-3 OLCI, MODIS and Suomi-NPP/VIIRS constrained the event date and time so that we could find it in seismic records. TanDEM-X data and the Copernicus DEM, optical stereo images from ASTER and very-high-resolution sensors, and ICESat-2 laser altimetry elevations led to collapse volume estimates. Time series of surface velocities on the failing glacier tongue could be reconstructed from repeat Sentinel-2 and Planet optical data, showing an exponential increase in speeds, up to an average of 46 m/day over the 24 hours before detachment. Many of the above types of image data further helped clarify important details of the event, such as a consequent lake impact wave and its shore run-up, or the finding that no fine sediments seem to have been involved in the detachment – in contrast to most other glacier detachments known so far.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall K1)

Session: D.01.04 Using Earth Observation to develop Digital Twin Components for the Earth System - PART 1

Climate change represents one of the most urgent challenges facing society. The impacts of climate change on the Earth system and society, including rising sea levels, increasing ocean acidification, more frequent and intense extreme events such as floods, heat waves and droughts, are expected not only to have a significant impact across different economic sectors and natural ecosystems, but also to endanger human lives and property, especially for most vulnerable populations.

The latest advances in Earth Observation science and R&D activities are opening the door to a new generation of EO data products, novel applications and scientific breakthroughs, which can offer an advanced and holistic view of the Earth system, its processes, and its interactions with human activities and ecosystems. In particular, those EO developments together with new advances in sectorial modelling, computing capabilities, Artificial Intelligence (AI) and digital technologies offer excellent building blocks to realise EO-based Digital Twin Components (EO DTCs) of the Earth system. These digital twins shall offer high-precision digital replicas of Earth system components, boosting our capacity to understand the past and monitor the present state of the planet, assess changes, and simulate the potential evolution under different (what-if) scenarios at scales compatible with decision making.

This session will feature the latest developments from ESA’s EO-based DTCs, highlighting:
- Development of advanced EO products
- Integration of EO products from a range of sensors
- Innovative use of AI and ML
- Advanced data assimilation
- Development of tools to address needs of users and stakeholders.
- Design of system architecture
- Creation of data analysis and visualization tools
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall K1)

Presentation: Hydr’Avatar, toward a digital twin of hydrological systems using multi-complexity modelling and advanced EO datasets.

Authors: Adrien Paris, Pierre André Garambois, Brice Mora, Chloe Campos, Ludovic Cassan, Jean-François Crétaux, Catherine Fouchier, Laetitia Gal, Jérémie Hahn, Kevin Larnier, Thomas Ledauphin, Jérôme Monnier, Fabrice Papa, Vanessa Pedinotti, Jean Christophe Poisson, Sophie Ricci, Hélène Roux, Malak Sadki, Guy Schumann, Paolo Tamagnone, Nicolas Vila, Hervé Yesou
Affiliations: Hydro Matters, INRAE, CS Group, RSS-Hydro, CERFACS/CNRS, LEGOS UT, CNES/CNRS/IRD/UT3, SERTIT, INSA, Magellium, Vortex.io, Toulouse INP
As the hydrological cycle is changing worldwide, and the consequences of these changes directly impact communities and their activities, there is an urgent need for a comprehensive representation of its different components. Major gaps still exist in domains such as the observability of continental water fluxes, both from ground observations (GO) and satellite data (EO), and the seamless, exhaustive use of such data in models at varying spatio-temporal scales. In Hydr’Avatar, an ESA-funded project, we propose a core of hydrologic/hydraulic modelling fed by diverse and heterogeneous datasets to provide stakeholders with pertinent information on specific problems linked to continental waters. Here, we focus on 4 geographic zones: the Garonne River Basin (GRB), the Maroni River Basin (MRB), the Rhine River (RR) and the joint Niger and Chad River Basins (NCRB). We will deploy distributed (SMASH, Colleoni et al., 2022; Huynh et al., 2024) and/or semi-distributed (MGB, Paiva et al., 2013; Siqueira et al., 2018) modelling at the basin level to represent the vertical energy balance and propagation in main rivers, embedded or not with DassFlow (see Larnier et al., 2023) and/or Telemac (see Nguyen et al., 2023) for fine 1D and/or 2D hydraulics of river streams and floodplains. A large set of information-rich datasets will be used for these set-ups, ranging from multi-source precipitation products (pure EO, gauge-corrected and model-based) to water levels and slopes from nadir and large-swath altimeters, soil moisture and groundwater, flooded areas, etc. Advanced processing algorithms (FloodSENS, …) will be employed to process those datasets and produce high-level information for model calibration, validation and data analysis. All the data produced by these methods will be encompassed within a 4D dataset. After a strong and thorough validation, the 4D dataset will be employed to answer the scientific questions raised by the identified stakeholders.
The platform, integrated into the ESA DESP system, will provide stakeholders (and the whole community) with a clear interface and tools through which they will be able to (i) retrieve pertinent information on the impact of climate change on the hydrological cycle through simulations of different pathways, (ii) analyze flood risk and potentially damaged areas, (iii) manage transboundary rivers and provide users with adequate guidance, and (iv) access insightful comparisons of such EO-derived datasets against dense in situ networks. We propose a versatile framework in which multi-complexity models and complex hydrological datasets can be operated efficiently, providing a large range of users with cutting-edge information from strategic, operational and scientific perspectives.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall K1)

Presentation: SaveCrops4EU: an Agricultural DTC Component for Enhanced Decision Making

Authors: Sander Rouwette, Dr. Mauro Sulis, Martin Schlerf, Prof. Dr. Harrie-Jan Hendricks Franssen, Dr. Jochem Verrelst, Brian Maguire, Dr. Ir. Yannick Curnel, MSc Márton Tolnai, Dr. Ir. Louise Leclere
Affiliations: Thales Alenia Space Luxembourg - Digital Competence Center, Remote Sensing and Natural Resources Modelling Group, Luxembourg Institute of Science and Technology, Agrosphere (IBG-3), Forschungszentrum Julich GmbH, University of Valencia, Image Processing Laboratory (IPL), Laboratory of Earth Observation (LEO), Walloon Agricultural Research Centre (CRA-W), ‘Agriculture, territory and technologies integration’ Unit, CropOM Research Team
The SaveCrops4EU Digital Twin Component (DTC) represents a groundbreaking initiative aimed at addressing some of the pressing challenges posed by climate change to the agricultural sector. As society confronts the multifaceted impacts of climate change, the need for innovative solutions is increasingly urgent. The SaveCrops4EU project focuses on creating high-precision digital replicas of cropland ecosystems to enhance agricultural decision-making, contributing significantly to sustainable practices aligned with European policies like the Common Agricultural Policy and the Green Deal. Central to the SaveCrops4EU DTC is the establishment of a framework with monitoring, forecasting, and scenario testing capabilities that provide data to inform agricultural practices. The initiative aims to offer real-time insights into crop water and nitrogen status, phenological development, and potential yield for major cultivated crops across Europe. These capabilities shall empower farmers and stakeholders to make informed decisions in response to abiotic stressors related to climate change. The DTC uses cutting-edge EO-based monitoring techniques, utilizing a wide variety of sensors and enhanced spatial and spectral resolutions. This approach enables the delivery of richer and more accurate data, allowing for a comprehensive understanding of how environmental factors influence crop health and productivity. The EO-based data will be used to adapt the state trajectory of physically based models and to estimate model parameters, resulting in an overall better agreement between model predictions and reality. In addition, a physically consistent and stochastic dataset generated by a crop module of a land surface model will be utilized to train a suite of machine learning approaches. Critical outputs on crop phenology, water stress, and the carbon-nitrogen cycle will enhance the predictive capabilities for crop yields and provide stakeholders with actionable insights.
Recognizing the importance of inclusivity, the project places significant emphasis on engaging diverse stakeholders from both public and private sectors. Feedback is necessary to ensure that the DTC remains responsive to the evolving needs of end users, facilitating the development of practical tools aligned with real-world agricultural practices. This stakeholder engagement is crucial for developing DTC solutions that are not only innovative but also adoptable. The architectural design of the SaveCrops4EU DTC allows for organic growth while ensuring maintainability and robustness. The project aims to guarantee long-term scalability and adaptability, accommodating the changing needs of the agricultural sector and technological advances within the associated scientific fields. This strategic approach reinforces the initiative's commitment to sustainability and efficiency. Using four representative Use Cases of major cultivated crops in Europe, this presentation will showcase some functionalities of the proposed approach to bolster our ability to understand, monitor and respond to the complexities surrounding abiotic stressor response. Insights will be given into the technologies and collaborative approaches integral to the SaveCrops4EU initiative, setting the stage for enhanced decision-making that benefits both agriculture and the environment, ultimately supporting more sustainable food systems in Europe and beyond.
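For illustration, the EO-based adaptation of a model's state trajectory described in the abstract can be reduced to a minimal nudging sketch. The gain, growth factor, variable names and observation values below are invented; the actual DTC uses far more sophisticated assimilation and machine-learning machinery:

```python
# Minimal sketch: nudging a modelled crop state (e.g. LAI) toward an EO retrieval.
# All numbers are hypothetical; operational systems use ensemble Kalman filters
# or variational data assimilation rather than a fixed-gain nudge.

def nudge(model_state: float, eo_obs: float, gain: float = 0.3) -> float:
    """Pull the model state toward the observation by a fixed gain in (0, 1)."""
    return model_state + gain * (eo_obs - model_state)

# Daily model run, with an EO update whenever a (cloud-free) retrieval exists
lai_model = 2.0
observations = {3: 2.6, 7: 3.1}  # day -> EO-retrieved LAI (illustrative)
trajectory = []
for day in range(10):
    lai_model *= 1.05  # toy growth step standing in for the crop model
    if day in observations:
        lai_model = nudge(lai_model, observations[day])
    trajectory.append(round(lai_model, 3))
print(trajectory)
```

The point of the sketch is only the update structure: the model runs freely between observations, and each retrieval pulls the state partway back toward what the sensor saw.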
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall K1)

Presentation: Forest Digital Twin Component for DesinE

Authors: Dr. Matti Mõttus, Mr. Renne Tergujeff, Dr. Benjamin Brede, Lauri Häme, Dr. Lucie Homolová, Dr. José Ramón González, Dr. Francesco
Affiliations: VTT Technical Research Centre of Finland
VTT Technical Research Centre of Finland, together with consortium partners – Terramonitor, the German Research Centre for Geosciences (GFZ, Helmholtz Centre Potsdam), the Global Change Research Institute of the Czech Academy of Sciences (CzechGlobe), the Forest Science and Technology Centre of Catalonia, and Yucatrote – is implementing a Forest Digital Twin Component (DTC) on the Destination Earth (DestinE) User Platform (DESP) with funding from ESA. The implementation, based on the forest digital twin precursor, focuses on forest growth and the carbon cycle at the spatial resolution of Sentinel-2 (10 m) using Earth observation data. The processes in an undisturbed forest are slow: changes occur over years to decades, and reaching a stable state can take hundreds of years. Twinning such a system requires reliable data on environmental variables, such as those produced by the Copernicus system and the Climate Change DT – one of the core elements of DestinE. The spatial resolution of a forest system, however, needs to be much higher than that of a climate system, approaching the size of a single tree. Its temporal evolution, on the other hand, is slower. Hence, the output temporal resolution of the forest in the DTC is set to one year, with growth simulations based on daily weather data. A yearly cycle is also foreseen for simulating forest management actions. The computational requirements of the system allow it to be run on existing cloud computers without the need for high-performance computing. The initial state of the forest will be retrieved from Earth Observation (EO) data. The model will make use of the rapidly increasing volume of EO-based forest variable datasets. If a user has more robust information for a specific region, custom EO data processing will be available via collaboration with other online systems. The forest maps will be updated with the most recent EO data to account for possible disturbances.
At least two physically-based forest growth models will be implemented in the Forest DTC. The user needs for a forest digital twin were mapped in a precursor project, which ended in 2021. During Forest DTC, we will update the information on user needs and requirements, incorporate new data sources such as hyperspectral imagery and tree maps based on individual tree detection, and include new model components (forest fire fuel, pest damage risk). The system will be modular, allowing user- or biome-specific growth models, management simulators and end-user extensions. In the forthcoming years, a key open question remains the integration of the widely differing spatial and temporal resolutions of the Forest DTC with those of the many other components of the digital twin of the Earth, such as atmospheric or hydrological processes. In the future, the system should be available on DESP to all DestinE users for understanding the future of forests.
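For illustration, the yearly simulation cycle described above (annual growth steps plus management actions) might be sketched with a toy logistic growth update. The model form, parameters and values are invented stand-ins for the physically-based growth models actually used in the Forest DTC:

```python
# Illustrative yearly-timestep stand volume update (logistic growth toward a
# site-specific carrying capacity). All parameters are hypothetical.

def grow_one_year(volume_m3_ha: float, rate: float = 0.05,
                  capacity_m3_ha: float = 400.0) -> float:
    """One annual increment of a logistic growth curve."""
    return volume_m3_ha + rate * volume_m3_ha * (1.0 - volume_m3_ha / capacity_m3_ha)

# Initialise from an (assumed) EO-derived stand volume and simulate 50 years,
# with a thinning action at year 20 removing 30 % of the standing volume.
volume = 120.0  # m3/ha, hypothetical EO retrieval
history = []
for year in range(50):
    if year == 20:
        volume *= 0.7  # simulated management action
    volume = grow_one_year(volume)
    history.append(volume)
print(f"volume after 50 years: {history[-1]:.1f} m3/ha")
```

The structure mirrors the abstract's design choice: a one-year output timestep with management actions folded into the same annual cycle, cheap enough to run on ordinary cloud hardware.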
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall K1)

Presentation: Earth Observation based Digital Twin for Resilient Agriculture under Multiple Stressors

Authors: Gohar Ghazaryan, Dr. Maximilian Schwarz, Dr. Philippe Rufin, Jonas Schreier, Florian Pötzschner, Dr. Michele Croci, Dr. Irene Salotti, Prof Paola Battilani, Dr Tobias Landmann, Dr Patrick Hostert, Prof Stefano Amaducci, Claas Nendel
Affiliations: Leibniz Centre for Agricultural Landscape Research, Remote Sensing Solutions GmbH, Humboldt-Universität zu Berlin, Università Cattolica del Sacro Cuore, International Centre of Insect Physiology and Ecology, Institute of Biochemistry and Biology, University of Potsdam, Potsdam, Germany, Integrative Research Institute on Transformations of Human-Environment Systems (IRI THESys), Humboldt-Universität zu Berlin
In the face of growing agricultural challenges due to climate change and rising global food demand, there is an increasing need for innovative approaches to enhance agricultural resilience. To address this, our contribution presents the Digital Twin for Agriculture (DT) framework, designed to monitor agricultural systems under multiple stressors. By integrating both data-driven and process-based models with Earth Observation (EO) data, the DT framework offers significant advancements over the current state of the art in assessing agricultural systems and understanding the impact of environmental stressors. The approach encompasses four distinct use cases, each illustrating the practical application of the DT in monitoring and managing various stressors affecting agricultural productivity. Over the past two decades, and especially since 2018, Germany has experienced several periods of drought with severe impacts on ecosystems and food production. To mitigate these impacts at the national level, the MOdel for NItrogen and Carbon in Agro-ecosystems (MONICA) simulates drought impacts using meteorological, soil, and crop data, calculating variables such as actual and potential evapotranspiration to identify drought intensity. The DT also integrates high-resolution crop condition monitoring from Sentinel-3, Sentinel-2 and EnMAP data, providing insights into crop health, drought risk and vulnerabilities. The combination of these simulations with a dedicated drought model enables accurate risk assessments and early warnings for agricultural stakeholders. Moreover, the impact of different management practices, particularly irrigation, is evaluated to determine how water use can be optimized to mitigate yield losses under drought conditions. Water management is another critical component of the project, especially as abiotic stressors become more frequent across Europe.
The DT framework addresses this challenge through detailed field-level assessments of actual evapotranspiration (ET). Using EO data from Sentinel-2, Sentinel-3, and Landsat, the framework estimates ET, which is validated against ground observations such as data from eddy covariance stations and irrigation records. This high-resolution monitoring facilitates improved irrigation practices, enhancing water use efficiency and reducing the impact of droughts on crop productivity. By assessing crop-specific water needs and understanding the spatial variability of water stress, the DT supports sustainable water resource management, ensuring that irrigation is applied where and when it is most needed. In the Po Valley in Italy, a region that is crucial for national food production, the DT use case addresses the compounded effects of drought and disease outbreaks. The valley's agricultural productivity is at risk due to increasing drought frequencies and a humid climate conducive to disease outbreaks, which can lead to mycotoxin contamination and affect both crop yield and food safety. The DT integrates Sentinel-1 and Sentinel-2 data for early mapping of crops with different models, such as DAISY, light-use efficiency (LUE) and mathematical disease models, to derive essential biophysical parameters and simulate soil-water-plant interactions. Additionally, the system uses a high-resolution soil property database and a decade-long dataset of ground truth information to enhance model accuracy. Information on crop conditions, phenology, LAI and biomass accumulation estimated at high spatial resolution from EO data is used as an input to spatialize mechanistic models for crop disease prediction driven by weather data. In Kenya, the DT framework is applied to support smallholder farmers using the push-pull cropping system, a sustainable and climate-smart approach to pest control. Here, EO and Internet of Things (IoT) technologies are used to continuously monitor field conditions.
IoT devices collect real-time data on soil moisture, nutrient levels, electrical conductivity, and pest densities, while EO data from Sentinel are used to track crop development. The DT employs AI-driven models to simulate crop growth and vigor, forecast yields, and assess the impact of different management practices on productivity. This approach supports integrated pest management (IPM), enhancing productivity while promoting eco-friendly agricultural practices. These use cases highlight the potential of the Digital Twin for Agriculture. By seamlessly integrating EO data, advanced modeling, and user-oriented tools, the DT framework provides a platform for improving agricultural resilience, optimizing resource use, and supporting sustainable food production.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall K1)

Presentation: A first view of the EO-driven digital twin for ice sheets

Authors: Sebastian B. Simonsen
Affiliations: DTU Space, Earthwave, United Kingdom, Lancaster University, United Kingdom, University of Edinburgh, United Kingdom, Katholieke Universiteit Leuven, Belgium, ENVEO, Austria, Geological Survey of Denmark and Greenland, Denmark, Greenland Survey – Asiaq, Greenland
The response of ice sheets and shelves to climate change profoundly influences global human activities, ecosystems, and sea-level rise. As such, ice sheets are a vital component of the Earth system, making them a cornerstone for developing a future Digital Twin Earth. Here, we present the initial steps toward an Earth Observation (EO)-driven Digital Twin Component (DTC) for Ice Sheets, marking an effort to understand and predict the behavior of the Greenland Ice Sheet and Antarctic ice shelves. To meet the diverse needs of stakeholders, the DTC Ice Sheets will adopt a modular design comprising 10 Artificial Intelligence/Machine Learning (AI/ML) and Data Science modules, all targeting four initial use cases that will drive the development of the DTC Ice Sheets. These initial use cases are: (1) Greenland Hydropower Potential: By modeling and monitoring ice sheet hydrology and meltwater runoff, the DTC Ice Sheets will evaluate Greenland’s renewable energy opportunities and provide actionable insights for sustainable hydropower development. (2) EU Sea Level Response Fingerprint: The DTC Ice Sheets will deliver region-specific insights into how ice sheet mass loss will contribute to global sea level rise, focusing on the implications for coastal infrastructure across Europe. (3) State and Fate of Antarctic Ice Shelves: Through detailed stability analysis, the DTC Ice Sheets will investigate the vulnerability of Antarctic ice shelves to climatic and oceanic changes, shedding light on their role in regulating ice sheet mass loss and global sea level. (4) Enhanced Surface Climate: Leveraging EO data and climatology, the DTC Ice Sheets will improve understanding of surface climate interactions, advancing predictions of feedback loops between ice sheets, the atmosphere, and the ocean. The DTC Ice Sheets implementation on the DestinE Core Service Platform (DESP) will consist of interconnected modules serving the use cases.
Still, when fully implemented, it will also provide a holistic view of an ice sheet digital twin. Hence, the DTC Ice Sheets aims to provide high-resolution insights into ice sheets' past, present, and future states, align with stakeholders, and foster interdisciplinary collaboration by interfacing with other thematic Digital Twin Earth systems, such as ocean and coastal processes. The DTC Ice Sheets will empower stakeholders to explore what-if scenarios to address climate change's impacts and feedback mechanisms, all grounded in current state-of-the-art EO data of ice sheets.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall K2)

Session: B.04.05 Remote sensing for disaster preparedness and response to geo-hazards, hydro-meteorological hazards and man-made disasters - PART 1

Every year, millions of people worldwide are impacted by disasters. Floods, heat waves, droughts, wildfires, tropical cyclones and tornadoes cause increasingly severe damage. Civil wars and armed conflicts in various parts of the world, moreover, lead to a growing number of refugees and large changes in population dynamics. Rescue forces and aid organizations depend on up-to-date, area-wide and accurate information about hazard extent, exposed assets and damage in order to respond quickly and effectively. In recent years, it has also become possible to prepare for specific events or to monitor vulnerable regions of the world on an ongoing basis, thanks to the rapidly growing number of satellites launched and their freely available data. Providing information before, during or after a disaster in a rapid, scalable and reliable way, however, remains a major challenge for the remote sensing community.
Obtaining an area-wide mapping of disaster situations is time-consuming and requires a large number of experienced interpreters, as it often relies on manual interpretation. Nowadays, the amount of remote sensing data and related suitable sensors is steadily increasing, making it impossible in practice to assess all available data visually. Therefore, increased automation of (potential) impact assessment methods using multi-modal data opens up new possibilities for effective and fast disaster preparedness and response workflows. In this session, we want to provide a platform for research groups to present their latest research activities aimed at addressing the problem of automatic, rapid, large-scale, and accurate information retrieval from remotely sensed data to support disaster preparedness and response to geo-hazards, hydro-meteorological hazards and man-made disasters/conflicts.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall K2)

Presentation: DeepFuse: Harnessing AI and Earth Observation for Enhanced Flood Inundation Monitoring

Authors: Prof. Dr.-ing. Antara Dasgupta, Dr.-Ing. Rakesh Sahu, Paul Hosch, Prof. Dr. Björn Waske
Affiliations: Institut für Wasserbau und Wasserwirtschaft, RWTH Aachen Universität, Computer Science and Engineering Department, Galgotias University, Institute of Informatics, Universität Osnabrück
Despite the growing number of Earth Observation satellites equipped with active microwave sensors suitable for flood mapping, the observation frequency remains a limitation for effectively characterizing inundation dynamics. Capturing critical events such as the flood peak or maximum inundation extent continues to be challenging, representing a significant research gap in flood remote sensing. However, the rapid expansion of multimodal satellite hydrology archives, coupled with advancements in deep learning, offers a promising avenue to address this limitation in observation frequency. DeepFuse is a scalable data fusion methodology that utilizes deep learning (DL) and Earth Observation data to generate daily flood inundation maps at high spatial resolution. This proof-of-concept study demonstrates the potential of Convolutional Neural Networks (CNNs) to model flood inundation at the spatial resolution of Sentinel-1 (S1). By integrating temporally frequent but coarse-resolution datasets such as soil moisture and accumulated precipitation data from NASA’s SMAP and GPM missions, alongside static predictors like topography and land use, a CNN was trained on flood maps derived from S1 to predict high-resolution inundation patterns. The proposed methodology was applied to two sites: one in southwest France, focusing on the December 2019 flood event at the confluence of the Adour and Luy rivers, and one in Germany, focusing on the Christmas floods of 2023 in Lower Saxony. Predicted high-resolution flood maps were independently validated using flood masks derived from Sentinel-2, created using a Random Forest classifier. Initial results indicate that the CNN can generalize some hydrological and hydraulic processes driving inundation, even in complex topographical regions, enabling the bridging of spatiotemporal resolution gaps in satellite-based flood monitoring.
We also demonstrate model transferability in space and in time, showcasing the potential of using such approaches in typically data scarce regions. Achieving daily flood monitoring at high resolution will enhance the understanding of spatial inundation dynamics and facilitate the development of more effective parametric hazard re/insurance products, helping to address the flood protection gap.
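For illustration, the kind of independent validation described above (predicted flood maps checked against Sentinel-2-derived reference masks) is commonly summarized with categorical verification scores. A minimal sketch with invented toy masks follows; the abstract does not specify which scores DeepFuse reports, so the POD/FAR/CSI choice here is an assumption:

```python
import numpy as np

def flood_scores(pred: np.ndarray, ref: np.ndarray) -> dict:
    """Categorical verification scores for binary flood masks.

    POD = hits / (hits + misses)                 (probability of detection)
    FAR = false alarms / (hits + false alarms)   (false alarm ratio)
    CSI = hits / (hits + misses + false alarms)  (critical success index)
    """
    hits = int(np.sum((pred == 1) & (ref == 1)))
    misses = int(np.sum((pred == 0) & (ref == 1)))
    false_alarms = int(np.sum((pred == 1) & (ref == 0)))
    return {
        "POD": hits / (hits + misses),
        "FAR": false_alarms / (hits + false_alarms),
        "CSI": hits / (hits + misses + false_alarms),
    }

# Toy 4x4 masks (1 = flooded); real inputs would be the CNN prediction and an
# independent Sentinel-2-derived reference mask at matching resolution.
ref = np.array([[1, 1, 0, 0],
                [1, 1, 0, 0],
                [0, 0, 0, 0],
                [0, 0, 0, 0]])
pred = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 0, 0]])
scores = flood_scores(pred, ref)
print(scores)
```

CSI is often preferred for floods because it ignores the (typically dominant) dry-dry pixels that would inflate plain accuracy.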
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall K2)

Presentation: Enhancing Rapid Tsunami Hazard Estimation: the ALTRUIST Project

Authors: Michela Ravanelli, Elvira Astafyeva, Mattia Crespi
Affiliations: Sapienza University of Rome, IPGP, Université Paris Cité
Tsunamis are among the most devastating geo-hazards, posing significant threats to coastal communities. The financial losses and human toll are immense, as exemplified by the 2004 Sumatra-Andaman earthquake and tsunami, which caused over 200,000 fatalities. Critically, the first tsunami waves reached Sri Lanka's coasts within two hours without any warning issued. This highlights the urgent need for reliable and timely tsunami warning systems, especially in earthquake-prone regions. Complementary capabilities that enhance or support existing warning systems could significantly improve coastal safety. However, failures of tsunami warning systems relying only on seismic and sea level data over the past 15 years underline the necessity of exploring new paradigms for ocean monitoring and tsunami hazard estimation to reinforce traditional observational techniques. Over the last 30 years, GNSS (Global Navigation Satellite Systems), thanks to its dense deployment and high temporal resolution, has played a pivotal role in analyzing the spatial and temporal dynamics of geo-hazards, capturing variations across time scales from decades to sub-seconds and spatial scales from local to global. Specifically, in the past decade, GNSS Ionospheric Seismology has made remarkable progress in detecting earthquake and tsunami signatures in the ionosphere, the upper part of Earth's atmosphere. By analyzing GNSS-TEC (Total Electron Content) observations, this field studies the ionospheric response to geo- and human-induced hazards. Tsunamis and earthquakes generate acoustic and gravity waves (AGWs), which, owing to the decrease of atmospheric density with altitude, can propagate up to the ionosphere, causing TEC disturbances. This enables the remote sensing of the ionosphere, allowing the imaging of Total Electron Content and providing valuable insights for geo-hazard assessments.
The ALTRUIST (totAL variomeTry foR tsUnamI hazard eStimaTion) project fits squarely into this context, aiming to improve the reliability and accuracy of real-time tsunami warning systems by leveraging the GNSS Total Variometric Approach (TVA) methodology. Developed at Sapienza University of Rome, TVA combines two innovative algorithms: VADASE (Variometric Approach for Displacement Analysis Stand-Alone Engine) and VARION (Variometric Approach for Real-Time Ionosphere Observation) [1]. These algorithms process the same real-time GNSS data streams to simultaneously estimate ground motion, including co-seismic displacements, and ionospheric TEC disturbances caused by earthquakes and tsunamis. This dual-layer capability allows TVA to bridge geospheric observations and support traditional tsunami warning systems. Indeed, the ground motion analysis (through VADASE) provides velocity and displacement data, estimating the magnitude and direction of ground motion critical for seafloor displacement evaluation and tsunamigenic potential assessment, while the ionospheric TEC monitoring (through VARION) tracks TEC anomalies, offering insights into vertical sea surface displacement and validating tsunami potential within 10 minutes of seismic rupture. The TVA methodology was tested in a real-time scenario during the 2015 Mw 8.3 Illapel earthquake and tsunami, demonstrating its potential to enhance tsunami genesis estimation and contribute significantly to preparedness workflows and disaster risk reduction strategies. Currently, TVA is being implemented in real time through the ALTRUIST project, supported by the AXA Research Fund and UNESCO-IOC within the United Nations Ocean Decade [2]. ALTRUIST is being piloted using the GNSS network of the Observatoire Volcanologique et Sismologique de Guadeloupe (IPGP) in the French Caribbean.
ALTRUIST integrates a front-end dashboard for real-time and interactive data visualization and a modular, scalable back-end layer for real-time and historical data management, allowing easy integration with external modules to expand system capacity. This architecture supports simultaneous monitoring of ground motion and ionospheric TEC disturbances, marking a breakthrough in multi-sphere geospheric analysis. ALTRUIST leverages the full capabilities of multi-constellation GNSS systems, including Galileo, to enable comprehensive global monitoring, even in remote and underserved regions. By incorporating Galileo's advanced features such as enhanced signal accuracy and robust availability, ALTRUIST ensures reliable and real-time applicability wherever GNSS data access is available, significantly enhancing tsunami early warning systems worldwide. ALTRUIST represents a significant leap in GNSS technology, transitioning from academic research to practical applications. Its cost-effective implementation leverages existing GNSS networks, addressing sustainability challenges. The project's scalability makes it particularly relevant for regions like the South Pacific, where traditional warning systems often fall short. Finally, by providing additional resources for tsunami hazard estimation and integrating multi-geospheric observations, ALTRUIST establishes a new benchmark for real-time tsunami hazard assessment. It has the potential to complement traditional tsunami early warning systems, strengthen collaboration within global tsunami alert frameworks, and contribute to enhancing the safety of coastal communities worldwide. [1] Ravanelli M. et al. (2021). GNSS Total Variometric Approach: First Demonstration of a Tool for Real-Time Tsunami Hazard Estimation, Scientific Reports, 11(1). [2] https://axa-research.org/funded-projects/climate-environment/mitigating-tsunamis-threats-and-destructive-impacts-through-enhanced-navigation-satellite-system
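For illustration, the variometric principle behind VARION (time-differencing the geometry-free carrier-phase combination, so that the constant phase ambiguities cancel and only the TEC change between epochs remains) can be sketched as follows. The GPS frequencies and the ionospheric constant are standard values; the phase series is invented and the function name is hypothetical:

```python
# Variometric TEC-rate sketch in the spirit of VARION. Differencing consecutive
# geometry-free phases L4 = L1 - L2 (in metres) cancels the carrier-phase
# ambiguities, leaving the change in slant TEC between epochs.

F1 = 1575.42e6  # GPS L1 frequency, Hz
F2 = 1227.60e6  # GPS L2 frequency, Hz
A = 40.3        # ionospheric refraction constant, m^3/s^2
TECU = 1.0e16   # electrons/m^2 per TEC unit

def delta_stec_tecu(l4_now_m: float, l4_prev_m: float) -> float:
    """Change in slant TEC (TECU) from two consecutive geometry-free phases (metres)."""
    factor = (F1**2 * F2**2) / (A * (F1**2 - F2**2) * TECU)
    return factor * (l4_now_m - l4_prev_m)

# Invented geometry-free phase series (metres) at 1 s sampling
l4_series = [2.8140, 2.8141, 2.8144, 2.8150]
rates = [delta_stec_tecu(b, a) for a, b in zip(l4_series, l4_series[1:])]
print([f"{r:.4f}" for r in rates])  # TECU/s
```

In a real-time deployment these per-epoch TEC rates, computed per satellite-receiver pair, are the raw signal in which AGW-induced disturbances are sought.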
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall K2)

Presentation: Advancing Drought Resilience in South Africa: The ANIN Project and its Earth Observation-Based Early Warning System

Authors: Juan Suarez, Carlos Domenech, PhD Èlia Canoni, PhD Beatriz Revilla-Romero, PhD Pablo Torres, Mr Jesús Ortuño, Mr Mxolisi Mukhawana, PhD Ndumiso Masilela, PhD Christina Botai, Mr Jaco de Wit, Mr Thomas Tsoeleng, Mr Morwapula Mashalane, MSc Sibonile Sibanda, PhD Andy Dean, Mr Vangelis Oikonomopoulos, Mr Emile Sonnenveld, Wai-Tim Ng, MSc Fabrizio Ramoino, PhD Clement Albergel
Affiliations: GMV, Department of Water and Sanitation, South Africa Weather Service, South Africa National Space Agency, Hatfield Consultants Africa, Hatfield Consultants, AgroApps, VITO, ESA-ESRIN, ESA-ECSAT
The ANIN Project, also referred to as the South Africa Drought Monitoring National Incubators, is an initiative financed by the European Space Agency (ESA) Earth Observation for Africa (EO Africa) programme. This project sought to bolster South Africa's resilience to droughts by creating a comprehensive drought early warning system specifically designed to meet the needs of South African stakeholders. The aim was to develop an advanced Earth Observation (EO)-based solution that would enable the country to better prepare for, mitigate, and respond to droughts, an escalating concern in the face of increasing climate variability in southern Africa. A key outcome of the project is an open-source drought monitoring system that uses EO data to generate drought indices. This system provides near real-time insights into the state of drought conditions across South Africa. Recognising the multifaceted nature of drought, ANIN employs a multi-pronged approach to evaluate various drought types, including meteorological, soil moisture (agricultural/ecological), and hydrological droughts. This approach is based on the understanding that these drought types are interconnected and propagate through the hydrological cycle. Meteorological drought is evaluated using the Standardised Precipitation Index (SPI) and the Standardised Precipitation-Evapotranspiration Index (SPEI). The SPI, a widely used indicator, compares current precipitation accumulations to historical data, revealing precipitation deficits. The SPEI, in contrast, incorporates both precipitation and potential evapotranspiration (PET), reflecting the impact of temperature on water demand. Both the SPI and SPEI are calculated at various time scales, enabling the detection of both short-term and long-term drought conditions. Soil moisture drought, which has direct implications for agriculture and ecosystems, is monitored in ANIN using the Vegetation Condition Index (VCI) and the Combined Drought Indicator (CDI). 
The VCI compares current Normalised Difference Vegetation Index (NDVI) values to historical ranges, indicating the health of vegetation and levels of stress. The CDI integrates SPI, Soil Moisture Anomaly (SMA), and FAPAR anomaly data to provide a comprehensive assessment of agricultural drought risk and recovery stages. Hydrological drought is monitored using the Standardised Streamflow Index (SSFI) and the Standardised Groundwater Index (SGI). The SSFI assesses streamflow anomalies in relation to long-term averages, providing insights into river discharge conditions. Similarly, the SGI evaluates groundwater level anomalies, reflecting the state of groundwater resources. The ANIN system is fully integrated into the SANSA Digital Earth South Africa (DESA) infrastructure. This integration enables seamless management and analysis of EO data for drought monitoring. The collaborative nature of ANIN was crucial in achieving these outcomes. From the outset, the project was designed to incorporate local knowledge and needs (data, analysis, infrastructure gaps) and to build local capacity. This was achieved by involving South African partners in the co-design and co-development of the system, culminating in its deployment within the SANSA infrastructure. ANIN has significantly enhanced South Africa's drought monitoring capabilities by providing a more precise and efficient method for tracking drought conditions in near real-time. User feedback indicates that the system's drought indices accurately reflect conditions across different regions, supporting informed decision-making in water management, agriculture, and disaster response. The impact of ANIN extends beyond data provision; the collaboration between European and South African partners has empowered local stakeholders to independently manage and utilise EO technology. The success of the project has created opportunities for potential upscaling to encompass the Southern African Development Community (SADC) region. 
There is considerable interest in expanding ANIN's reach to address similar environmental challenges faced by the sixteen SADC member countries. This expansion would require establishing partnerships with relevant agencies, customising the system for regional needs, and establishing a unified platform for drought monitoring across Southern Africa.
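Several of the indices described above (SPI, SSFI, SGI) follow the same standardized-anomaly pattern: accumulate the variable over a time scale, then express the current value relative to the historical distribution. A minimal sketch of that idea, noting that the operational SPI fits a gamma distribution to the accumulations rather than using the plain z-score shown here:

```python
import numpy as np

def standardized_index(series, window=3):
    """Simplified standardized drought index (z-score variant).

    The operational SPI maps gamma-fitted accumulation quantiles onto a
    standard normal; this sketch standardizes the rolling accumulation
    directly, which is enough to illustrate the mechanism."""
    series = np.asarray(series, dtype=float)
    # accumulate over `window` time steps (e.g. months)
    acc = np.convolve(series, np.ones(window), mode="valid")
    # negative values indicate drier-than-normal conditions
    return (acc - acc.mean()) / acc.std()
```

The same template applies to streamflow (SSFI) or groundwater levels (SGI) by swapping the input variable.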

Tuesday 24 June 14:00 - 15:30 (Hall K2)

Presentation: Integrating Remote Sensing and Tsunami Numerical Simulations for Building Damage Mapping

Authors: Bruno Adriano, Shunichi Koshimura
Affiliations: International Research Institute of Disaster Science, Tohoku University
Assessing damaged buildings after a devastating disaster is essential for prompt and effective rescue and relief efforts. Recently, combined approaches based on machine learning and Earth observation technologies have performed increasingly well in automatic damage recognition over large affected areas. However, machine learning methods require many human expert-labeled training samples, which are often unavailable. Collecting them after a disaster strikes is not feasible because affected areas frequently become isolated or present dangers to early field survey missions. To address this challenge, previous studies have leveraged existing benchmark datasets built from previous disasters and trained machine-learning models that are expected to perform well when applied to new disaster events. Although such approaches have shown success in some cases, the generalization ability of these machine learning models still needs improvement, often requiring a minimum number of training samples collected from the affected areas to guarantee acceptable performance. In this context, this study presents another approach to addressing the challenge of collecting ground truth samples soon after a disaster occurs. Physics-based computational models can simulate the intensity of a given disaster, such as peak ground acceleration in earthquake events or inundation depth in the case of flood disasters. This work introduces a novel building damage mapping method for tsunami disasters that uses disaster intensity as complementary information to train a machine learning classifier. In the absence of training data, the primary assumption is that disaster intensity is correlated with the degree of building damage and can be used as additional data in a weakly supervised scheme. We evaluate the performance of our proposed method on two tsunami disasters, namely the 2011 Tohoku Tsunami and the recent 2024 Noto Peninsula Tsunami, both in Japan. 
The experimental results showed that our method performs similarly to a fully supervised scenario in which training samples are available.
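The weak-supervision idea (simulated disaster intensity standing in for field-surveyed damage labels) can be illustrated with a minimal sketch on synthetic data; the features, threshold, and classifier below are all placeholder assumptions, not the authors' model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic illustration: per-building features derived from pre/post-event
# SAR, plus a simulated inundation depth from a physics-based tsunami model
# used as a *proxy* for damage labels (all numbers here are made up).
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 4))                       # SAR change features
depth = 2.0 * features[:, 0] + rng.normal(0.5, 1.0, 500)   # simulated depth (m)

# Weak labels: buildings with simulated inundation above a threshold are
# treated as "damaged" for training, in lieu of ground-truth surveys.
weak_labels = (depth > 2.0).astype(int)

clf = LogisticRegression().fit(features, weak_labels)
scores = clf.predict_proba(features)[:, 1]                 # damage likelihood
```

In practice the weak labels would come from the tsunami simulation gridded onto building footprints, not from a linear function of the features.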

Tuesday 24 June 14:00 - 15:30 (Hall K2)

Presentation: A Novel Two-Stage Approach for Buildings and Roads Damage Assessment in Remote Sensing Imagery

Authors: Lorenzo Innocenti, PhD. Edoardo Arnaudo, Jacopo Lungo
Affiliations: LINKS Foundation
Natural disasters, such as earthquakes, hurricanes, and floods, can cause widespread destruction, disrupting communities and critical infrastructure. The rapid assessment of infrastructure damage following natural disasters is crucial for effective emergency response and resource allocation. Traditionally, damage assessment has relied on satellite imagery and human analysis. This approach has two main drawbacks: the coarse time resolution of satellite imagery and the time-consuming nature of manual image analysis. Machine learning (ML) models can significantly speed up the image analysis process, which is critical in emergency situations, and can also aid manual assessment by highlighting areas of concern that humans might overlook, increasing the overall effectiveness of damage assessment. Unmanned aircraft systems (UAS) can further expedite the process by capturing high-resolution images without the delay associated with satellite imagery. These aerial platforms provide high-resolution imagery that benefits both human analysts and ML models, enabling more precise and timely information about the extent and nature of infrastructure damage. In this study, we propose a neural network (NN) model designed for damage assessment in disaster scenarios. The model takes as input two images: a pre-disaster image from a very high-resolution (VHR) satellite, such as Maxar, and a post-disaster image, which can be either from another VHR satellite or a downscaled aerial image. By analyzing these images, the model generates a map that highlights the locations and extent of damage to the infrastructure present within the area. To address the problem of damage assessment on both buildings and roads, we propose a novel two-stage approach combining infrastructure segmentation and change detection. 
The first stage consists of an infrastructure segmentation model that classifies each pixel in remote sensing imagery into three categories: background, road, or building. While there are existing datasets focused on damage assessment of buildings, there is a notable lack of datasets for road damage assessment and even road remote sensing segmentation. The gap is significant because damaged road infrastructure can severely hamper emergency response efforts by cutting off affected areas from aid and connectivity after a natural disaster. Therefore, developing comprehensive datasets that include road damage assessment is as important as developing those focused solely on buildings. To overcome the dataset limitation, we develop a novel annotation pipeline utilizing state-of-the-art foundation models to automatically generate training data. Specifically, we employ the Microsoft Buildings Footprint dataset and the Microsoft Road Detection dataset as prompts for a segmentation foundation model to generate training label images. The Microsoft Buildings Footprint dataset is a collection of building footprints derived from satellite imagery, which includes over 1.4 billion building footprints detected from Bing Maps imagery between 2014 and 2024, using data from sources like Maxar, Airbus, and IGN France. The Microsoft Road Detection dataset consists of road detections derived from Bing Maps aerial imagery between 2020 and 2022, and contains approximately 48.9 million kilometers of roads. Both are freely available under the Open Data Commons Open Database License (ODbL). To generate image labels from infrastructure footprints, we utilize the Efficient Segment Anything Model (ESAM), an advanced iteration of the Segment Anything Model. ESAM is a highly versatile general segmentation model, supporting both point and text prompts. This enables users to perform segmentation tasks by marking specific points on an image or providing text descriptions. 
In our application, we use points from the buildings and roads datasets, along with a series of words representing roads and buildings. This annotated dataset was then used to train a lightweight segmentation model tailored for infrastructure identification. The model consists of a ConvNeXt-based neural network, a modern convolutional neural network (CNN) designed for high performance in vision tasks. It follows a U-Net architecture with skip connections, allowing it to extract hierarchical features in the encoder and reconstruct the segmentation in the decoder using higher-resolution data from the skip connections. Our version also features an encoder pretrained on ImageNet, which improves its performance and reduces the training time required for our task. The second stage employs a change detection model trained on an existing remote sensing change detection dataset. For this, we used a similar model, also based on ConvNeXt with a pretrained encoder. The model first extracts features from both pre- and post-event images; the decoder then takes as input the absolute difference of the pre- and post-event features, allowing the change detection to work both ways (i.e., detecting changes where a building is present in the pre-event image but not in the post-event image, and vice versa). For training the change detection model, we utilized the SYSU-CD dataset. This dataset contains 20,000 pairs of 0.5-meter resolution aerial images, each sized 256×256 pixels, taken between 2007 and 2014 in Hong Kong. The types of changes captured in the dataset include newly built urban buildings, suburban expansion, groundwork before construction, road expansion, and similar. 
By combining the change detection scores with the infrastructure segmentation information our system can identify and categorize building damage: the system takes the segmented buildings and roads from the pre-disaster images, checks the damage assessment score and, if the pixels inside the infrastructure have a high score, they are marked as damaged or destroyed. This tool is part of the OVERWATCH project, which aims to provide an immersive and intuitive operational crisis asset management tool for public authorities responsible for civil safety and emergency services. Our tool, integrated into the project platform, provides a web-based dashboard that incorporates an automatic Earth observation-based pipeline. Public authorities can upload pre- and post-disaster images, which are then processed to generate infrastructure damage scores. This process is entirely automated, requiring no human interaction beyond the initial image upload. The platform ensures that emergency responders have on-demand access to accurate and timely damage assessments, enhancing decision-making and situational awareness during crises. This research was conducted within the framework of the Horizon EU OVERWATCH project (Grant ID. 101082320).
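The final combination step, flagging pixels where segmented infrastructure overlaps a high change score, can be sketched as follows; the integer class encoding and the 0.5 threshold are illustrative assumptions, not the OVERWATCH implementation:

```python
import numpy as np

def damage_map(seg_mask, change_score, threshold=0.5):
    """Combine the two stages: infrastructure pixels (assumed encoding:
    0 = background, 1 = road, 2 = building) whose change-detection score
    exceeds `threshold` are flagged as damaged/destroyed."""
    seg_mask = np.asarray(seg_mask)
    change_score = np.asarray(change_score)
    return (seg_mask > 0) & (change_score > threshold)

seg = np.array([[0, 1], [2, 2]])
score = np.array([[0.9, 0.9], [0.2, 0.8]])
dmg = damage_map(seg, score)
# a background pixel is never flagged, even with a high change score
```

Gating the change score with the pre-disaster segmentation is what restricts the output to damage on buildings and roads rather than arbitrary scene changes.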

Tuesday 24 June 14:00 - 15:30 (Hall K2)

Presentation: Investigating all-weather rapid flood mapping with Sentinel-1 Ground Range Detected and Single Look Complex data.

Authors: Nikolaos Ioannis Bountos, Maria Sdraka, Angelos Zavras, Ilektra Karasante, Andreas Karavias, Themistocles Herekakis, Angeliki Thanasou, Dimitrios Michail, Professor Ioannis
Affiliations: National Technical University Of Athens, Harokopio University of Athens, National Observatory of Athens
Global floods, driven by climate change, pose significant risks to human lives, infrastructure, and ecosystems. Recent disasters in Pakistan and Valencia emphasize the pressing need for accurate flood mapping to support recovery efforts, assess vulnerabilities, and improve preparedness. The Sentinel missions deliver abundant remote sensing data, presenting a vital opportunity to address this challenge. Sentinel-1's Synthetic Aperture Radar data is particularly well-suited, offering all-weather, day-and-night imaging capabilities ideal for the task. The significant advancements in deep learning, which have already provided major milestones in both computer vision and remote sensing, present a powerful opportunity to address this critical challenge. Its application to flood mapping, however, is limited, mainly due to the lack of large curated datasets. To address this gap, we curate time series data from Sentinel-1 SAR imagery for 43 flood events worldwide, manually annotated by SAR experts. The dataset includes two SAR products: a) Ground Range Detected (GRD) SAR, optimized for flood mapping, and b) minimally processed Single Look Complex (SLC) SAR, retaining both phase and amplitude signals. These products are paired with reference annotation maps classifying each pixel into one of the following categories: “Flood”, “Permanent water”, “No water”. We name the resulting dataset “Kuro Siwo”. We enhance Kuro Siwo with an extensive unlabeled set of SAR samples augmenting both products to explore the advances in large-scale self-supervised pretraining for remote sensing [1,2]. The annotated dataset features 67,490 time series and 202,470 unique SAR samples, stored as 224 × 224 tiles, with rich metadata such as acquisition dates, climate zones, and elevation information. Combined, the full dataset offers 533,847 time series and 1,601,511 unique SAR samples, making it a groundbreaking resource for flood mapping and beyond. 
Building on Kuro Siwo, we construct a framework to evaluate the capabilities of GRD and SLC products for rapid flood mapping by developing a comprehensive benchmark of state-of-the-art models inspired by the semantic segmentation, change detection, and temporal modeling domains. Our benchmark includes both convolutional and transformer-based architectures, e.g., U-Net [3] and UPerNet [4], implemented with various backbone variants from the ResNet [5] and Swin Transformer [6] families, providing strong baselines for future research. As expected, heavily processed GRD data are better suited for rapid flood mapping with conventional real-valued architectures, achieving an F1 score of ~83.85% for the binary water/no-water classification and 80.12% and 78.24% for the flood and permanent water categories respectively. However, our experiments demonstrate that deep learning models can effectively classify even unrefined SLC data when paired with high-quality annotations, like those in Kuro Siwo. For example, a standard U-Net with a ResNet18 backbone achieves an F1 score of ~79.94% on binary water detection, and ~71.20% and ~76.76% on the flood and permanent water categories, respectively. These results are particularly noteworthy, as the models in our benchmark were not specifically optimized for SLC's unique characteristics. Investigating SAR's complex-valued data with methods tailored to this domain is a promising avenue for future work. This comparative study sets a high standard for future GRD and SLC-based methods for the critical application of rapid flood mapping.
[1] Cong, Yezhen, et al. "SatMAE: Pre-training transformers for temporal and multi-spectral satellite imagery." Advances in Neural Information Processing Systems 35 (2022): 197-211.
[2] Bountos, Nikolaos Ioannis, Arthur Ouaknine, and David Rolnick. "FoMo-Bench: a multi-modal, multi-scale and multi-task Forest Monitoring Benchmark for remote sensing foundation models." arXiv preprint arXiv:2312.10114 (2023). 
[3] Ronneberger, O., Fischer, P., and Brox, T. (2015). U-Net: Convolutional networks for biomedical image segmentation. Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, pages 234–241.
[4] Xiao, T., Liu, Y., Zhou, B., Jiang, Y., and Sun, J. (2018). Unified perceptual parsing for scene understanding. Proceedings of the European Conference on Computer Vision (ECCV), pages 418–434.
[5] He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778.
[6] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. (2021). Swin Transformer: Hierarchical vision transformer using shifted windows. Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10012–10022.
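The per-class F1 scores reported in the benchmark above can be computed from pixel-wise predictions as in this short sketch; the integer class encoding is an assumption for illustration:

```python
import numpy as np

def per_class_f1(y_true, y_pred, classes=(0, 1, 2)):
    # assumed encoding: 0 = no water, 1 = permanent water, 2 = flood
    scores = {}
    for c in classes:
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        denom = 2 * tp + fp + fn
        scores[c] = 2 * tp / denom if denom else 0.0
    return scores

y_true = np.array([0, 0, 1, 2, 2, 2])
y_pred = np.array([0, 1, 1, 2, 2, 0])
f1 = per_class_f1(y_true, y_pred)
```

In practice these arrays would be flattened prediction and annotation maps over the 224 × 224 tiles.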

Tuesday 24 June 14:00 - 15:30 (Hall F1)

Session: C.06.01 Sentinel-1 mission performance and product evolution

The Sentinel-1 mission, a joint initiative of the European Commission (EC) and the European Space Agency (ESA), comprises a constellation of two polar-orbiting satellites performing C-band synthetic aperture radar imaging day and night, enabling them to acquire imagery regardless of the weather. The C-band SAR instrument can operate in four exclusive imaging modes with different resolution (down to 5 m) and coverage (up to 400 km). It provides dual polarization capability, short revisit times and rapid product delivery. Since the launches of Sentinel-1A and Sentinel-1B, in 2014 and 2016 respectively, many improvements have been made to mission performance and the products have evolved in many respects. Sentinel-1B experienced an anomaly in December 2021 which rendered it unable to deliver radar data, and the launch of Sentinel-1C is planned for 2023. This session will present the recent improvements related to a) the upgrade of product characteristics, performance and accuracy, b) better characterization of the instrument with the aim of detecting anomalies or degradation that may impact data performance, c) anticipation of performance degradation by developing and implementing mitigation actions and d) explorative activities aiming to improve product characteristics or expand the product family to stay on top of the evolving expectations of the Copernicus Services.

Tuesday 24 June 14:00 - 15:30 (Hall F1)

Presentation: Enhancing Sentinel-1 derived Soil Moisture Product Validation: Upscaling Methodologies and Insights from the Copernicus GBOV Service

Authors: Ana Pérez-Hoyos, Rémi Grousset, Christophe Lerebourg, Dr Marco Clereci, Nadine Gobron, Ernesto Lopez-Baeza
Affiliations: Albavalor, ACRI-ST, European Commission Joint Research Centre
The Copernicus Ground-Based Observations for Validation (GBOV) service (https://gbov.land.copernicus.eu), aims to develop and disseminate robust in-situ datasets from a network of ground-based monitoring sites. These datasets enable systematic and quantitative validation of Earth Observation (EO) products generated by the Copernicus Land Monitoring Service. The GBOV service provides two types of datasets: Reference Measurements (RMs), consisting of raw ground observations from diverse contributing networks, and Land Products (LPs), which are upscaled variables specifically processed for EO validation purposes. This presentation focuses on one of the seven GBOV Land Products, soil moisture (SM), identified as an Essential Climate Variable (ECV) by the Global Climate Observing System (GCOS). Specifically, it emphasizes the collection and processing of Surface Soil Moisture (< 5 cm depth) to generate quality daily soil moisture RMs (RM-10) and precipitation (RM-11) with hourly temporal resolution. These datasets are compiled from over 40 globally distributed sites, covering a more comprehensive range of land cover types and climatic conditions. A core component of this work involves developing an upscaling methodology to transform in-situ RM soil moisture data (RM-10) into a Land Product (LP-6). The LP-6 dataset provides surface soil moisture (SSM) values aggregated over a 1 km grid aligned with the 0.1° × 0.1° Copernicus grid. The upscaling approach incorporates Sentinel-3 (Sea and Land Surface Temperature Radiometer) SLSTR-derived parameters, namely, the Normalized Difference Vegetation Index (NDVI), Land Surface Temperature (LST), and the Temperature-Vegetation Dryness Index (TVDI). Three modelling scenarios were evaluated: i) TVDI as a stand-alone proxy for soil moisture, ii) NDVI and LST as input variables, and iii) a more comprehensive model incorporating all three proxies (i.e., NDVI, LST and TVDI). 
A wide range of statistical and machine learning (ML) algorithms were evaluated to establish robust transfer functions that model the relationship between soil moisture measurements and remote sensing variables. These algorithms captured both linear and non-linear patterns in the data; they included linear regression, polynomial regression (2nd and 3rd degree), logarithmic models, interaction terms, regularized regression (i.e., ridge regression), random forest, extreme gradient boosting (XGBoost) and support vector machines. Additionally, a categorical monthly dummy variable was incorporated into the linear, polynomial and logarithmic models to account for seasonality. Model performance was evaluated using two key metrics: the coefficient of determination (R²), which measures the proportion of variance in soil moisture explained by the model, and the Root Mean Squared Error (RMSE), which quantifies the average magnitude of prediction error. Results indicated that the most effective model was a second-degree polynomial incorporating a monthly temporal component, which proved critical for capturing seasonal patterns and significantly enhancing model accuracy. Logarithmic and third-degree polynomial models showed similar results. While ML models demonstrated strong performance, they yielded lower R² values than the polynomial models when tested on independent datasets. This suggests that simpler approaches may provide more reliable predictions for soil moisture upscaling. Performance was site-specific, with high R² values (>0.6) observed at locations such as Litchfield, Valencia, Saint Felix, and Montaut. In contrast, lower R² values (<0.3) were recorded at sites like Barlad, Calarasi, Darabani, and Tereno. Final models were validated against independent datasets such as ECMWF ERA5, providing a robust comparison and ensuring reliable validation of the models. 
A first assessment of CLMS Surface Soil Moisture 1 km product obtained from Sentinel-1 C-band SAR backscatter will be presented.
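The best-performing model family described above (a second-degree polynomial on NDVI/LST/TVDI plus a monthly dummy variable) can be sketched on synthetic data as follows; the variable ranges and coefficients are invented purely for illustration:

```python
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, PolynomialFeatures

# Synthetic proxies and a fabricated soil-moisture target (illustration only)
rng = np.random.default_rng(42)
n = 300
ndvi = rng.uniform(0.0, 1.0, n)
lst = rng.uniform(280.0, 320.0, n)          # Kelvin
tvdi = rng.uniform(0.0, 1.0, n)
month = rng.integers(1, 13, n)
sm = 0.4 - 0.3 * tvdi + 0.1 * ndvi + rng.normal(0.0, 0.02, n)

X = np.column_stack([ndvi, lst, tvdi, month])
model = make_pipeline(
    ColumnTransformer([
        # degree-2 polynomial terms on the three remote-sensing proxies
        ("poly", PolynomialFeatures(degree=2, include_bias=False), [0, 1, 2]),
        # categorical monthly dummy to capture seasonality
        ("month", OneHotEncoder(handle_unknown="ignore"), [3]),
    ]),
    LinearRegression(),
)
model.fit(X, sm)
r2 = model.score(X, sm)  # coefficient of determination on the training data
```

The real transfer functions are fitted per site against the RM-10 in-situ series and assessed on held-out data, which is where the R² and RMSE figures quoted above come from.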

Tuesday 24 June 14:00 - 15:30 (Hall F1)

Presentation: An overview of Sentinel-1 instruments status, L1 product performance and evolution

Authors: Muriel Pinheiro, Antonio Valentino, Guillaume Hajduch, Pauline Vincent, Andrea Recchia, Martin Steinisch, Riccardo Piantanida, Kersten Schmidt, Christoph Gisinger, Jakob Giez
Affiliations: ESA/ESRIN, Starion Group, CLS, Aresys, DLR-HR, DLR-IMF
The Copernicus Sentinel-1 (S-1) mission ensures the continuity of C-band SAR observations over Europe. The routine operations of the constellation are ongoing and performed at the maximum capacity allowed by the Sentinel-1A unit. The Sentinel-1B unit has not been operational since December 2021 and the new Sentinel-1C unit is going to be launched in December 2024. The mission is characterized by large-scale and repetitive observations, systematic production and a free and open data policy. Sentinel-1 data are routinely used by Copernicus and many other operational services, as well as in the scientific and commercial domains. A key aspect of the Copernicus programme is the constant provision of open and free high-quality data. This requires long-term engagement to carefully monitor, preserve, and improve the system and product performance. The Sentinel-1 SAR Mission Performance Cluster (SAR-MPC) is an international consortium of SAR experts in charge of the continuous monitoring of the S-1 instruments' status, as well as of the quality of the L1 and L2 products. This is typically done by analyzing the variation of key parameters over time using dedicated auxiliary products or standard data available to the public, e.g., antenna monitoring through RFC products, and radiometry and geolocation using standard data and dedicated Fiducial Reference Measurements (FRM). The SAR-MPC is also responsible for implementing any actions necessary to prevent or minimize quality degradation, e.g., in the event of an instrument anomaly. This includes updates of processor configuration files and of the S-1 Instrument Processing Facility (IPF) algorithms and/or their implementation [1]. A Sentinel-1A platform anomaly impacting the thruster in charge of orbit inclination control occurred in 2024. After the event, ESA decided, in agreement with the European Commission, to suspend the orbit inclination control manoeuvres for spacecraft safety reasons. 
This decision has further consequences for the Sentinel-1 orbit, which had, since the beginning of the mission, been maintained within a 200 m RMS diameter tube. The impact on interferometry, in particular due to the increased perpendicular baselines, has been analysed and considered acceptable. Since mid-April 2024, the orbit inclination has been evolving naturally, following a yearly pattern further modulated by a secular drift. The monitoring of baseline and burst synchronization continues to be done routinely and has evolved to better track the effects of the change in orbit control, e.g., to better identify latitude dependencies of baseline and burst synchronization. The monitoring of burst synchronization has also been extended to verify variations within the data-take, which are along-track baseline dependent, and to include verification using information from the Orbit (on-board) Position Schedule (OPS) angle, which allows monitoring of the instrument's capability to perform synchronization. The monitoring of both the SAR antenna health status and the SAR instrument is carried out by exploiting the dedicated auxiliary products and helps minimize degradation of SAR data quality originating from instrument aging or element failures. In the case of antenna health, the analysis is performed using the RF Characterization (RFC) products, which allow assessment of the status of the 280 TRMs composing the SAR antenna. In April 2024, monitoring of the antenna error matrices obtained from S1A RFC products identified the failure of a single antenna TRM module of Sentinel-1A. The identification of the anomaly was followed by a dedicated quality impact assessment that confirmed no appreciable degradation of the performance. A small degradation of one element in H pol of Sentinel-1A has been observed since January 2021 (a loss of about 3 dB gain in Rx and 1 dB gain in Tx), but with no impact on data quality at the moment. 
In general, the antenna monitoring shows that there has been no considerable degradation since 2017 for Sentinel-1A. The instrument status is monitored through the internal calibration and noise products, which can be used, for example, to generate time series of the PG product. Current analysis shows that the overall behavior of both instruments is quite stable, with the slope of the PG gain trend below 0.1 dB/year for both units. The radiometric and geolocation performance of L1 products is assessed using standard Sentinel-1A data and is also stable and within specifications. In particular, the DLR calibration site, composed of transponders and corner reflectors, is used to assess the stability of the radiometry, and current analysis including data from 2017 until 2024 shows a mean value of -0.1 dB and standard deviations below 0.25 dB for both units. In addition to the point-target analysis, gamma measurements over uniformly distributed targets like rainforest are also used to assess the relative radiometric accuracy of Sentinel-1 products. Based on the flatness of these profiles, updates of the antenna patterns and processing gains are performed to ensure radiometric accuracy. The geolocation accuracy is monitored using dedicated acquisitions over additional corner reflector calibration sites such as Surat Basin, Australia, and includes the compensation of known instrument and environmental effects, e.g., propagation through the troposphere and ionosphere or solid Earth deformation signals [2]. Current analysis of the point targets shows an absolute mean value of less than 20 cm in azimuth and less than 10 cm in range for the Sentinel-1A unit, with respective standard deviations of less than 10 cm and 30 cm. 
The regular monitoring also shows a few centimeters of impact on Sentinel-1 geolocation performance from the presently very high solar activity, which is attributed to accuracy limitations in the ionospheric delay corrections based on the GNSS-derived Total Electron Content (TEC) maps. Toward the beginning of 2024, Doppler jumps larger than usual (up to 50 Hz) were observed between different star-tracker (STT) configurations. An STT re-calibration was then proposed and implemented in June 2024, and shows positive results in terms of Doppler time series continuity. In general, with the only exception of a small degradation of the orbital tube of Sentinel-1A, the SAR-MPC monitoring activities show that the performance is nominal and stable. The quality of the L2 products is also continuously monitored by the SAR-MPC (see the dedicated presentation in [4]). The IPF has also continuously evolved to improve data quality and usability. The latest version is IPF 3.9, deployed on November 25th, 2024. The main evolutions included in the latest IPF versions are:
- Support of the specific timeline for S-1C and D
- Annotation of the used L0 A/C/N products in the manifest
- Correction of the ANX date annotated in the manifest
- Improved robustness of the burst ID annotation
- Compensation for the effect of RFI in the denoising vector annotation
- Correction and calibration of denoising vectors
Refer to https://sar-mpc.eu/processor/ipf/ for a full list of deployed changes. Together with the deployment of S1-IPF v3.9, the configuration of the SW module for Radio Frequency Interference (RFI) detection and mitigation has been updated. The change consists of a fine-tuning of the parameters aimed at reducing mis-detection, which currently affects typically less than 2% of the slices. The SAR-MPC also maintains a set of tools to support its own monitoring and expert analysis of Sentinel-1 data. 
Recently a new tool has been developed with two main purposes:
- to generate the engineering products (L0N) needed to exploit rank echoes for the de-noising of products acquired before 2018, and
- to generate accurate de-noising vectors starting from L1 products, exploiting the updated algorithms and the latest calibration data.
The tools will be made available to the public, e.g., to support ad hoc generation of noise vectors for archive products.
[1] Sentinel-1 Annual Performance Report 2023, online document, https://sentiwiki.copernicus.eu/web/document-library
[2] R. Piantanida et al., "Accurate Geometric Calibration of Sentinel-1 Data," EUSAR 2018; 12th European Conference on Synthetic Aperture Radar, 2018.
[3] Franceschi et al., "Operational RFI Mitigation Approach in Sentinel-1 IPF", submitted to EUSAR 2022.
[4] A. Benchaabane, "Sentinel-1 Level 2 Ocean Products Performance Monitoring: current status and evolutions", submitted to the LPS2022.
Acknowledgement: The results presented here are an outcome of the ESA contract Sentinel-1 / SAR Mission Performance Cluster Service 4000135998/21/I BG, funded by the EU and ESA. The views expressed herein can in no way be taken to reflect the official opinion of the European Space Agency or the European Union.
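The trend monitoring described above (e.g. checking that the PG gain slope stays below 0.1 dB/year) amounts to a least-squares fit over a time series; a sketch on synthetic data, with the sampling and drift rate invented for illustration:

```python
import numpy as np

def trend_db_per_year(t_years, gain_db):
    """Least-squares slope of a gain time series, in dB/year."""
    slope, _intercept = np.polyfit(t_years, gain_db, 1)
    return slope

t = np.linspace(2017.0, 2024.0, 85)       # roughly monthly samples
gain = -0.05 * (t - 2017.0)               # synthetic drift of -0.05 dB/year
slope = trend_db_per_year(t, gain)        # recovers about -0.05
```

The same fit applied to radiometric bias or geolocation residual series supports the stability statements quoted in the abstract.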

Tuesday 24 June 14:00 - 15:30 (Hall F1)

Presentation: Sentinel-1 Level-2 Ocean Products Performance Monitoring: current status and short-term evolutions

Authors: Amine Benchaabane, Romain Husson, Guillaume Hajduch, Charles Peureux, Pauline Vincent, Antoine Grouazel, Alexis Mouche, Frédéric Nouguier, Geir Engen, Anna Fitch, Harald Johnsen, Yngvar Larsen, Fabrice Collard, Gilles Guitton, Beatrice Mai, Andrea Recchia
Affiliations: CLS, IFREMER, NORCE, OceanDataLab, ARESYS
Introduction
Copernicus Sentinel-1 is a constellation of two C-band Synthetic Aperture Radars (SAR) operated by the European Space Agency, launched in April 2014 for S1A, flying in tandem with S1B from 2016 until December 2021, when S1B stopped operating. Their Level-1 products consist of high-resolution (HR) radar images distributed either as GRD (Ground Range Detected) or SLC (Single Look Complex) products. The knowledge acquired over decades of SAR acquisition over the ocean allows for the measurement of sea surface wind vectors, wave spectra and radial velocity, which are all provided in a single Level-2 product referred to as the OCN (OCeaN) product. L2 OCN data quality is constantly evolving thanks to radiometric and algorithmic improvements performed either at the Level-1 or Level-2 processing steps. This work presents the strategy, current performance and short-term evolutions of the Sentinel-1 L2 OCN products from the Mission Performance Cluster Service (S-1 MPC). This activity both depends on and benefits from evolutions of the Level-1 product quality (presented in a dedicated presentation). The first results and performance for the recently launched Sentinel-1C will be provided (not available at the time of writing this abstract).
Ocean Wind
The estimation of wind vectors is made possible by the knowledge of GMFs (Geophysical Model Functions) relating the calibrated NRCS to the wind in a statistical way. These functions are precisely known and have been evolving for decades. Their knowledge allows us to estimate a SAR wind vector at a resolution of 1 km from a Bayesian inversion using the co-polarized channel and an a priori provided by a Numerical Weather Prediction (NWP) model: the ECMWF Integrated Forecasting System (IFS). The validation strategy relies on massive comparisons with atmospheric model forecasts at various resolutions: ECMWF 10 km 3-hourly forecast, NCEP 10 km 3-hourly forecast, AROME and ARPEGE. 
Numerous statistical diagnostics have been developed to monitor product performance for both wind speed and direction. Strategies are being developed for the incorporation of in-situ data into the Sentinel-1 CAL/VAL chain, in particular weather buoy data and data from other satellite missions. The impact of radiometric calibration on the wind products is also specifically investigated. Radiometric discontinuities, particularly visible at subswath edges or within subswath overlaps in SLC data, are quantified over a large dataset and can lead to Sigma0 inconsistencies of several tenths of a dB, equivalent to several metres per second for the downstream wind speed. On the algorithmic side, the recent improvements for the operational products concern:
- an increased update frequency of the ECMWF wind forecast used as a first guess in the ocean wind measurement process,
- the activation of RFI mitigation applied to input Level-1 products (a dedicated presentation on RFI mitigation is planned during the symposium),
- an ad hoc calibration of GR2 for HH and HV aiming to compensate for a general bias on wind speed.
The short-term improvements for the operational products concern:
- the flagging of rain-impacted regions that can otherwise affect the wind estimates,
- bright target removal for the cross-polarization channel to prepare its inclusion in the wind inversion,
- an update of the Bayesian cost function used for wind inversion,
- the choice of the polarization ratio for acquisitions in HH,
- the addition of new variables in the L2 OCN products to enable a potential homogeneous sigma0 re-calibration of past and recent products.

Ocean Surface Waves

The Sentinel-1 derived wave measurements provide 2-D ocean swell spectra (2-D wave energy distribution as a function of wavelength and direction) as well as classical integrated parameters such as the significant wave height of the observed swell partition, dominant wavelength, and direction.
Several dedicated methodologies have been set up for validation. (i) For each Sentinel-1 ocean swell spectrum measurement, a directional spectrum is systematically produced with the co-located WAVEWATCH III numerical wave model. (ii) As very few acquisitions are available in coastal areas where in-situ buoys are deployed, we perform a dynamical co-location: this method propagates wave measurements acquired by Sentinel-1A in the open ocean up to the closest in-situ buoy for comparison. Such methods are particularly interesting for cross-comparison and inter-calibration of swell measurements from Sentinel-1A and B, or from ascending and descending orbits. (iii) More classical methods such as co-location against altimeters are also used and presented. We present here the calibration and validation methodology, the main improvements put in place in recent years and the improvements planned for the coming months. On the algorithmic side, the recent improvements for the operational products concern:
- the implementation of a new methodology to calibrate the Simulated Cross Spectra used in the quasi-linear inversion of the swell, to calibrate the SAR Modulation Transfer Function (MTF) as a function of the SAR-derived wind speed, and to tune the wave partition quality flags,
- the revision, using machine learning techniques, of the two algorithms used to estimate the wind-sea component of the significant wave height (Hs wind sea) and the so-called "total Hs" from Wave Mode acquisitions.
The short-term improvements for the operational products concern the production of inter-burst and intra-burst cross-spectra with associated variables for the TOPS modes (IW and EW). This paves the way for the estimation of directional ocean wave spectra for this specific coastal acquisition mode.

Radial Velocity

The so-called radial velocity (RVL) is related to the velocity of the scatterers in the line of sight of the SAR antenna.
Over the ocean, a strong dependence on surface currents and wind-waves is expected. Unfortunately, the Sentinel-1 Level-2 RVL measurements are currently affected by errors in the Doppler centroid (DC) derived from AOCS data and by the antenna DC bias (electronic mispointing). This prevents the current version of the Level-2 processor from providing calibrated RVL estimates. We will present here the status of the performance achieved from Sentinel-1 measurements in the OCN products and the foreseen way forward. The overall status is that the calibration of the RVL measurement suffers from insufficient spacecraft attitude knowledge at the time of generation of the OCN products. Analyses of ways to collect the attitude information with the required accuracy are ongoing. Some experiments were performed to achieve a better calibration by, inter alia:
- compensating some of the Doppler jumps observed during the activation of thermal compensation in the instrument,
- compensating variations of the DC along the orbit,
- ensuring continuity of the DC measurement along the data takes using land areas as reference points.
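For reference, once the attitude-predicted DC and the antenna bias have been removed, the conversion from the residual (geophysical) Doppler centroid to line-of-sight velocity is a one-line relation. The sketch below assumes the Sentinel-1 C-band centre frequency (5.405 GHz) and a common sign convention, which may differ from the one used in the OCN product.

```python
# Minimal sketch of the geophysical-Doppler-to-radial-velocity conversion
# underlying the OCN RVL component. Only the C-band centre frequency and
# the sign convention are assumed here.
C = 299_792_458.0          # speed of light, m/s
F_C = 5.405e9              # Sentinel-1 C-band centre frequency, Hz
WAVELENGTH = C / F_C       # ~0.0555 m

def radial_velocity(f_dc_geophysical_hz: float) -> float:
    """Line-of-sight surface velocity from the residual Doppler centroid,
    i.e. after removing the attitude-predicted DC and the electronic
    mispointing bias discussed above."""
    return -0.5 * WAVELENGTH * f_dc_geophysical_hz

v = radial_velocity(20.0)  # a residual Doppler of +20 Hz -> ~-0.55 m/s
```

The scale factor shows why the calibration problem described above is so demanding: a residual DC error of only a few Hz already corresponds to line-of-sight velocity errors of several cm/s.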
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall F1)

Presentation: Generation of accurate de-noising vectors for S-1 data: 10 years of activities

Authors: Andrea Recchia, Beatrice Mai, Laura Fioretti, Kersten Schmidt, Guillaume Hajduch, Pauline Vincent, Muriel Pinheiro, Antonio Valentino
Affiliations: Aresys, DLR, CLS, ESA, Starion for ESA
The Sentinel-1 first generation is composed of four satellite units developed in two batches. The first two units (S-1A and S-1B) were launched two years apart, in 2014 and 2016. The third and fourth units (S-1C and S-1D) will be launched between 2024 and 2025. Sentinel-1 is characterized by large-scale and repetitive observations, systematic production, and a free and open data policy: the mission is designed to acquire data globally and to systematically process and deliver products with a timeliness compliant with operational use. The S-1A and S-1B constellation performed nominally until the failure of the S-1B spacecraft in December 2021. The Sentinel-1 satellites carry an advanced C-band Synthetic Aperture Radar instrument on board, providing fast scanning in elevation and in azimuth to enable the implementation of the TOPSAR acquisition mode. The raw data acquired by the SAR instrument are packetized on board into so-called Instrument Source Packets (ISPs), downlinked to ground through dedicated ground stations and finally included in the SAFE Level-0 products that are available to users. The Level-0 products are then ingested by the S-1 Instrument Processing Facility (IPF), the sub-system of the S-1 Payload Data Ground Segment (PDGS) responsible for the generation of the Level-1 and Level-2 products. The SAFE L1 products, freely available to all users, provide high-resolution radar images of the Earth's surface for land and ocean services. The high coherence of the data is exploited for interferometric applications, whereas other applications rely only upon the image intensity. The latter, also exploiting the availability of polarimetric data (e.g., change detection), are gaining importance and reaching levels of performance similar to those based on optical images. Applications exploiting the data intensity to retrieve geophysical parameters, such as soil moisture or wind speed over the ocean, require calibrated SAR images.
Furthermore, for scenes with low backscatter, the instrument thermal noise level must be properly removed to obtain unbiased measurements. The S-1 IPF does not perform the noise subtraction operationally but provides, in the product annotations, the relevant information needed for the operation. The noise information is retrieved from dedicated pulses in the S-1 acquisition timeline. S-1 noise characterization and calibration is one of the many tasks of the SAR Mission Performance Cluster (SAR MPC), an international consortium of SAR experts in charge of the continuous monitoring of the S-1 instruments' status and of the L1 and L2 product quality. The MPC is responsible for detecting any potential issue and for implementing the necessary actions (e.g., processor configuration file updates) to ensure that no data quality degradation occurs for the users [3]. One of the long-term activities of the SAR MPC has been improving the quality of the de-noising vectors annotated in the products to further improve the data quality. This contribution will provide a summary of all the noise-related activities performed since the launch of S1A:
• Several improvements have been introduced in the processing chain, including: the introduction of 2D vectors to capture the azimuth variation of the noise level in TOPSAR data, the proper normalization of the noise vectors for the different processing levels (SLC and GRD at different resolutions) and the introduction of noise pulse filtering in case of RFI contamination.
• The usage of TOPSAR rank echoes was introduced in 2018 to capture the observed dependency of noise power on the imaged scene. The noise power is about 1 dB higher over land w.r.t. ocean due to the larger Earth brightness temperature. The operational usage of rank echoes in the noise vector generation allows better tracking of the noise variations within long data takes.
• Several calibration campaigns over data with very low backscatter have been performed over time to ensure that the generated noise vectors are correctly aligned with the data.
The above-mentioned activities have led, over the last 10 years, to several changes in the noise vectors annotated in the products. To make it possible for users to generate high-quality noise vectors for past data, two new tools are currently under development:
• A tool to generate the engineering products (L0N) needed to exploit rank echoes for the de-noising of products acquired before 2018 has already been developed.
• A tool to generate accurate de-noising vectors starting from L1 products, exploiting the updated algorithms and the latest calibration data, is currently under development.
These tools will be made available to the public to support ad hoc generation of noise vectors for archive products.
Acknowledgements: The SAR Mission Performance Cluster (MPC) Service is financed by the European Union through the Copernicus Programme implemented by ESA. Views and opinions expressed are however those of the author(s) only, and the European Commission and/or ESA cannot be held responsible for any use which may be made of the information contained therein.
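Because the IPF annotates the de-noising vectors but does not apply them, the user-side subtraction can be sketched roughly as below. Variable names, the LUT semantics and the clipping of negative residual power are illustrative assumptions; the product annotations and calibration documents define the actual operation.

```python
import numpy as np

def denoise_sigma0(dn, noise_lut, sigma0_lut):
    """Sketch of user-side thermal noise subtraction: the annotated noise
    vector (in the same units as |DN|^2) is subtracted from the detected
    power before radiometric scaling. Negative residuals, possible over
    very dark scenes such as calm ocean, are clipped to zero here for
    simplicity (operational workflows may keep them for unbiased averaging)."""
    power = np.abs(dn) ** 2
    return np.clip(power - noise_lut, 0.0, None) / sigma0_lut ** 2

dn = np.array([120.0, 35.0, 20.0])       # detected amplitudes (synthetic)
noise = np.array([900.0, 900.0, 900.0])  # annotated noise power (synthetic)
cal = np.array([600.0, 600.0, 600.0])    # sigma-nought calibration LUT
sigma0 = denoise_sigma0(dn, noise, cal)  # third pixel is noise-dominated
```

The third pixel illustrates the low-backscatter case the abstract highlights: its detected power falls below the annotated noise level, so without subtraction the sigma0 estimate would be biased high.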
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall F1)

Presentation: Development of an InSAR Phase Bias Correction Processor

Authors: Yasser Maghsoudi, Professor Andrew Hooper, Professor Tim Wright, Dr. Milan Lazekcy, Dr. Muriel Pinheiro
Affiliations: Department of Earth and Environmental Sciences, University of Exeter, Penryn Campus, COMET, School of Earth and Environment, University of Leeds, European Space Agency (ESA)
Phase bias in interferometric synthetic aperture radar (InSAR) can significantly impact the accuracy of ground displacement measurements, particularly in areas with dense vegetation or temporal decorrelation. Addressing this challenge, we developed and consolidated a correction strategy through a project funded by the European Space Agency (ESA), aiming to create a universally applicable phase bias mitigation approach. This processor estimates bias terms using short-term wrapped interferograms, with calibration factors estimated from long interferograms, and applies these terms to correct interferograms over various acquisition patterns, including 6-day and 12-day intervals. The algorithm incorporates temporal smoothing constraints to handle gaps and missing interferograms, ensuring robust performance across diverse datasets. We applied the method to three study areas: the Azores (Portugal), Campi Flegrei (Italy), and Tien Shan (China). Results show that phase bias effects are significantly reduced, with corrected velocities closely aligning with benchmark estimates derived from the eigendecomposition-based maximum-likelihood estimator (EMI) phase-linking method. In the Azores and Campi Flegrei regions, characterized by dense vegetation and shorter acquisition intervals, our approach effectively mitigated apparent artifacts, such as false subsidence and uplift patterns, caused by phase bias. In Tien Shan, where a 12-day acquisition pattern is used, minimal correction was required due to reduced vegetation density and lower susceptibility to phase bias. We further evaluated the algorithm's robustness through an analysis of the calibration parameters, demonstrating that slight variations in these parameters do not significantly affect the corrected velocities.
We also explored the selection of long-term interferograms to ensure minimal bias during parameter estimation, finding that interferograms with a temporal baseline exceeding 250 days provide reliable zero-bias references. This ESA-funded work represents a significant advancement in InSAR phase bias correction, offering a robust framework applicable to various terrains and acquisition patterns. The results hold relevance for advancing scientific understanding, improving applications in geophysical monitoring, and supporting policy and decision-making processes in hazard assessment and land deformation analysis.
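The short-baseline phase bias discussed above is commonly diagnosed through the closure phase of interferogram triplets, which vanishes for noise-free, bias-free multilooked data. The sketch below is a generic illustration of that diagnostic on synthetic data, not the ESA processor itself.

```python
import numpy as np

def multilooked_ifg(slc_a, slc_b, looks=5):
    """Complex multilooked interferogram between two co-registered SLC
    segments (boxcar averaging over `looks` samples)."""
    ifg = slc_a * np.conj(slc_b)
    n = (len(ifg) // looks) * looks
    return ifg[:n].reshape(-1, looks).mean(axis=1)

def closure_phase(i12, i23, i13):
    """Triplet closure phase arg(I12 * I23 * conj(I13)): zero for ideal
    data, non-zero in the presence of the short-baseline phase bias."""
    return np.angle(i12 * i23 * np.conj(i13))

rng = np.random.default_rng(0)
# Three synthetic unit-amplitude acquisitions sharing a random phase screen
# plus constant deformation phases (no decorrelation noise, hence no bias):
base = np.exp(1j * rng.uniform(-np.pi, np.pi, 1000))
s1, s2, s3 = base, base * np.exp(1j * 0.3), base * np.exp(1j * 0.7)
cp = closure_phase(multilooked_ifg(s1, s2),
                   multilooked_ifg(s2, s3),
                   multilooked_ifg(s1, s3))
```

On real vegetated scenes the multilooked triplet above would show systematic non-zero closure phases, which is exactly the signal the correction processor estimates and removes.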
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall F1)

Presentation: Auto-calibrated estimation of radial velocity for the Sentinel-1 TOPS mode

Authors: Geir Engen, Dr. Yngvar Larsen
Affiliations: Norce
The radial velocity (RVL) component of the Sentinel-1 OCN product is derived from a precise estimate of the Doppler centroid frequency. To achieve the required precision, it is currently necessary to use an internal SLC product processed to the full azimuth bandwidth with a uniform azimuth window. This processing step is a computational bottleneck. Furthermore, for the IW and EW imaging modes, it is challenging to fully avoid azimuth filtering due to the spectral mosaicking procedure used in the TOPS mode focusing algorithm. In this work, we present a method for estimating the RVL from an SLC with less stringent requirements on the processed azimuth bandwidth. The new algorithm requires that a sufficiently wide ideal bandpass or lowpass filter is applied, and that the extent of the ideal part of the filter is annotated with high precision. A narrower azimuth spectrum utilization has several advantages. First, for the TOPS mode, the area of the burst overlaps in the azimuth direction can be significantly increased. When the overlaps contain sufficient data to provide reliable stand-alone Doppler centroid estimates, we may derive two independent estimates in each burst overlap. This may be exploited for data-driven auto-calibration of the Doppler centroid estimates, since it can be assumed that the geophysical Doppler does not change in the ~3 seconds between two consecutive bursts in the same swath. Furthermore, the negative impact of azimuth aliasing is reduced, simplifying the sideband correction procedure. However, if the bandwidth becomes too narrow, the precision of the Doppler centroid estimator is degraded. In this contribution, we explore the tradeoff between the utilized azimuth bandwidth and the statistical performance of the RVL estimation. In addition, we present the proposed auto-calibration approach using Doppler estimates from the burst overlap zones.
A long data take containing land at both the beginning and the end will be used to evaluate the effectiveness of this approach. Finally, we demonstrate the advantages of a workflow starting directly from Level-0 data, optimizing the focusing algorithm specifically for RVL estimation. This approach eliminates the need for the internal SLC product, leading to significantly improved throughput. In addition, we show that after specific tuning of the focusing algorithm, the artificial discontinuities between consecutive Doppler estimates in the azimuth direction observed in some Sentinel-1 RVL products are no longer present.
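As a point of reference, a classic data-driven Doppler centroid estimator works from the phase of the lag-1 azimuth autocorrelation. The sketch below (with an assumed, representative PRF and synthetic data) illustrates the kind of stand-alone estimate that would be produced independently in each burst overlap and then compared for auto-calibration; it is not the algorithm presented in this abstract.

```python
import numpy as np

def doppler_centroid(azimuth_samples, prf):
    """Correlation Doppler estimator: the Doppler centroid is the phase of
    the average lag-1 azimuth autocorrelation, scaled by PRF / (2*pi)."""
    acf1 = np.mean(azimuth_samples[1:] * np.conj(azimuth_samples[:-1]))
    return prf * np.angle(acf1) / (2.0 * np.pi)

prf = 1717.0        # Hz; representative azimuth PRF (assumed value)
f_dc_true = 35.0    # Hz; synthetic true Doppler centroid
n = np.arange(4096)
rng = np.random.default_rng(1)
# Complex exponential at f_dc_true with mild random phase jitter:
signal = np.exp(2j * np.pi * f_dc_true * n / prf) \
       * np.exp(1j * rng.uniform(-0.1, 0.1, n.size))
f_dc_est = doppler_centroid(signal, prf)
```

In the auto-calibration scheme described above, two such estimates from the same burst overlap (one per burst) should agree to within the estimator noise, since the geophysical Doppler cannot change in the ~3 s between bursts; any systematic difference is attributable to instrument effects and can be calibrated out.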
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.85/1.86)

Session: B.01.02 Earth Observation accelerating Impact in International Development Assistance and Finance - PART 1

In this session, attendees will delve into an impact-oriented approach to accelerating the use of Earth Observation (EO) in support of international development assistance, including integration into financing schemes. Presenters will provide in-depth insights into real-world application use cases across multiple thematic domains, implemented in developing countries in coordination with development and climate finance partner institutions. The session will prioritise examples showcasing the tangible impact on end-users in developing countries and the successful uptake of EO products and services by their counterparts. Counterparts here can be national governments or International Financial Institutions (IFIs), such as multi-lateral development banks (World Bank, ADB, IDB, EBRD) and specialised finance institutions (e.g. IFAD), as well as Financial Intermediary Funds (FIFs), most specifically the large global climate and environment funds (GCF, GEF, CIF, Adaptation Fund). Attendees can expect to gain valuable insights into how the process of streamlining EO in development efforts is (1) opening new market and operational roll-out opportunities for the EO industry, and (2) translating into impactful change on the ground and driving sustainable development outcomes worldwide.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.85/1.86)

Presentation: From Earth Observation Insights to Impact: GDA Water Resources for International Development Assistance

Authors: Eva Haas, Fabian von Trentini, PhD Beatriz Revilla-Romero, Èlia Cantoni i Gómez, Alexander Kreisel, Kerstin Stelzer, Jorrit Scholze, Tomas Soukup, Tomas Bartalos, Fränz Zeimetz, Seifeddine Jomaa
Affiliations: EOMAP GmbH & Co. KG, GMV, Gisat, GeoVille, Brockmann Consult GmbH, Gruner Stucky, Helmholtz Centre for Environmental Research (UFZ)
Water is one of the most vital resources for sustaining life on Earth. It is a habitat, can be a provider of energy or recreation, but is also a source of natural hazards. A changing climate and pollution diminish the availability of usable water and thus increase the potential for user conflicts. The successful implementation and monitoring of Integrated Water Resources Management (IWRM) initiatives, disaster risk reduction and good water quality require access to reliable data and information on water-related issues. There is a growing awareness that EO data have the potential to serve these data needs, especially in the context of International Financing Institutions (IFIs) and Official Development Assistance (ODA), which normally operate in regions where policies and management decisions are often based on sparse and inconsistent information. While past programmes with IFIs were important to assess client requirements and demonstrate capabilities at different scales, the ESA GDA programme's main objective is to build on existing state-of-the-art services and transform them into meaningful pre-operational prototype solutions. As a result of the GDA Water Resources project, a targeted operational information basis was provided, enabling scale-up to support operations and analytics across the IFIs' work on water resources. The process of getting EO-based information developments into actual protocols and processing lines at the IFIs, starting with the WBG and ADB, has been successfully initiated.
The following case studies and real-world applications have been delivered through the GDA AID Water Resources consortium:
• Botswana: Groundwater resources monitoring and quantification
• Lake Victoria: Water quality management
• Pakistan: Integrated Water Resources Management
• Georgia: Supporting the set-up of a Hydro-Agro Informatic Centre
• Timor Leste: Assessment of surface water extent variability and drought impact on agriculture
• Zambezi River: Enhanced water quality and discharge monitoring
• Peru: Sedimentation and discharge monitoring for reservoir lifetime assessments
• Cameroon: Sedimentation and discharge monitoring for reservoir lifetime assessments
• Mexico: Drought analyses and sustainable water management
• Uzbekistan: Digital monitoring of a reservoir's storage and water quality
This presentation will highlight tangible impacts on end-users in selected case studies and show how the successful uptake of EO products and services by the IFIs and national actors opens new markets and operational roll-out opportunities.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.85/1.86)

Presentation: Earth Observation for Proactive Desert Locust Management in East Africa

Authors: Koen De Vos, Roel Van Hoolst, Hadgu Teferi, Koen Van Rossum, Kasper Bonte, Laurent Tits
Affiliations: VITO
The desert locust remains one of the world's most destructive agricultural pests, with swarms capable of devastating cropland and natural vegetation over vast areas, heavily impacting local livelihoods and food security. Large locust outbreaks between 2019 and 2022 in Eastern Africa highlighted the need for innovative and transboundary strategies in pest monitoring and control, particularly as climate change is expected to increase the frequency and intensity of such events. ESA's Global Development Assistance (GDA) programme focuses on targeted Agile EO Information Development (GDA AID). To mitigate the recurring threat from locust invasions, GDA AID enabled the co-development of two EO-based services with the Intergovernmental Authority on Development (IGAD) in Eastern Africa, targeted at two distinct stages in the life cycle of the desert locust. By combining in-situ data from FAO's Desert Locust Hub with Sentinel-2 and Metop ASCAT satellite data in a MaxEnt model, we were able to identify environmental conditions linked to the presence of hoppers, the ground-bound stage of the desert locust. Specific soil moisture, soil texture, and air temperature conditions were identified as important indicators because of their relevance to the locusts' life cycle. The presence of minimal vegetation cover was found to be important as a source of the food hoppers need to develop. Using the OpenEO platform, we produced hopper habitat suitability maps at 1 km resolution for the IGAD countries, Egypt, and the Arabian Peninsula, at regular time intervals of 10 days. This innovative approach allows for flexible adaptation to localized conditions and was co-created in agile development cycles with local stakeholders to ensure operational relevance.
This near-real-time (NRT) service is prepared for integration into IGAD's East Africa Hazards Watch (EAHW) platform, thereby providing actionable insights that can empower regional governments and transboundary institutes to better allocate locust control measures. Simultaneously, we developed a tool that can assess damage to crops caused by locust swarms. Time-series analyses of Sentinel-2 NDVI were combined with meteorological information and available locust swarm sightings to distinguish locust impacts from other harmful events (e.g., agricultural drought). By combining this tool with dedicated crop type information in a user-friendly platform, decision makers could further detail the impact on crop production and food security and set up mitigation measures for future planning.
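The NDVI-based damage screening can be sketched as a seasonal anomaly test: dates whose NDVI falls well below the local climatology are candidate damage dates, to be cross-checked against swarm sightings and meteorology. The threshold and data below are illustrative assumptions, not the VITO tool itself.

```python
import numpy as np

def ndvi_anomaly_flags(ndvi_series, climatology, climatology_std, z_thresh=-2.0):
    """Flag observation dates whose NDVI falls well below the seasonal
    expectation (z-score below z_thresh). A strong negative anomaly that
    coincides with a reported swarm sighting, and lacks a concurrent
    rainfall deficit, is the kind of evidence a damage tool would combine."""
    z = (ndvi_series - climatology) / climatology_std
    return z < z_thresh

ndvi = np.array([0.62, 0.60, 0.35, 0.58])  # observed dekadal NDVI (synthetic)
clim = np.array([0.60, 0.61, 0.62, 0.60])  # long-term mean for the same dekads
std = np.array([0.05, 0.05, 0.05, 0.05])   # long-term standard deviation
flags = ndvi_anomaly_flags(ndvi, clim, std)  # only the third dekad is flagged
```

In practice the same anomaly computed from drought indicators would be subtracted or masked first, which is how the tool separates locust damage from agricultural drought.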
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.85/1.86)

Presentation: Combined use of EO, OSINT/SOSINT to develop new application products generating indicators of crisis and early triggers in fragile countries

Authors: Luisa Bettili, Annalaura Di Federico, Adriano Benedetti Michelangeli, Federica Pieralice, Nicola Pieroni, Chiara Francalanci, Annekatrin Metz-Marconcini, Alix Leboulanger, Anne-Lynn Dudenhoefer, Gerhard Backfried, Roel Van Hoolst, Alessandro Marin
Affiliations: e-GEOS, Cherrydata, DLR, Janes, Hensoldt-Analytics, Vito, CGI
It is recognized that fragile and conflict-affected countries face several bottlenecks in meeting the Sustainable Development Goal targets related to unmet basic needs. Fragility, conflict and violence already threaten to reverse development gains, and the situation could become even worse considering that the share of the extreme poor living in conflict-affected situations is expected to rise above 50% by 2030. In this context, a key role is played by the entities operating in international development assistance, supporting countries affected by conflict and fragility by providing the financing tools and knowledge needed to rebuild resilient institutions and economies, and remaining engaged during active conflict, recovery and transition. Against this background, ESA launched the GDA Fragility, Conflict, and Security project under ESA's Global Development Assistance Agile EO Information Development (AID) Programme. The initiative aims at integrating Earth Observation data into IFI (International Financial Institutions) operations in fragile settings by developing new application products coupling EO and SOSINT to generate indicators of crisis and early triggers, allowing for context analysis and situational awareness in a variety of fragility-, conflict- and security-related scenarios in developing countries. Altogether, EO combined with OSINT/SOSINT can contribute to building a complete information framework that allows a more organic, reliable and decisively enhanced scenario analysis, under the assumption that coupling OSINT/SOSINT techniques with EO results in enhanced information by validating the respective outcomes, filtering the results and confirming the initial hypotheses against tangible complementary observations.
Several use cases were fruitfully developed and co-designed with the IFIs to address the needs of the World Bank (WB), the Asian Development Bank (ADB) and the International Fund for Agricultural Development (IFAD). In the food security domain, a use case was developed with the World Bank to support an emergency locust response programme in the eastern African regions, identifying egg-breeding sites and monitoring habitats using soil moisture and crop damage assessment maps. As far as situational awareness is concerned, three use cases were developed. In Cameroon, for the World Bank, information on major security events obtained by OSINT fed periodic Road Security Assessment Briefings. In Ukraine, land grabbing issues were analysed through a multidisciplinary approach that made extensive use of all available information derived from EO and social media data to monitor land tenure changes and the ongoing conflict, identifying EO/OSINT indicators of land grabbing, expropriation, transactions and forced land abandonment. Finally, a scenario was designed with the Asian Development Bank at the border between Tajikistan and Afghanistan, combining HR and VHR data to provide information at different scales for checkpoint monitoring and migration flows on a monthly basis, to consistently allocate Tajikistan's financial resources. In the asset, population and exposure domain, three use cases were developed. In Pakistan, a decision support system was designed, integrating Very High Resolution data and analytics to support a small-scale infrastructure reconstruction project funded by the World Bank. For the Cox's Bazar Analytical Program managed by the World Bank, a scenario was developed to estimate the effect of the displaced population on the local economy. The analysis focused on roads around the camps as well as multitemporal observation of changes in the settlement extent, as roads and built-up areas may be considered proxies of positive economic impact.
A use case was developed with IFAD, the UN International Fund for Agricultural Development, in Colombia, a country affected by conflict-related events and drug trading. For this use case, land use and land cover analysis and temporal changes, together with context analysis indicators, supported the users in assessing the results of their policies related to the identification of coca crops and the assessment of yearly trends. IFAD also supported livestock migration monitoring in Sudan, to monitor the dynamics and impacts of conflicts between herders and farmers through EO-based analysis, assessing how the livestock routes interfere with agricultural areas. Exploiting Sentinel-1 SAR coherence to detect terrain disruption by livestock, and combining the outcomes with optical-based analysis, provided a more comprehensive picture of the dynamics in terms of typical animal-track features such as texture and width, allowing them to be clearly distinguished from other kinds of tracks. To conclude, the integration of EO/OSINT data has proven very fruitful in some cases and challenging in others. Specifically, if the phenomenon to be investigated is small in size, VHR data are needed to adequately monitor and analyse it from both the OSINT and the EO side. The integration of these two sources offers undoubted added value to the decision maker and is a very powerful tool if applied to appropriate use cases and scenarios. Applications of this multi-source integration to scenarios related to wars, conflicts, terrorism, insurgency and crisis areas were proposed from the OSINT perspective. Likewise, satellite capabilities unlocked the potential to investigate complex and relevant phenomena where there is broader room for manoeuvre for data collection. Scenarios such as the war in Ukraine, the current conflict between Israel and Gaza, the crisis in Nagorno-Karabakh or the clashes in Sudan appear in this sense as potentially successful use cases.
Besides the IFIs, there is potential to extend the benefits of the products generated in the context of the ESA GDA Fragility, Conflict and Security project to other entities, such as UN organizations, local and FCS-related research entities, NGOs and local experts, for training and capability development on GIS and for the overall benefit produced by the integrated EO/OSINT products.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.85/1.86)

Presentation: GDA-AID Marine Environment and Blue Economy: Advancing Sustainable Coastal and Marine Ecosystem Management in Cambodia, Indonesia, and Tunisia through EO

Authors: Dr. Giulio Ceriola, PhD Antonello Aiello, PhD Gamal Abdelnasser Allam Abouzied, PhD Tania Casal, Daniela Drimaco
Affiliations: Planetek Italia s.r.l., ESA
Coastal ecosystems are vital for maintaining biodiversity, protecting shorelines from erosion, supporting fisheries, and serving as natural buffers against storms and rising sea levels. The use cases presented here demonstrate the application of EO technologies to monitor and manage coastal and marine ecosystems in Cambodia, Indonesia, and Tunisia under the Global Development Assistance Agile EO Information Development (GDA-AID) program. Leveraging high-resolution datasets from the European Union's Copernicus program, including Sentinel-1, Sentinel-2, and Sentinel-3 satellites, the EO-based services provide insights into ecosystem health, water quality, environmental drivers, and the impacts of anthropogenic pressures. The findings aim to support sustainable development, biodiversity conservation, and climate mitigation through actionable geospatial analysis. In Cambodia, the study supported the "Cambodia: Sustainable Coastal and Marine Fisheries Project" funded by the Asian Development Bank, which has the objective of identifying and promoting sustainable practices for fisheries and aquaculture in the coastal environment. The activity focused on evaluating mangrove ecosystems across Kampot, Kep, Koh Kong, and Preah Sihanouk provinces. Mangrove ecosystems were analyzed using Sentinel-1 and Sentinel-2 imagery, with mapping conducted for the years 2017 and 2021. The analysis revealed significant mangrove loss, primarily attributed to anthropogenic pressures such as shrimp farming, charcoal production, urban expansion, and salt pan construction. Restoration efforts were constrained by inappropriate site selection and adverse environmental conditions. Key restoration challenges included high mortality rates in replanted mangroves and the unsuitability of target areas. The integration of EO data into future restoration planning is recommended to identify optimal restoration zones.
EO data were used to analyze key parameters such as salinity, sea surface temperature, chlorophyll concentration, turbidity, and wave height from 2016 to 2022. While these parameters displayed spatial and temporal variability, no consistent correlation with mangrove changes was identified. This highlights the complexity of ecosystem dynamics and the need for more detailed investigations combining EO data with socio-economic and biological factors. In Indonesia, the study supported the “Infrastructure Improvement for Shrimp Aquaculture” initiative led by the Asian Development Bank (ADB) and the Ministry of Marine Affairs and Fisheries (MMAF). The project focused on shrimp farming in the Lampung and Banten regions, integrating EO technologies to assess water quality, salinity, and the environmental impacts on shrimp ponds and adjacent coastal areas. Sentinel-2 imagery was used to generate turbidity maps at a spatial resolution of 10 meters. Forty-seven turbidity maps were derived for Lampung in 2021–2022, categorizing water quality into four turbidity ranges. Shrimp ponds with high turbidity were identified as potentially having poor water quality, which could negatively impact shrimp production. Copernicus Marine Environment Monitoring Service (CMEMS) data allowed us to evaluate sea salinity and sea level variations. The study revealed significant spatial and temporal variability in sea salinity and sea level, both of which emerged as critical factors influencing shrimp pond conditions. A potential impact index was developed to assess the salinity levels of shrimp ponds based on sea level and salinity changes. Concerning the impact of shrimp farming on coastal areas, Copernicus Sentinel-3 data revealed seasonal eutrophication in some coastal areas of Lampung due to riverine nutrient inflow. Persistent nutrient enrichment in Jakarta Bay was attributed to urban and agricultural runoff.
Key indicators such as chlorophyll concentration, water transparency, and the trophic state index (TSI) were used to identify areas under environmental stress, with Jakarta Bay showing year-round high nutrient levels and Lampung displaying seasonal patterns linked to agricultural cycles. In Tunisia, the Gulf of Gabes served as a use case for evaluating EO technologies in the estimation of blue carbon. The region, known for its seagrass meadows and phytoplankton populations, was analyzed to assess ecosystem health and carbon sequestration potential. High-resolution Sentinel-2 imagery and historical data revealed a 5% decline in seagrass coverage between 2017 and 2022, corresponding to a loss of over 165 km². Despite this decline, the remaining seagrass meadows sequester approximately 1.35 million tons of CO₂ annually, emphasizing their critical role in climate mitigation. The loss of seagrass was linked to human activities, including coastal development, overfishing, and pollution, highlighting the need for targeted conservation policies. CMEMS data were used to map phytoplankton populations, which are vital for marine primary productivity and ecosystem health. The EO-based information obtained for 2018 and 2022 reveals clear seasonal patterns in net primary production (NPP) and phytoplankton concentration, with distinct temporal and spatial variations. The Gulf of Gabes consistently emerged as the most productive region in both years. At the same time, coastal waters within 12 nautical miles exhibited higher biomass densities than the more extensive exclusive economic zone (EEZ) area. Although the 2022 data showed higher NPP and biomass carbon compared to 2018, it is too early to determine whether this represents a long-term trend or a result of short-term environmental variability. Seasonal blooms were observed, driven by nutrient availability and climatic conditions.
The findings provide insights into the trophic dynamics of the Gulf of Gabes and the broader implications for fisheries and marine biodiversity. The use case underscores the need for integrating EO data into marine spatial planning to support biodiversity conservation and sustainable resource management. In the Gulf of Gabes, actionable insights from EO analyses can inform policies to mitigate seagrass loss, regulate nutrient loading, and preserve critical habitats. The findings and recommendations have been incorporated into the national government’s blue economy roadmap in Tunisia, developed with World Bank support. Across all regions, the GDA-AID Marine Environment and Blue Economy activity highlighted the transformative potential of EO technologies in ecosystem monitoring, particularly when combined with in-situ data for validation. The findings emphasized the need for tailored conservation strategies, informed by geospatial analyses, to mitigate anthropogenic pressures such as urbanization, agriculture, and unsustainable aquaculture practices. By supporting biodiversity, improving water quality, and enhancing climate resilience, EO technologies provide an invaluable resource for sustainable development in regions under increasing ecological stress. These methodologies are replicable in other coastal areas and can be scaled for global environmental monitoring and policymaking.
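The abstract above uses a trophic state index (TSI) derived from chlorophyll to flag nutrient-stressed waters. As a minimal sketch of how such an index can be computed, here is Carlson's classic chlorophyll-based formulation with conventional class boundaries; the abstract does not state which TSI variant was actually used, so this is illustrative only.

```python
import math

def carlson_tsi_chl(chl_mg_m3: float) -> float:
    """Carlson's trophic state index from chlorophyll-a concentration
    (mg/m^3). One common formulation; the abstract does not state which
    TSI variant was used."""
    return 9.81 * math.log(chl_mg_m3) + 30.6

def trophic_class(tsi: float) -> str:
    """Map a TSI value to a conventional trophic class (boundaries are
    approximate and vary between authors)."""
    if tsi < 40.0:
        return "oligotrophic"
    if tsi < 50.0:
        return "mesotrophic"
    if tsi < 70.0:
        return "eutrophic"
    return "hypereutrophic"
```

With this formulation, a chlorophyll-a concentration of 30 mg/m³, of the order seen in persistently enriched waters, falls in the eutrophic class.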
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.85/1.86)

Presentation: Methodologies and Lessons Learned: Measuring the Impact of Earth Observation in Climate Action and Sustainable Development

Authors: David Taverner
Affiliations: Caribou Space
Introduction

The integration of Earth Observation (EO) in international development assistance continues to unlock transformative opportunities for climate action and sustainable development, supported by programmes such as ESA's Global Development Assistance (GDA). However, measuring and communicating that impact, particularly within multi-stakeholder collaborations involving organisations like ESA, International Financial Institutions (IFIs), and the EO service sector, requires innovative impact measurement approaches. This presentation highlights specific methodologies, best practices, and lessons learned from the GDA programme, including data synthesis, stakeholder alignment, and impact communication, offering actionable insights for EO programmes focusing on end-user adoption and long-term climate action and sustainable development.

Methodologies, Best Practices, and Lessons Learned

The presentation will highlight the following generalisable learnings of greatest benefit to a wide range of LPS audiences:

1. Evidence Landscape Mapping: Tools such as impact literature reviews and evidence maps synthesise the existing knowledge base, showing where and how EO contributes to climate action and sustainable development. This approach fosters stakeholder alignment around shared objectives and directs efforts toward areas of greatest impact.
2. Theory of Change and Indicator Frameworks: A robust Theory of Change links programme outputs to longer-term outcomes, while indicator frameworks provide measurable pathways to assess progress.
3. Data Management and Synthesis: Efficient systems for managing and communicating indicators across consortium members and IFIs, such as the GDA Dashboard, enable end-to-end data collection and dissemination.
4. Tracking Demand: Monitoring end-user EO adoption via procurement data (e.g., World Bank and Asian Development Bank databases, EARSC surveys) provides actionable insights into the level of mainstreaming of EO technologies over time.
5. Stakeholder Collaboration: Aligning indicators with external organisations, such as the World Bank, translates technical and scientific outcomes into tangible benefits for global sustainable development efforts.
6. Public Evaluations: Published evaluations, such as the GDA Midterm Evaluation, provide transparency and wider access to lessons beyond those directly involved. Highlights include that, to date, 167 Earth Observation Information Developments (EOIDs) have supported 83 IFI projects in 69 countries.
7. Communication of Impact: Bite-sized, interactive formats such as dynamic/online evaluations, dashboards and case studies enhance the accessibility of EO impacts, helping non-experts, including policymakers and governments, engage with actionable insights.

Latest GDA Programmatic Results

If desired by the LPS and GDA team, the latest results and findings from the newly drafted GDA Evaluation (planned for publication in winter/spring 2025) can be included, with updated information on EO usage and IFI financing alignment.

Degree of Innovation

This presentation introduces a distinct perspective within the LPS community, focusing on methodologies for measuring the impact of EO usage rather than the development of EO technologies themselves. By addressing the operationalisation of EO in large-scale, multi-stakeholder programmes, it advances the understanding of how EO technologies translate into tangible climate action and sustainable development outcomes. The innovation lies in its emphasis on cross-sectoral collaboration, practical impact measurement, and scalable methodologies, offering new insights and evidence in a sector and technology that is ripe for mainstreaming.

Technical Correctness and Validation Outcome

All methodologies and results discussed in this presentation have undergone rigorous validation through practical application and multi-level review processes. These include:

- Quality assurance of input data from EO service providers.
- Review and critique by ESA team members, including Technical Officers.
- External validation by IFIs, ensuring alignment with institutional requirements and development objectives.

Caribou Space's decade-long expertise in impact measurement in the EO sector underscores the reliability and technical robustness of this impact assessment work.

Relevance of Results

For the EO sector: an increased understanding of the importance of, and the means by which it is possible to, measure and communicate the impact of EO technological developments, in order to increase end-user adoption and ultimately procurement. For ESA: impact measurement methodologies, best practices, and lessons learned from GDA as an end-user-focused EO programme. For partnerships with non-EO organisations: principles for impact measurement in multi-stakeholder partnership and cooperation between ESA, the EO service sector, IFIs and other development agencies.

Conclusion

The European, and global, EO industry has historically focused on technological R&D and on communicating its impact from a scientific perspective. However, as the industry, and ESA with it, shifts from technological R&D to end-user uptake and usage, the measurement and communication of end-user impact, particularly within climate action and sustainable development, is a key pillar of success.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.85/1.86)

Presentation: The Geospatial Planning and Budgeting Platform (GPBP) use case within the GDA Climate Resilience ESA Project

Authors: Stefano Natali, Kai Kaiser, Ramiro Marco Figuera, Robert James Johnsen, Parvathy Krishnan Krishnakumari, Melih Uraz, Imanol Uriarte Latorre
Affiliations: SISTEMA GmbH, The World Bank
The Geospatial Planning and Budgeting Platform (GPBP, https://gpbp.adamplatform.eu/) represents a significant milestone in climate resilience efforts under the Global Development Assistance (GDA) Climate Resilience project, developed in collaboration with the European Space Agency (ESA) and the World Bank. This innovative platform addresses the pressing need for decision-support tools capable of screening climate conditions and their potential impacts on critical infrastructure across various sectors. The GPBP consists of two main modules. The Data-as-a-Service (DaaS) module integrates historical climate data from ERA5-Land and future projections from CMIP6 into country-specific Country DataCubes. These datacubes enable efficient analysis and data access tailored to national contexts. The Platform-as-a-Service (PaaS) module provides a processing API for climate change screening. This API enables users to assess potential disruptions to infrastructure by analyzing asset types, geographic footprints, thresholds, and climate variables such as wind speed, precipitation, and temperature. It is accessible via multiple interfaces: a user-friendly web application with graphical representations of results and a Jupyter notebook integration for advanced analytics. Together, these features empower stakeholders to evaluate climate risks efficiently and adapt decision-making processes accordingly. In its current version (v0.4.0), GPBP allows users to conduct a complete climate change screening workflow over 24 countries worldwide, from asset information input to the export of the assessment results. The platform has been successfully demonstrated in various presentations and dissemination activities, showcasing its potential across sectors such as agriculture, insurance, and urban planning. Starting in January 2025, the platform will undergo a three-year development phase to enhance its functionality.
Planned upgrades include modularization, enabling third-party entities to integrate individual services into their own platforms, and the introduction of new features to scale up to wider domains. The GPBP exemplifies the growing demand for data-driven tools to address the challenges of climate change. Its adaptable framework offers a robust foundation for supporting decision-making in diverse scenarios, paving the way for enhanced climate resilience worldwide.
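The screening logic described above (asset thresholds checked against climate variables) can be illustrated with a minimal threshold-exceedance sketch. This is hypothetical code for exposition only and is not the actual GPBP API; the function name and return fields are invented for this example.

```python
import numpy as np

def exceedance_screening(series: np.ndarray, threshold: float) -> dict:
    """Count how often a climate variable sampled over an asset's footprint
    (e.g. daily maximum wind speed) exceeds an asset-specific disruption
    threshold. Illustrative only; not the actual GPBP API."""
    exceed = series > threshold
    return {
        "n_exceedances": int(exceed.sum()),
        "exceedance_fraction": float(exceed.mean()),
    }

# Example: daily maximum wind speed (m/s) against a 20 m/s disruption threshold.
wind = np.array([12.0, 25.0, 18.0, 31.0])
report = exceedance_screening(wind, 20.0)
```

In a real screening workflow the series would be extracted from a Country DataCube over the asset's geographic footprint, and the exceedance statistics compared across historical (ERA5-Land) and projected (CMIP6) periods.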
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall E2)

Session: A.06.02 Enhancing Space Weather Understanding: Insights from LEO Satellite-Based Operational and Pre-Operational Products

Space weather and space climate refer to the interactions between the Sun and Earth over timescales ranging from minutes to decades. Predicting extreme space weather and developing mitigation strategies is crucial, as space assets and critical infrastructures, including satellites, communication systems, power grids, aviation, etc., are vulnerable to the space environment.

This session focuses on assessing the current status of the space weather forecast and nowcast products obtained from LEO satellite measurements, alongside other missions and ground-based technologies, and pushing forward with innovative concepts. We strongly encourage contributions that promote a cross-disciplinary and collaborative approach to advancing our understanding of space weather and space climate. Moreover, we welcome presentations that investigate the effects of space weather on diverse applications in Earth's environment, such as space exploration, aviation, power grids, auroral tourism, etc.

Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall E2)

Presentation: Use of the SWARM ionospheric gradient product to model scintillation at high latitudes

Authors: Dmytro Vasylyev, Martin Kriegel, Mainul Hoque, Andres Cahuasqu, Jens Berdermann
Affiliations: German Aerospace Center (DLR)
The Global Ionospheric Scintillation Model (GISM) is planned to operate as an ionospheric scintillation modelling and prediction tool as part of the Ionospheric Monitoring and Prediction Center (IMPC) services at DLR. Currently, the GISM model only covers the low-latitude regions, and our efforts are focused on extending it to high and polar latitudes before it can be used in the operational service. For this purpose, it is planned to use the recently developed method of phase gradient screens, which allows simulation of the refractive type of scintillation caused by scattering on strong ionospheric gradients [1]. The climatology of the required gradient field will be derived from the in-situ electron density measurements on board the Swarm satellites, which cover a period of 10 years of data collection. In this context, the method of empirical orthogonal functions has been used to relate the gradient values to the relevant driving parameters such as the solar flux index, the solar wind coupling parameter, geomagnetic field strength, etc. We present recent results on high-latitude scintillation modelling and validation studies. [1] D. Vasylyev et al., “Scintillation modeling with random phase gradient screens”, J. Space Weather Space Clim. 14, 29 (2024).
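For context, amplitude scintillation of the kind GISM models and predicts is conventionally quantified by the S4 index, the normalized standard deviation of received signal intensity. A minimal sketch of that standard metric (background only, not part of GISM itself):

```python
import numpy as np

def s4_index(intensity: np.ndarray) -> float:
    """Amplitude scintillation index S4: the standard deviation of the
    received signal intensity normalized by its mean, computed over a
    detrended interval (typically of the order of a minute of
    high-rate samples)."""
    return float(np.sqrt(intensity.var()) / intensity.mean())
```

A steady signal gives S4 = 0; strong scattering drives S4 toward (and beyond) 1.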
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall E2)

Presentation: Use of satellite observations to study the effects of the large geomagnetic storms of 2024

Authors: Eelco Doornbos
Affiliations: KNMI, Royal Netherlands Meteorological Institute
A combined visualisation of various satellite constellation observations sheds light on the effects of large geomagnetic storms and the extent of the aurora. We have focused on the May 10/11 and October 10/11 storms of 2024. In particular, we have looked at thermosphere-ionosphere observations and field-aligned currents measured by the three Swarm satellites, ionospheric ultraviolet emissions from NASA's GOLD as well as the SSUSI instruments on two DMSP satellites, and visible emissions by the aurora from the Day-Night-Band on the VIIRS instruments carried by three JPSS satellites. Combined with ground magnetometer and GNSS receiver total electron content data, this provides a large variety of perspectives on the storm-time dynamics. Geomagnetic storms are space weather events that result from the interactions between the solar wind and Earth's magnetosphere. They are characterised by global perturbations in measurements of Earth's magnetic field, resulting from large current systems in the magnetosphere, which via field-aligned currents in the auroral regions reach into the upper atmosphere and connect via ionospheric current systems. The field-aligned currents bring energetic protons and electrons into the thermosphere-ionosphere, where they cause the aurora. But the storm-time current systems have other effects as well. They result in localised high-latitude heating of the thermosphere, exceeding the day-side heating due to solar EUV irradiation, and thereby completely altering the global thermosphere dynamics. The energy is globally redistributed from auroral latitudes to other latitudes via large-scale waves, resulting in a global expansion of the neutral upper atmosphere and greatly increased LEO satellite drag at fixed altitudes. For the Swarm satellites, the May and October storms resulted in the largest measured peak drag accelerations since the start of its more than 10 year mission, a factor of 2 larger than during previous storms. 
Further effects of storms on the thermosphere-ionosphere also include complex dynamics, resulting in regions of enhanced and depleted electron densities and large electron density gradients, affecting radio signal propagation, including signals used by Global Navigation Satellite Systems (GNSS) services, and reducing the reliability and availability of satellite navigation augmentation systems. Finally, the magnetic field fluctuations can create geomagnetically induced currents, which in extreme cases have been known to cause stability issues or even permanent damage to high-voltage power grid infrastructure. Because of their rare occurrence, very large geomagnetic storms have been difficult to study. For example, based on ground magnetometer observations, the geomagnetic storms of 1859 and 1921 were most likely much stronger than any so far observed during the space age. Anecdotal evidence of auroral sightings from very low latitude locations during these and even older events is part of the traditional narrative warning of possible space weather impacts in highly populated lower-latitude locations, should such an extreme storm reoccur in our modern technology-driven society. During the strong, but not extreme, May and October storms, there were also many eyewitness accounts, as well as photographic evidence, of aurora visible from mid to low latitudes. The recent satellite observations of the 2024 storms by VIIRS, SSUSI, GOLD and Swarm help to put these observations into context, as they prove that strong auroral emissions occurred overhead down to at least +/-45 degrees quasi-dipole magnetic latitude as well as up to 1300 km in altitude. It seems that extreme heights of auroral emissions during larger storms can play a significant role in lower-latitude visibility.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall E2)

Presentation: ESA's Distributed Space Weather System - Missions and Data

Authors: Melanie Heil
Affiliations: European Space Agency
ESA's Space Safety Programme aims to protect space and ground assets against adverse effects from space. The Space Weather Segment focuses on such effects due to the activity of our Sun. Monitoring of the Earth's and Sun's environment is an essential task for the nowcasting and forecasting of Space Weather and the modelling of interactions between the Sun and the Earth. Due to the asymmetry and complexity of Earth's magnetosphere, the involved particle environment and its dynamics, it is necessary to capture the state of the magnetic field and the particle distribution at a sufficiently large number of sampling points around the Earth, so as to allow state-monitoring and modelling of the involved processes with sufficient accuracy and timeliness. ESA is implementing a space weather monitoring system, including the establishment of a Distributed Space Weather Sensor System (D3S) to observe the effects of solar activity within Earth's vicinity. D3S is a system of systems with a variety of mission types. Space Weather instrumentation for in-situ measurements is typically rather compact and of low resource need. This characteristic makes it easy to accommodate such instruments on spacecraft as secondary payloads. Hosted payload missions, in which an ESA-provided space weather instrument flies on a mission managed outside of the Space Safety Programme, are a cost-effective way to address individual measurement requirements. Currently, hosted payload missions are implemented on GEO-KOMPSAT-2A, EDRS-C, Sentinel-6, Hotbird 13F&G as well as MTG-I1, with a collaboration with EUMETSAT extending radiation monitoring to all MTG and Metop-SG satellites. Hosted payload missions need to be complemented by dedicated space weather missions to achieve coverage of the D3S measurement requirements.
In particular, the wide span of observations to be performed in LEO and the data timeliness requirement driving the mission architecture in this orbit make dedicated missions the optimal solution. These missions could be performed on platforms spanning from nano- to micro-satellites with masses of up to 200 kg. Current missions in implementation are Aurora, to provide continuous monitoring of both auroral ovals, and SWING, ESA's first space weather nanosatellite, to provide data on the ionosphere. A second nanosatellite mission is in preparation, as well as a GTO mission, called SWORD, to provide nowcasts of the radiation belts. The current configuration and planned implementation of D3S using hosted payloads, SmallSats and NanoSats will be presented.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall E2)

Presentation: Comparing Thermospheric Density Variations from GRACE-FO and Swarm Missions During Low and High Solar Activity

Authors: Myrto Tzamali, Alexi Glover, Juha-Pekka Luntama
Affiliations: ESOC-ESA
In situ thermospheric densities from two key LEO missions, GRACE-FO and Swarm, which operate in near-polar orbits at similar altitudes, are provided by Delft University. GRACE-FO densities are derived from high-precision accelerometer measurements, while Swarm densities are calculated using GPS observations. This study analyses the densities provided by both missions during their overlapping operational period from 2018 to 2024, covering both low and high solar activity periods. This temporal coverage enables a comparison of residuals under different levels of solar activity. The analysis focuses on variations beyond the dominant orbital and diurnal periodicities. These primary periodicities are removed using Least Squares and Weighted Least Squares methods, facilitating the isolation of short-term variations. After the removal of these dominant periodicities, the residuals reveal patterns associated with equatorial disturbances, terminator effects, pole crossings, and geomagnetic storms. Equatorial signals are observed in the residuals of both missions, either in ascending or descending orbits; however, these signals are not consistent between the two missions. The dependency of residuals on local time is evaluated to investigate day-night variations and their impact on density perturbations. A comparison between POD- and accelerometer-derived densities is conducted, with particular focus on their ability to capture high-frequency density variations during geomagnetic storms. Results indicate that GRACE-FO densities detect disturbances even at Kp = 3 in mid- and high-latitude regions, whereas Swarm densities exhibit weaker responses under similar conditions. During moderate geomagnetic storms, density residuals for both missions can increase by up to three orders of magnitude due to significant disturbances in the thermosphere. 
A correlation analysis between geomagnetic indices, such as Hp30 and Kp, and the residual densities highlights the importance of high-cadence geomagnetic indices for accurately capturing short-term density fluctuations. The analysis is repeated with and without incorporating error information in the density measurements, and the residuals are compared with the standardized residuals to demonstrate the critical role of realistic uncertainties in improving the reliability of thermospheric density datasets.
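The periodicity-removal step described in this abstract, fitting and subtracting the dominant orbital and diurnal signals before examining residuals, can be sketched as an ordinary least-squares fit of sinusoids at the known periods. This is a minimal illustration of the approach, not the authors' actual processing (which also uses weighted least squares).

```python
import numpy as np

def remove_periodicities(t: np.ndarray, y: np.ndarray, periods) -> np.ndarray:
    """Fit and remove known periodicities (e.g. the orbital and diurnal
    periods) from a density time series by ordinary least squares,
    returning the residuals."""
    cols = [np.ones_like(t)]           # mean level
    for period in periods:
        w = 2.0 * np.pi / period
        cols.append(np.sin(w * t))     # in-phase component
        cols.append(np.cos(w * t))     # quadrature component
    design = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return y - design @ coef
```

A weighted variant would scale each row of the design matrix and of `y` by the inverse measurement uncertainty, which is what incorporating error information in the density measurements amounts to.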
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall E2)

Presentation: How the ESA Swarm mission can contribute to Space Weather

Authors: Dr Anja Strømme, Enkelejda Qamili, Roberta Forte, Vincenzo Panebianco, Antonio De la Fuente
Affiliations: ESA, Serco for ESA
The near-Earth space environment is a complex and interconnected system of systems, home to a multitude of physical processes that all contribute to space weather and space climate effects; collaboration across traditional boundaries is therefore essential to progress in our understanding of, and our capability to predict, Space Weather. The ESA Swarm Earth Explorer mission, launched on 22 November 2013, has completed a full solar cycle in orbit and is by its nature a true system-science mission. After more than a decade in space, Swarm is still in excellent shape and continues to contribute to a wide range of scientific fields, from the core of our planet, via the mantle and the lithosphere, to the ionosphere and interactions with the solar wind. In 2023 a “fast” processing chain was introduced, providing Swarm Level 1B products (orbit, attitude, magnetic field and plasma measurements) with a minimum delay with respect to acquisition. In 2024 the generation of Swarm Level 2 products (Field-Aligned Current, Total Electron Content) was also implemented in the “fast” chain; these products are available through the Swarm dissemination server. In this presentation we will highlight the contributions the Swarm mission has made, and continues to make, for the space weather community through constantly evolving products and services, with a specific focus on the “fast” data, as these products add significant value in monitoring present Space Weather phenomena and help in modelling and nowcasting the evolution of geomagnetic and ionospheric events.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall E2)

Presentation: Total Root Electron Content Obtained From Lightning Generated Whistlers in the Extremely Low Frequencies From Swarm Mission and Future NanoMagSat Opportunities.

Authors: Martin Jenner, Pierdavide Coïsson, Gauthier Hulot, Louis Chauvet, Robin Deborde
Affiliations: Université Paris Cité, Institut de physique du globe de Paris, CNRS
Electromagnetic signals of opportunity propagating through the Earth's ionosphere can be used to measure its parameters. Strong lightning can excite whistler signals that are detected by the Swarm satellites during burst-mode (250 Hz sampling) acquisition campaigns of the Absolute Scalar Magnetometer (ASM). These acquisition campaigns have been conducted regularly since 2019, an entire week of continuous burst-mode acquisition being run every month on each of the Alpha and Bravo Swarm satellites. Electromagnetic propagation below the lower hybrid frequency of the plasma and above the gyrofrequency of the dominant positive ion is dispersed, and the temporal separation of the various frequency components of the lightning whistler signals reaching the satellite can be measured. The corresponding dispersion depends on both the properties of the plasma crossed and the propagation path of individual frequencies. We recently demonstrated [Jenner et al., 2024] that within this frequency range the propagation time is proportional to the integral of the square root of the plasma electron density, a quantity that we called Total Root Electron Content (TREC). This TREC can be recovered by relying on the measured whistler dispersion and computing the propagation paths using numerical ray-tracing through the climatological International Reference Ionosphere (IRI) and a dipolar magnetic field approximation based on the International Geomagnetic Reference Field (IGRF) model. The recovered TREC values have been validated using independent ionosonde data to infer the bottomside ionospheric profile and Swarm in-situ plasma densities from its Langmuir probes to constrain the topside. Swarm whistler detections are currently provided by the WHI Swarm mission Level 2 product, which we used to obtain such Swarm-derived TREC estimates. The random occurrence of exploitable whistler detections limits TREC availability mostly to low latitudes.
But these are regions where ionospheric dynamics are strong and the availability of ionospheric data from other sources is limited, making TREC a valuable parameter for future applications. In this presentation, we will present results obtained so far from the Swarm mission and discuss the enhanced opportunities that the ESA Scout NanoMagSat mission will provide by continuously acquiring magnetic scalar and vector components at 2 kHz sampling from a constellation of three satellites to be launched at the end of 2027 at 545 km altitude, one on a polar orbit and two on 60°-inclined orbits. References: Jenner, M., P. Coïsson, G. Hulot, D. Buresova, V. Truhlik, and L. Chauvet (2024), Total Root Electron Content: A new metric for the ionosphere below Low Earth Orbiting satellites, Geophysical Research Letters, 51(15), e2024GL110559, doi:10.1029/2024GL110559.
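The TREC defined above is simply the path integral of the square root of the electron density along the (ray-traced) propagation path. Once density samples along that path are available, the discretization is straightforward; a minimal sketch assuming uniformly spaced samples:

```python
import numpy as np

def trec(ne: np.ndarray, ds: float) -> float:
    """Total Root Electron Content: the integral of sqrt(Ne) along the
    propagation path, approximated by the trapezoidal rule over uniformly
    spaced electron density samples (ne in m^-3, ds in m)."""
    root_ne = np.sqrt(ne)
    return float(ds * (0.5 * root_ne[0] + root_ne[1:-1].sum() + 0.5 * root_ne[-1]))
```

In practice the samples would come from IRI evaluated along each frequency's ray path, and the TREC would be adjusted until the modelled whistler dispersion matches the measured one.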
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall L3)

Session: A.08.01 Advances in Swath Altimetry - PART 3

The NASA and CNES Surface Water and Ocean Topography (SWOT) mission, launched in December 2022, is the first in-flight experience of a swath altimeter in orbit. The SWOT mission has demonstrated the capability of swath altimeters to measure ocean and inland water topography in an unprecedented manner. The onboard Ka-band interferometer (KaRIn) observes wide-swath sea surface height (SSH) with sub-centimetre error. It is already unveiling the small mesoscale ocean circulation that is missing from current satellite altimetry. SWOT has already carried out a satellite calibration and validation (Cal/Val) campaign including ground truth and airborne campaigns.
ESA’s Sentinel-3 Next Generation Topography (S3NGT) mission is being designed as a pair of two large spacecraft carrying nadir-looking synthetic aperture radar (SAR) altimeters and across-track interferometers, enabling a total swath of 120 km, in addition to a three-beam radiometer for wet tropospheric correction across the swath, and a highly performant POD and AOCS suite.
With a tentative launch date of 2032, the S3NGT mission will provide enhanced continuity to the altimetry component of the current Sentinel-3 constellation, with open ocean, coastal zones, hydrology, sea ice and land ice, all as primary objectives of the mission.
This session is dedicated to the presentation of advances in swath altimetry - including airborne campaigns - and the application of swath altimetry to the primary objectives of the mission, i.e. open ocean and coastal processes observation, hydrology, sea ice and land ice. We also invite submissions for investigations that extend beyond these primary objectives, such as the analysis of ocean wave spectra, internal waves, geostrophic currents, and air-sea interaction phenomena within swath altimeter data.

Tuesday 24 June 14:00 - 15:30 (Hall L3)

Presentation: Geophysical content of the SWOT Doppler measurements: observables as complementary information to topography

Authors: Pierre Dubois, Alejandro Bohe, Fabrice Ardhuin
Affiliations: CLS, CNES, LOPS/IFREMER
The Surface Water and Ocean Topography (SWOT) mission data product provides two separate Doppler measurements [Peral et al., 2024]. First, the topography (Sea Surface Height, SSH) estimation process includes a Doppler centroid estimation (fractional part) using the pulse-pair estimation method [Zrnic, 1977], performed on board on the raw data. This estimate (two values per swath, every 25 km along-track) is used during the on-board azimuth compression to re-center the Doppler spectrum at approximately 0 Hz. This maximizes SNR, which in turn minimizes SSH noise. The second Doppler measurement, the mitigation or “high-resolution” Doppler, is also estimated on board using a pulse-pair algorithm on the range-compressed data (before the on-board azimuth compression) on a 2 x 2 km resolution grid. Here, the mitigation Doppler is processed and exploited for the first time to evaluate its intrinsic value in providing information about the sea state and its potential future use in improving the Sea State Bias correction and, as a result, the SSH data product. We modeled the platform and antenna contributions to the Doppler, the so-called Non-Geophysical (NG) Doppler, and evaluated it using orbit and attitude reconstruction. The Doppler corrected for this NG contribution represents the Geophysical Doppler. It is the contribution of the ocean surface velocities projected onto the radar direction. Ocean surface velocities include surface currents and a wave-induced velocity, which appears in radar data due to a correlation between local backscattering power and local orbital velocities of the waves [Chapron et al., 2005]. Because of the near-nadir looking geometry, the horizontal surface current has a small projected component in the radar direction; a 1 m/s surface current aligned in the radar direction gives a Doppler of 8 to 45 Hz, depending on the cross-track distance.
The wave-induced velocity contributes to the observed Geophysical Doppler with values typically ranging from 10 to 50 Hz in the cross-track direction [Nouguier et al., 2018]. For comparison, an error as small as 1 mdeg in the knowledge of the pitch of the platform, or of the antenna, already results in an error of about 60 Hz in the NG correction. In this presentation, we demonstrate that our NG correction is accurate enough to obtain Geophysical Doppler estimates dominated by geophysical content. To validate our NG correction, we perform statistical comparisons between our Geophysical Doppler estimates and theoretical predictions of the wave-induced Doppler computed from wave model data. At near-nadir radar incidences, the wave-induced Doppler has been shown to be determined by the local wind and Stokes current vectors [Nouguier et al., 2018], which are parameters directly available from ocean circulation/wave models (ECMWF, WW3, …). The conversion also involves a model for the derivative of the Normalized Radar Cross Section (NRCS) in Ka band with respect to radar incidence and azimuth angles, which we compute from Global Precipitation Measurement (GPM) mission data. Accumulating differences over a large number of SWOT passes allows us to assess the quality of the NG correction. Residual instrumental errors, mainly driven by the change in solar beta angle (the angle at which the sun shines on the spacecraft, determining its thermal behavior), are characterized and a strategy to calibrate them out is presented. The noise in the measurements is also characterized. With this statistical validation in hand, we then discuss the extent to which SWOT’s geophysical velocity estimates allow us to infer properties of the sea state and of surface currents. We find that regions of strong currents may exhibit large enough signatures for the data to put useful constraints on current maps.
Away from those regions, the velocity estimates may be dominated by the wave-induced contribution, which provides information on the direction of the wind relative to the radar direction, a quantity of geophysical value in itself and potentially an input for sea state bias correction. There is also theoretical evidence that the height/NRCS correlation that drives the electromagnetic bias is, at near nadir, primarily related to the long-wave orbital velocity variance [Chapron et al., 2001]. The long-wave orbital velocity dependence of the radar-observed wave-induced velocities may then provide a complementary estimate of the effect of sea state on SSH.
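The pulse-pair Doppler-centroid estimation cited in this abstract [Zrnic, 1977] can be sketched in a few lines: the phase of the lag-1 autocorrelation of the complex pulse series advances by 2π·fd/PRF between pulses. This is an illustrative stand-in, not the onboard SWOT processor; the PRF value, signal model and function name are assumptions.

```python
import numpy as np

def pulse_pair_doppler(z, prf):
    """Estimate the (fractional) Doppler centroid [Hz] of a complex pulse
    series z sampled at pulse-repetition frequency prf.

    R(1) = E[z*[n] z[n+1]] has phase 2*pi*fd/prf, so
    fd = prf/(2*pi) * angle(R(1)); unambiguous only within +/- prf/2.
    """
    r1 = np.mean(np.conj(z[:-1]) * z[1:])
    return prf / (2.0 * np.pi) * np.angle(r1)

if __name__ == "__main__":
    # Synthetic check: a noisy tone at 250 Hz with an assumed 4 kHz PRF.
    prf, fd_true = 4000.0, 250.0
    n = np.arange(2048)
    rng = np.random.default_rng(0)
    z = np.exp(2j * np.pi * fd_true * n / prf) + 0.1 * (
        rng.standard_normal(n.size) + 1j * rng.standard_normal(n.size))
    print(pulse_pair_doppler(z, prf))
```

Because only the phase of the autocorrelation is used, the estimator is robust to the unknown overall backscatter power, which is why it suits on-board use.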

Tuesday 24 June 14:00 - 15:30 (Hall L3)

Presentation: Next generation mean sea surface with swath altimetry

Authors: Bjarke Nilsson, Ole Baltazar Andersen, Per Knudsen
Affiliations: DTU Space
Mean sea surface (MSS) references have been continuously developed as new nadir altimeters have been launched over the last 30 years. Even with the breakthrough of the second generation of SAR satellite altimeters, improvements in quality have been limited by multiple factors. First, conventional altimeters are locked to the nadir profile of the satellite, limiting either the spatial or the temporal resolution. Secondly, due to the profile sampling, the resolution is not isotropic and primarily favors the north-south direction, while the east-west resolution is limited. Lastly, the ability to resolve the sea surface height based on the return power has limited the vertical resolution of the suite of nadir altimeters. With the launch of the Surface Water and Ocean Topography (SWOT) mission in 2022, all of the above-mentioned points have been challenged, with swath altimetry providing wide coverage, a low noise level and excellent cross-track resolution. Thanks to the dual antennas, small-scale 2-dimensional sea surface features are resolvable at a resolution not previously possible. Including these high-resolution observations in the geodetic references reveals features in the ocean surface not previously resolvable. With the future launch of the next-generation topography Sentinel missions, the benefit of swath altimetry will only increase. Around one and a half years of global SWOT data are currently available. This is much shorter than the 30 years of conventional satellite altimetry used to produce the current MSS models. We therefore explore combining the long time series of the current MSS models, which determine the longer spatial scales, with the high-resolution SWOT data, which determine the finer scales. We present a new mean sea surface reference utilising the highest-resolution data currently available.
Our analysis shows a noise floor an order of magnitude lower than that of references derived purely from nadir altimetry, as well as the ability to sample closer to the coast than ever before. An updated reference built from the highest-resolution data available will be of critical importance for oceanographic research, geodetic mapping and climate science.
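The scale-split combination described above (long wavelengths from the legacy MSS, fine scales from SWOT) can be sketched as a Fourier filter. A minimal sketch assuming two gridded surfaces of the same shape; the cutoff, grids and function name are illustrative, not the actual DTU processing chain.

```python
import numpy as np

def blend_mss(mss_long, swot_mean, cutoff_px):
    """Fourier scale split: keep wavelengths longer than cutoff_px (pixels)
    from the legacy MSS grid and shorter wavelengths from the SWOT-derived
    mean surface. Both inputs are 2-D arrays of the same shape."""
    ky = np.fft.fftfreq(mss_long.shape[0])[:, None]
    kx = np.fft.fftfreq(mss_long.shape[1])[None, :]
    lowpass = np.hypot(kx, ky) <= 1.0 / cutoff_px   # radial wavenumber mask
    f_low = np.fft.ifft2(np.fft.fft2(mss_long) * lowpass).real
    f_high = np.fft.ifft2(np.fft.fft2(swot_mean) * ~lowpass).real
    return f_low + f_high
```

The sharp boolean mask is the simplest choice; a real blend would taper the transition band to avoid ringing.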

Tuesday 24 June 14:00 - 15:30 (Hall L3)

Presentation: Observations of swells resolved by SWOT’s HR mode and simulations of S3NG-Topo’s wave mode products

Authors: Louise Yu, Dr Alejandro Bohe, Dr Damien Desroches, François Boy
Affiliations: Centre National d'Etudes Spatiales
The study of ocean waves is important for understanding the energy distribution and fluxes within the ocean and with the atmosphere and the coasts. Swath altimetry, pioneered by the SWOT (Surface Water and Ocean Topography) mission launched in December 2022, offers the possibility to obtain concurrent 2-dimensional measurements of sea surface height and backscatter coefficient. In particular, SWOT's HR (High Rate) mode delivers such products at a 10-to-60-m pixel size over a swath of 120 km, derived from fully-focused SAR (Synthetic Aperture Radar) interferograms. This study focuses on the observation of swell regimes of wavelengths ~100 m and up through the HR mode of SWOT. While HR data are typically acquired by SWOT over hydrological targets and coastal areas and do not cover the ocean globally, several patches of HR data have been acquired over the open ocean, particularly during the Cal/Val phase of the mission. This offers a fantastic opportunity to examine how SWOT resolves swells down to such small wavelengths and to tackle interesting questions regarding wave physics. To this end, we confront SWOT’s data with simulations from the CNES radar simulator Radarspy and with results from analytical approaches. From the SWOT HR products, we compute 2-dimensional spectra of the concurrent measurements of ocean height and backscattered power, as well as cross-spectra between the two. These observables contain a wealth of information about the underlying wave spectrum and about wave physics. However, similarly to what happens for traditional SAR modulation spectra, a number of distortions caused by the observing mechanism (in this case, near-nadir interferometric SAR) make the interpretation of these spectra indirect. While these distortions are well understood and documented in the literature for traditional SAR, they remain to be accurately modelled for height spectra derived from interferometric SAR acquisitions.
Here, we discuss our effort to qualitatively understand and quantitatively predict these distortions and the main structures that they create in the spectrum, through rigorous analytical derivation and numerical simulations. The work presented here has several applications. First, it provides a model to invert the underlying wave spectrum from SWOT’s HR acquisitions. We show, both analytically and numerically, that spectra of HR data notably showcase harmonics of the original swell spectrum as well as a low-frequency component that results from convolutions of the swell spectrum with itself, and which should not be interpreted as actual wave energy in the underlying spectrum. The effect of the motion of the surface, leading among other effects to an exponential suppression of the wave energy in the along-track direction (usually referred to as the azimuth cutoff effect), is also accounted for in our analysis. Second, SWOT’s HR data presents a unique opportunity to shed some light on the hydrodynamical backscatter modulation phenomenon, whereby the backscattered power is higher at the troughs of long-wavelength waves due to a lower local roughness induced by non-linear wave interactions. Indeed, at least for swells of wavelengths longer than, say, 100 m, SWOT simultaneously measures the wave height and backscattered power along the wave profile. Constraining this physical height/backscatter correlation is an important step towards improving corrections of the so-called sea state bias, which affects sea surface height measurements in altimetry. However, some of the distortions due to the imaging mechanism also generate a correlation between the height and backscattered power measured by the instrument. In particular, velocity bunching caused by the surface motion typically induces correlations that completely dominate those from the hydrodynamical modulation and therefore needs to be accurately modelled in order to obtain useful information about the latter.
We present our effort to tackle this issue through both an analytical approach and simulations, along with a first application case on SWOT observations. Finally, this work helps prepare the future S3NG-T (Sentinel-3 Next Generation Topography) mission, which will include a wave mode delivering 2-dimensional wave spectra at HR spatial resolution. In the last part of this talk, we present a few simulations of this wave mode using Radarspy and discuss the similarities and differences, compared to the SWOT spectra, that we can expect to see in the S3NG-T data.
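At their core, the height/backscatter spectra and cross-spectra described in this abstract are windowed 2-D FFT estimates of collocated grids. A minimal sketch under assumed conventions (Hann window, mean removal, no spectral averaging or normalisation), not the CNES processing:

```python
import numpy as np

def cross_spectrum(h, s0):
    """Return (Phh, Pss, Phs): 2-D power spectra of height h and backscatter
    s0, and their complex cross-spectrum. Mean removed, Hann-windowed.
    The phase of Phs at a wave peak encodes where, along the wave profile,
    the backscatter is enhanced (e.g. pi for modulation in antiphase)."""
    wy = np.hanning(h.shape[0])[:, None]
    wx = np.hanning(h.shape[1])[None, :]
    w = wy * wx
    H = np.fft.fft2((h - h.mean()) * w)
    S = np.fft.fft2((s0 - s0.mean()) * w)
    return np.abs(H) ** 2, np.abs(S) ** 2, H * np.conj(S)
```

A robust estimate would average Phh, Pss and Phs over many tiles before interpreting the cross-spectral phase; a single tile is shown here for clarity.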

Tuesday 24 June 14:00 - 15:30 (Hall L3)

Presentation: Sea State Bias In Wide Swath Altimetry

Authors: Samuel Osina, Frederic Nouguier, Alejandro Bohe, Fabrice Ardhuin
Affiliations: LOPS
Since the launch of SWOT in December 2022, astonishing two-dimensional maps of highly resolved Sea Surface Height (SSH) have been revealed, showing the complex structure of the ocean and the multi-scale processes occurring at its surface. The highly resolved data provided by KaRIn offer a unique opportunity to monitor, study and quantify such structures. One important source of error in altimetry, related to the presence of surface waves, is the so-called Sea State Bias (SSB), which induces a negative shift in the measured surface height of the order of several percent of the significant wave height (SWH). The SSB is the signature of various complex physical properties of the ocean waves, among which their non-Gaussian nature and the fact that the local roughness (mean square slope) varies along the wave profile due to non-linear interactions between the waves (hydrodynamical modulation), inducing differences in the electromagnetic signal backscattered to the radar that bias the measurement (electromagnetic bias). In the context of conventional (or SAR) nadir altimetry, empirical corrections (essentially derived by tabulating differences between height measurements at cross-overs as a function of sea state parameters) are used to reduce the error due to SSB. The measurement principle used by KaRIn (interferometry) is significantly different from that of nadir altimetry and, as a result, the SSB created by waves could be different. Pre-launch studies using somewhat simplified wave physics (Gaussian waves and a simple model for the backscatter modulation) concluded that the SSB would be similar enough to the one affecting nadir altimetry to allow the use, as the initial correction (applied in the operational processing since launch), of the empirical table derived from the AltiKa mission (also in Ka band). In this work, we revisit this issue using refined wave physics.
Since KaRIn’s processing uses SAR compression to achieve its azimuth resolution, we pay particular attention to the impact of surface motion, which has recently been shown to be an important contributor to SSH errors on sensors using Doppler processing (Buchhaupt et al. 2021, Marié et al. 2024). We first derive an analytical model for the bias affecting an interferometric SAR instrument like KaRIn in the presence of waves, which are characterized by their four-dimensional (elevation, slopes and vertical velocity) joint probability density functions (PDFs). We use our model to compute the SSB as a function of cross-track distance and Doppler beam for a variety of sea state conditions (significant wave height, wind speed, wind direction, ...). This allows us to quantitatively investigate the impact of instrument characteristics (PTR, pointing, ...) and wave physics ingredients on the SSB. Our analysis includes the effect of the non-linear interactions and couplings between the slopes and velocities of the large waves, while higher-order effects are left for future work. Specifically, we have derived and used the wave PDFs under second-order Eulerian and first-order Lagrangian approximations. We will present the results of these analyses, comparing and analyzing the dependences of the predicted SSB on cross-track distance, wind speed, wind direction and wave Doppler. By understanding the impact of each wave physics ingredient, we aim to progress toward a more complete SSB model that can be used for correction in the long term.
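The electromagnetic-bias mechanism underlying the SSB (wave troughs backscatter more than crests, so the radar effectively measures a backscatter-weighted mean elevation below the true mean level) can be illustrated with a toy Monte Carlo. The Gaussian wave model and the modulation coefficient below are illustrative assumptions, far simpler than the joint elevation/slope/velocity PDF model derived by the authors.

```python
import numpy as np

def em_bias(hs, mod, n=200_000, seed=0):
    """Backscatter-weighted mean elevation [m] for Gaussian waves of
    significant wave height hs [m], with an assumed linear backscatter
    modulation sigma0 = 1 - mod * eta/std (brighter troughs than crests).
    For small mod this gives bias ~ -mod * hs/4."""
    rng = np.random.default_rng(seed)
    std = hs / 4.0                       # Hs ~ 4 x std of surface elevation
    eta = rng.normal(0.0, std, n)        # surface elevation samples
    sigma0 = np.clip(1.0 - mod * eta / std, 0.0, None)
    return np.average(eta, weights=sigma0)
```

With hs = 2 m and mod = 0.1 the sketch gives a bias of about -5 cm, i.e. a few percent of SWH, the order of magnitude quoted in the abstract.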

Tuesday 24 June 14:00 - 15:30 (Hall L3)

Presentation: Improving the assimilation of SWOT into Mercator Ocean International global forecasting system

Authors: Ergane Fouchet, Mounir Benkiran, Elisabeth Remy, Pierre-Yves Le
Affiliations: Mercator Ocean International
Ocean analyses and forecasts are crucial for a wide range of applications, including maritime transport, offshore operations, weather prediction, risk assessment, resource management and protection of marine biodiversity. The accuracy of ocean model predictions heavily depends on data assimilation processes, which in turn depend on in situ and satellite observations. A clear understanding of the physical content of both models and observations, along with precise estimation of their respective errors, is critical for improving predictive capabilities. For over 30 years, sea level anomalies (SLA) from conventional nadir altimetry have been widely used to constrain ocean models through assimilation. The Surface Water and Ocean Topography mission, SWOT, launched in 2022, builds on altimetry techniques to produce unprecedented high-resolution, two-dimensional mapping of sea surface height across the global oceans. These observations allow for new detailed spatial analyses of mesoscale to submesoscale ocean processes. Beyond the breakthrough in spatial resolution and spatial coverage, the mission’s 3-month Cal/Val phase, characterized by a one-day repeat cycle, has also provided valuable insights into high-frequency ocean variability, from a temporal perspective. A frequency analysis of the fast-sampling phase demonstrated the presence of both balanced and unbalanced internal tide residuals within the SWOT L3 SLA product, which can dominate the mesoscale signals in certain regions. As the tidal processes are not represented in the model or cannot be constrained through data assimilation, they should be considered as red noise, and the observation error in the system must be adapted accordingly. The Mercator Ocean assimilation system has been adapted to fully leverage the potential of SWOT measurements in the global 1/12° forecasting model. 
The first tests of SWOT KaRIn data assimilation (1-day and 21-day phases) have shown a significant improvement in the accuracy of ocean analyses and forecasts. The objective of this study is to enhance the assimilation of SWOT SLA in preparation for its operational integration into the Copernicus Marine Service. This requires a precise understanding of how assimilating high-resolution and high-frequency data impacts the system. To address this, we evaluate the model's analyses and forecasts based on the temporal resolution of the data and the observation error, particularly when internal tide residuals are finely characterized.

Tuesday 24 June 14:00 - 15:30 (Hall L3)

Presentation: Retrievals of Internal Solitary Wave Amplitudes from SWOT KaRIn observations

Authors: José da Silva, Jorge Magalhaes, Armel Bosser, Renan Huerre, Carina de Macedo, Ariane Koch-Larrouy, Chloe Goret, Camila Artana, Michel Tchilibou, Simon Barbot, Souley Diallo
Affiliations: University of Porto, CIIMAR, Instituto Nacional de Pesquisas Espaciais (INPE), Université Claude Bernard Lyon1, CECI, CERFACS
The Earth's climate is closely linked to the global ocean overturning circulation, which is driven by differences in density and includes deep convection in specific areas, connecting surface circulation with deep currents. However, the deep convection observed in the ocean is not enough to maintain the global overturning circulation; a huge amount of mechanical mixing of the colder, denser deep waters with the warmer, less dense surface waters is also required. One of the traditionally accepted mechanisms for this mixing is internal waves, which induce vertical fluxes of heat and other physical properties of seawater. Therefore, to understand the global overturning circulation and parameterize ocean circulation models for predicting future climate scenarios, it is crucial to have a detailed understanding of internal waves. A special class of internal waves, usually referred to as Internal Solitary Waves (ISWs), is typically observed in satellite images and in situ data to have time and space scales comparable to those of many other ocean processes. For instance, they can propagate across basin-scale distances and yet display a highly turbulent character in their propagation, meaning their energetics could naturally link the larger scales of the global overturning circulation with the smaller scales of turbulent motion. In addition, ISWs provide (by far) the ocean's largest vertical velocities (up to 1 m/s) over vertical scales exceeding one hundred meters, producing intense localized mixing. The satellite sensor that excels at ISW observations has been the Synthetic Aperture Radar (SAR), for various reasons that include high spatial resolution and operation in all weather conditions. However, sea surface height, being proportional to the ocean pressure field, is more directly relevant for studying ocean interior dynamics than the surface roughness captured by standard SAR systems.
The new SWOT mission carries two advanced Ka-band SAR interferometer (KaRIn) antennas separated by a 10-meter mast, providing the first two-dimensional, high-resolution and low-noise measurement from space of surface water elevations owing to ISWs. Instrumental noise for a 3 km wide footprint has a standard deviation σ_3km ≤ 0.40 cm (Chelton, 2024), which is much lower than that of traditional and SAR altimeters. KaRIn measurements over a swath of 120 km (with a 20 km nadir gap that is sampled at coarse resolution by a conventional along-track altimeter) are converted into image pixels of both surface elevation and roughness at 250-meter resolution over the ocean. In theory, the coherent elevations measured by KaRIn allow inference of currents and internal wave amplitudes at global scale. We will demonstrate how SWOT KaRIn may be used to measure thermocline displacements and ISW current fields making simultaneous use of ocean surface topography and radar backscatter. With knowledge of the vertical density stratification, for example from the Argo (Array for Real-time Geostrophic Oceanography) program, and resorting to fully-nonlinear, easy-to-use models such as the Dubreil-Jacotin-Long (DJL) model (Long, 1953), we develop a method to retrieve wave amplitudes from KaRIn measurements. The method is based on both surface elevations measured as sea surface height anomalies (ssha) and modelled internal amplitudes derived from density stratification. For the ocean region off the Amazon shelf, in the tropical Atlantic Ocean, ISW amplitudes retrieved by our method can exceed 120 meters, concurring with independent measurements. In situ measurements recently obtained within the framework of the international project AMAZOMIX in the same study region, including deployed equipment (ADCP and thermistor-chain moorings) spanning more than a year, help support the methodology presented in this paper.
Furthermore, we provide evidence that the ssha is not correlated with surface roughness (sigma0), a major concern that needs clarification before we proceed to operational use.
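As a crude sanity check on the orders of magnitude in this abstract (not the DJL-based retrieval the authors develop), a two-layer ocean links a small surface elevation to a much larger interface displacement through the relative density jump: roughly a ≈ ssha · ρ0/Δρ. The density values below are illustrative assumptions.

```python
import numpy as np

def isw_amplitude_from_ssha(ssha_m, rho0=1025.0, drho=1.0):
    """Crude two-layer estimate of the interface-displacement magnitude [m]
    from a surface height anomaly [m]: the surface expression of a mode-1
    internal wave is reduced by a factor of order drho/rho0."""
    return np.abs(np.asarray(ssha_m)) * rho0 / drho

# An ssha of ~12 cm with an assumed 1 kg/m^3 density jump maps to an
# interface displacement of order 120 m, the magnitude quoted above for
# Amazon-shelf ISWs.
```

This scaling only bounds the magnitude; the full DJL approach also captures the nonlinear wave shape and the stratification profile.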

Tuesday 24 June 14:00 - 15:30 (Hall L3)

Presentation: SWOT - a new global ocean radar imager for air-sea interaction applications in synergy with present and future ESA ocean SAR missions

Authors: Justin Stopa, Doug Vandemark, Ralph Foster, Alexis Mouche, Paco Lopez Dekker, Bertrand
Affiliations: The University Of Hawai`i At Manoa
It has been known since the early SEASAT synthetic aperture radar (SAR) mission that radar imaging of the sea surface can reveal a wide range of air-sea interaction processes, with applications that span from search and rescue to ocean wave and weather prediction. A new joint NASA/CNES imager aboard the Surface Water and Ocean Topography (SWOT) satellite is now providing the widest global ocean coverage of any Earth-observing radar system yet launched. This study describes the central characteristics of ocean radar backscatter imagery collected using SWOT's primary ocean sampling mode, including the scope of oceanic, atmospheric, and air-sea interaction phenomena that the radar can resolve. SWOT Ka-band interferometric SAR images differ substantially from previous satellite SAR measurements in four key respects: spatial resolution (500 m), operating frequency (36 GHz), incidence angle (near-nadir), and spatial coverage. The final difference is arguably the most important: SWOT imagery is provided continuously along the satellite track over a nearly 120 km swath, opening new opportunities to systematically investigate sub-mesoscale air-sea interaction embedded within synoptic weather systems as well as over regions of strong ocean-atmosphere exchange such as western boundary current systems. We will illustrate and discuss the benefits and limitations of SWOT through a comparison between coincident SWOT and Sentinel-1 C-band SAR WV-mode imagery. It is expected that image interpretation of wind-wave signatures is simplified using these low-incidence-angle Ka-band data. We will show that SWOT offers several new capabilities to the Earth observing system, and provide a first list of potential applications using this new sensor.
We will also address how SWOT data may both complement and extend ESA SAR mission datasets, including Sentinel-1 and Harmony, to advance the understanding of submesoscale air-sea interaction processes.

Tuesday 24 June 14:00 - 16:15 (Hall G2)

Session: C.05.06 Status ESA Mission development: National Programmes managed by ESA - PART 1

The status of development of ESA missions will be outlined.
In four sessions of 1h30 each (equivalent to a full day), participants will be offered the unique opportunity to gain valuable insights into the technology developments and validation approaches used during the project phases of ongoing ESA programmes.
The projects are in different phases (from early Phase A/B1 to launch), and the status of mission development activities will be presented together with industrial/science partners.

Tuesday 24 June 14:00 - 15:30 (Hall N1/N2)

Session: D.05.05 CDSE User Review Meeting - Navigating the Copernicus Data Galaxy: Insights and Innovations from the Copernicus Data Space Ecosystem

We invite all users to join us to explore key achievements, identify upcoming opportunities, and discuss how user feedback drives the continuous enhancement of Earth observation capabilities. In this session, we present insights and Earth observation trends from an ESA perspective and the future path forward for the Copernicus Data Space Ecosystem.
This retrospective and forward-looking discussion will highlight key milestones, recent developments, and upcoming innovations aimed at empowering users worldwide with advanced Earth observation data.
Join us for an in-depth session exploring the evolution, opportunities and future trajectory of the Copernicus Data Space Ecosystem.

Presentations and speakers:


Keynote by ESA and European Commission


  • ESA and European Commission

Copernicus for water monitoring - Ocean Virtual Laboratory


  • Fabrice Collard - OceanDataLab

From Sentinel-1 mosaics to VHR imagery: New data sources and downstream data products in CDSE


  • András Zlinszky - Sinergise

Keynote by EEA


  • Matteo Mattiuzzi - EEA

Keynote by EC-JRC


  • Peter Strobl - EC-JRC

Low-Cost, High-Impact: Advanced Copernicus Data Analysis with openEO on the Cloud


  • Jeroen Dries - VITO Remote Sensing

Tuesday 24 June 14:00 - 15:30 (Room 1.34)

Session: A.03.04 Model-data interfaces and the carbon cycle

The increasing provision of synergistic observations from diverse and complementary EO missions relevant to the carbon cycle underlines a critical challenge: generating consistency in multi-variate EO datasets whilst accounting for the differences in spatial scales, times of acquisition and coverage of the different missions. It also entails the requirement to improve models of the carbon cycle to ensure they can fully exploit the observation capabilities provided by both EO data and enhanced global ground networks. This implicitly means increasing the spatial resolution of the models themselves to exploit the spatial richness of the data sources, as well as improving the representation of processes, including introducing missing processes, especially those describing vegetation structure and vegetation dynamics on both long and short timescales, while ensuring consistency across spatial scales (national, regional, global).

Understanding and characterisation of processes in the terrestrial carbon cycle, especially with reference to the estimation of key fluxes, requires improved interfaces between models, in situ observations and EO. It also requires research to ensure an appropriate match is made between what is observed on the ground, what is measured from space, their variability in space and time, and how the processes that explain this dynamism are represented in models. This allows the assessment of the impacts of scale, in particular how processes operating at fine scale impact global-scale carbon pools and fluxes. This implicitly involves close collaboration between the Earth observation community, land surface and carbon modellers, and experts in different disciplines such as ecosystems, hydrology and water cycle research.

This session is dedicated to progress in model-data interfaces and the appropriate coupling of EO observations of different types, processes and variables with in-situ observations and models to ensure the observations collectively and the models are consistent and compatible.

Tuesday 24 June 14:00 - 15:30 (Room 1.34)

Presentation: A model-data fusion diagnosis for spatial distribution of biomass carbon and net biome production across the world’s largest savanna

Authors: Mathew Williams, Dr David Milodowski, Dr Luke Smallman, Dr Iain McNicol, Prof Kyle Dexter, Casey Ryan, Prof Gabi Hegerl, Mike O'Sullivan, Stephen Sitch, Dr Aude Valade
Affiliations: University of Edinburgh, University of Exeter, University of Montpellier
Southern African woodlands (SAW) are the world’s largest savanna, covering ~3 M km², but their carbon balance and its spatial variability are poorly understood. Here we quantify the dynamics of the regional carbon cycle, diagnosing stocks and fluxes and their interactions with climate and disturbance by combining earth observations of components of the C cycle with a process model. We address the following questions: 1. How do fluxes and net exchanges of CO2 vary across the SAW region and covary with climate, fire, and functional characteristics? 2. How do carbon stocks and their longevity covary with climate, fire, and functional characteristics? 3. How does data-constrained analysis of ecosystem C cycling compare to Trendy land surface model estimates for the region? Using 1500 independent 0.5° pixel model calibrations, each constrained with local earth observation time series of woody carbon stocks (Cwood) and leaf area, we produce a regional C analysis (2006-2017). The regional net biome production is neutral, 0.0 Mg C/ha/yr (95% confidence interval -1.7 to 1.6), with fire emissions contributing ~1.0 Mg C/ha/yr (95% CI 0.4-2.5). Fire-related mortality driving fluxes from total coarse wood carbon (Cwood) to dead organic matter likely exceeds both fire-related emissions from Cwood to atmosphere and non-fire Cwood mortality. The emergent spatial variation in biogenic fluxes and C pools is strongly correlated with mean annual precipitation and burned area. But there are multiple, potentially confounding, causal pathways through which variation in environmental drivers impacts the spatial distribution of C stocks and fluxes, mediated by spatial variations in functional parameters like allocation, wood lifespan and fire resilience. Greater Cwood in wetter areas is caused by positive precipitation effects on net primary production and on parameters for wood lifespan, but is damped by a negative effect whereby rising precipitation increases fire-related mortality.
Compared to this analysis, LSMs showed marked differences in spatial distributions and magnitudes of C stocks and fire emissions. The current generation of LSMs represent savanna as a single plant functional type, missing important spatial functional variations identified here. Patterns of biomass and C cycling across the region are the outcome of climate controls on production, and vegetation-fire interactions which determine residence times, linked to spatial variations in key ecosystem functional characteristics. The C budgets generated in this analysis can also support more robust and observationally consistent national reporting in the SAW region for the Paris Agreement of the UNFCCC. The detailed resolution of the outputs, with locally valid functional characteristics, can enhance national CO2 emission factors for fire disturbance, for instance. Working closely with national agencies, these approaches could deliver Tier 3 estimates of national C budgets to support countries and climate action world-wide. The application of the SAW approach across the wider dry tropics will be discussed, noting biogeographical variations in diagnostics.

Tuesday 24 June 14:00 - 15:30 (Room 1.34)

Presentation: Toward the development of coupled carbon and water cycle land data assimilation in the ECMWF Integrated Forecast System (IFS) by leveraging machine learning and new types of Earth observations

Authors: Sebastien Garrigues, Patricia de Rosnay, Peter Weston, David Fairbairn, Ewan Pinnington, Souhail Boussetta, Anna Agusti-Panareda, Jean-Christophe Calvet, Cedric Bacour, Richard Engelen
Affiliations: ECMWF, CNRM, Université de Toulouse, Météo-France, CNRS, LSCE
The CO2MVS Research on Supplementary Observations (CORSO) project aims at reducing the uncertainties in land carbon sink estimates, which represent the largest source of uncertainty in the global carbon budget. One of the objectives of CORSO is to consistently constrain water and carbon fluxes over land through the assimilation of microwave and solar-induced chlorophyll fluorescence (SIF) satellite observations in the Integrated Forecast System (IFS) developed at ECMWF. Assimilating satellite observations requires an observation operator to predict the model-simulated counterpart of the remotely sensed observation from the model fields. Machine learning (ML)-based observation operators are a good alternative to process-based models, which are generally computationally expensive, more complex and associated with large uncertainties over land. In this work, we present results on the global-scale assimilation of (1) the normalized backscatter at 40° incidence angle from ASCAT onboard Metop-B and -C and (2) SIF derived from TROPOMI onboard Copernicus Sentinel-5P, using ML-based observation operators. The work consists of (1) developing an ML-based observation operator for each type of observation to predict the model counterpart of the satellite signal from the IFS model fields at global scale; (2) implementing the ML-based observation operators in the IFS to jointly analyse soil moisture and Leaf Area Index (LAI); and (3) evaluating the impacts on the forecasts of Gross Primary Production (GPP) and low-level meteorological variables (2 m temperature and humidity). Training databases, which consist of the IFS model fields collocated with the satellite observations at the spatial and temporal resolution of each observation type (25 km, daily for ASCAT; 10 km, 8-day for SIF), were produced to design each ML-based observation operator. Samples over orographic areas, snow, frozen soil and water bodies, for which the satellite signal is uncertain or difficult to interpret, were excluded.
Features were selected using process-based knowledge and explainability methods (SHAP) to identify the most influential features on the satellite signal. Gradient boosted trees (XGBoost) and feedforward neural network (NN) models were tested. The IFS model fields used to predict ASCAT backscatter at 40° include soil moisture and soil temperature in the first three soil layers (up to 1 m depth) and Leaf Area Index (LAI). Latitude and longitude are included as additional features to represent local observation conditions. A NN with 4 hidden layers of 60 neurons was trained over the 2016-2018 period and tested on 2019. For SIF, the predictors were selected from process-based knowledge of the SIF drivers at canopy scale, which include LAI, shortwave downwelling radiation, 2 m temperature and humidity, soil moisture, root-zone soil moisture, soil temperature and the fraction of low and high vegetation. For SIF, an XGBoost model was trained over 2019-2020, tuned over 2021 and tested over 2022. Both the SIF and ASCAT ML observation operators show good performance at global scale, with mean absolute error within the expected instrument error. The spatial distributions of the satellite observations are accurately reproduced, as well as their seasonal evolution. Performance is slightly lower for SIF than for ASCAT backscatter, indicating larger uncertainties and a lack of information content in the IFS model fields to accurately predict the SIF satellite signal at global scale. The prediction of SIF is generally more accurate over mid-latitude cropland and grassland, for which LAI and solar radiation are the main drivers of SIF at the canopy scale. Lower correlations between predicted and observed SIF are reported for tropical rainforest (Amazon, Central Africa) and semi-arid regions (Central Australia).
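As an illustration of the observation-operator idea described above, the sketch below fits a toy surrogate that predicts backscatter from model fields and evaluates it on held-out samples. Everything here is an assumption: the data are synthetic, a linear least-squares fit stands in for the XGBoost/NN operators, and the feature set is only loosely modelled on the one described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the training database: "IFS fields" collocated with
# "ASCAT backscatter". Features and the linear truth are illustrative only.
n = 2000
soil_moisture = rng.uniform(0.05, 0.45, n)   # m3/m3, surface layer
soil_temp = rng.uniform(270.0, 310.0, n)     # K
lai = rng.uniform(0.0, 6.0, n)               # m2/m2

# Hypothetical backscatter relation plus instrument-like noise (dB).
sigma0 = -18.0 + 14.0 * soil_moisture + 0.6 * lai + rng.normal(0.0, 0.3, n)

X = np.column_stack([np.ones(n), soil_moisture, soil_temp, lai])

# Split by sample index, mimicking the train-on-2016-2018 / test-on-2019 design.
train, test = slice(0, 1500), slice(1500, n)

# Least-squares fit as a minimal surrogate for the XGBoost/NN operators.
coef, *_ = np.linalg.lstsq(X[train], sigma0[train], rcond=None)
pred = X[test] @ coef

mae = float(np.mean(np.abs(pred - sigma0[test])))
print(f"test MAE: {mae:.2f} dB")
```

The operational operators are trained on real collocated IFS/ASCAT databases and are far more expressive; the point here is only the train-on-one-period, evaluate-on-another workflow, with MAE compared against the noise level.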
The assimilation of SIF to update LAI in the IFS is conducted in two steps: (1) SIF is first assimilated in the offline Land Data Assimilation System (LDAS) to update the low and high vegetation LAI variables of the IFS; (2) the updated LAI variables are used in IFS forecast-only experiments to evaluate their impacts on the prediction of carbon fluxes (GPP) and low-level meteorological variables (2 m humidity and temperature). The assimilation of SIF provides realistic spatiotemporal patterns of low and high vegetation LAI increments, such as the enhancement of the greening of the Sahel region and Western Europe in spring. The updated LAI shows better agreement with the Copernicus satellite LAI product over Northern Eurasia and scattered regions in North and South America, Central Europe and Eastern and Southern Australia. Lower performance is obtained over tropical rainforest (Amazon) and sparse vegetation regions, where the prediction of SIF by the ML observation operator is more uncertain. However, the magnitude of the produced increments is too low to have an impact on NWP and carbon flux forecasts. A possible reason is the lack of sensitivity of the observation operator to LAI. The assimilation of ASCAT is directly conducted in the IFS coupled experiments to update both soil moisture and LAI. The implementation of the observation operator in the IFS and the tuning of the data assimilation system (e.g. cross correlation between LAI and soil moisture) are ongoing, and results will be presented at the symposium. This work highlights the potential of ML techniques to quickly implement and evaluate the assimilation of new types of observations in NWP models. An important lesson learned is that evaluating the prediction performance of the observation operator is not sufficient: testing the observation operator in the data assimilation system is paramount to verify that it provides enough sensitivity to the analysed variable (here LAI).
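The lesson about operator sensitivity can be made concrete with a scalar analysis update (standard optimal interpolation for a single observation). All variances, sensitivities and the innovation value below are invented for illustration; they are not IFS settings.

```python
def lai_increment(var_b, var_o, h_sens, innovation):
    """Scalar optimal-interpolation update for one observation:
    x_a - x_b = K * (y - H(x_b)), with gain K = B H' / (H'^2 B + R),
    where B is background error variance, R observation error variance
    and H' the linearised operator sensitivity dSIF/dLAI."""
    gain = var_b * h_sens / (h_sens**2 * var_b + var_o)
    return gain * innovation

# Same SIF innovation, two observation operators differing only in their
# sensitivity H' (hypothetical units).
innovation = 0.5
incr_sensitive = lai_increment(var_b=0.25, var_o=0.04, h_sens=0.3, innovation=innovation)
incr_weak = lai_increment(var_b=0.25, var_o=0.04, h_sens=0.01, innovation=innovation)
print(round(incr_sensitive, 3), round(incr_weak, 3))
```

With the same innovation, the weakly sensitive operator yields a near-zero LAI increment, mirroring the behaviour reported above: good prediction skill alone does not guarantee useful increments.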

Tuesday 24 June 14:00 - 15:30 (Room 1.34)

Presentation: Fire carbon emission constraints from space-based carbon monoxide retrievals in the CarbonTracker data assimilation system: a case-study for the 2019 Amazonia dry season

Authors: Anne-Wil van den Berg, Joram Hooghiem, Maarten Krol, Guido van der Werf, Jost Lavric, David Walter, Hella van Asperen, Wouter Peters
Affiliations: Meteorology and Air Quality group, Wageningen University, Institute for Marine and Atmospheric Research, Utrecht University, Acoem GmbH, Multiphase Chemistry Department, Max Planck Institute for Chemistry, Department Biogeochemical Processes, Max Planck Institute for Biogeochemistry, Centre for Isotope Research, University of Groningen
Fires play a key role in the regional Amazonian carbon budget. Accurately quantifying their emissions is crucial for a better understanding of their role in the regional carbon cycle and for emission reporting, monitoring, and verification. Building on the work of Naus et al. (2022), who analysed fire carbon monoxide (CO) emissions in the Amazon between 2003 and 2018, we examine the 2019 dry season. The 2019 dry season was marked by a rapid increase in fire activity and deforestation rates compared to previous years. We show how space-based total column CO retrieval products (XCO from MOPITT, TROPOMI) can complement bottom-up fire carbon estimates and report on the timing, location, and strength of these remote sensing constraints on the 2019 Amazon fires. For the first time, we combine the new GFED5 (beta) fire emission dataset, which uses dynamic savannah emission factors and new higher-resolution burned area data, with XCO retrievals in an atmospheric inversion. We perform a two-step CO inversion using the CTDAS global multi-species inversion framework, with coupled CO/CO₂ budgets inside the TM5 transport model. In this framework, we separately target the different timescales of the budget components of CO and CO₂. The first step (i.e., long-window) constrains month-to-multi-year scale CO variations using flask measurements, and limited subsets of satellite data. The second step (i.e., short-window) is designed to use the high spatiotemporal resolution retrievals of XCO to capture the CO variability related to fire events. This approach allows us to integrate diverse types of datasets to better understand CO and fire emission variability on multiple timescales. Our inversions result in an estimate of the carbon fluxes of an exceptional Amazonian dry season and we quantify the emission contributions of fires in the savannah (Cerrado), tropical forests, and the transition region where deforestation dominates. 
References: Naus, S, L G Domingues, M Krol, I T Luijkx, L V Gatti, J B Miller, E Gloor, et al. “Sixteen Years of MOPITT Satellite Data Strongly Constrain Amazon CO Fire Emissions.” Atmospheric Chemistry and Physics 22, no. 22 (2022): 14735–50. https://doi.org/10.5194/acp-22-14735-2022.
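The scaling-factor flavour of such an atmospheric inversion can be sketched as a Bayesian least-squares problem. The Jacobian, uncertainties and three-category source split below are synthetic placeholders, not CTDAS/TM5 output.

```python
import numpy as np

rng = np.random.default_rng(1)

# Jacobian H: sensitivity of m synthetic XCO enhancements to flux scaling
# factors for three illustrative source categories (savannah, tropical
# forest, deforestation region). Entirely schematic.
m = 50
H = rng.uniform(0.0, 1.0, (m, 3))
s_true = np.array([1.2, 0.8, 1.5])         # "true" scalings of prior fluxes
y = H @ s_true + rng.normal(0.0, 0.05, m)  # synthetic observations

s_prior = np.ones(3)
B_inv = np.eye(3) / 0.5**2    # 50% prior uncertainty per category
R_inv = np.eye(m) / 0.05**2   # observation error

# Posterior mean of the Bayesian least-squares (Gaussian) problem:
# s_post = s_prior + (H^T R^-1 H + B^-1)^-1 H^T R^-1 (y - H s_prior)
A = H.T @ R_inv @ H + B_inv
s_post = s_prior + np.linalg.solve(A, H.T @ R_inv @ (y - H @ s_prior))
print(np.round(s_post, 2))
```

With enough well-distributed observations, the posterior scalings recover the synthetic truth; the real system additionally separates timescales across the long- and short-window steps described above.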

Tuesday 24 June 14:00 - 15:30 (Room 1.34)

Presentation: Novel Earth observation data-model fusion approaches reveal dominant role of woody debris in fire emissions in the Amazon and Cerrado

Authors: Matthias Forkel, Christine Wessollek, Vincent Huijnen, Niels Andela, Jos de Laat, Daniel Kinalczyk, Christopher Marrs, Dave van Wees, Ana Bastos, Dr. Philippe Ciais, Dominic Fawcett, Johannes Kaiser, Carine Klauberg, Erico Kutchartt, Rodrigo Leite, Wei Li, Carlos Silva, Stephen Sitch, Jefferson Goncalves De Souza, Sönke Zaehle, Stephen Plummer
Affiliations: TUD Dresden University of Technology, Royal Netherlands Meteorological Institute (KNMI), BeZero Carbon Ltd., Vrije Universiteit, Universität Leipzig, Laboratoire des Sciences du Climat et de l'Environnement, Swiss Federal Institute for Forest Snow and Landscape Research WSL, Klima- og miljøinstituttet NILU, University of Florida, Forest Science and Technology Centre of Catalonia (CTFC), University of Padova, NASA Goddard Space Flight Center, Tsinghua University, University of Exeter, Max Planck Institute for Biogeochemistry, European Space Agency, ESRIN
Emissions of greenhouse gases and air pollutants from wildfires are produced by the interplay of the chemical composition of vegetation fuels, fuel moisture, fire behaviour, and burning conditions. Established and operational Earth observation-based fire emission approaches over-simplify those processes by representing fuels in terms of simplified biome maps or by using fixed emission factors that relate the burned biomass to emissions of specific trace gases. Our aim was to better represent the complexity of fuels and burning conditions in the quantification of fire emissions by making use of several European Earth observation products. Within the ESA-funded Sense4Fire project, we therefore developed several approaches to quantify fire emissions. First, we adapted the Global Fire Atlas approach by taking active fire observations from VIIRS to map fire spread and size and to classify several fire types such as forest fires, savannah fires and deforestation fires (GFA-S4F). Second, we developed a satellite data-model fusion approach for fuel loads, fuel moisture, fuel combustion and fire emissions (TUD-S4F). TUD-S4F integrates Leaf Area Index time series from Proba-V and Sentinel-3, land cover and biomass maps from ESA-CCI, soil water index from ASCAT and burned area maps in a simple model of ecosystem fuel carbon and fuel moisture pools. Additionally, the model is calibrated against satellite observations of canopy height from GEDI, fire radiative energy derived from VIIRS, live fuel moisture content from VOD2LFMC, above-ground biomass (ESA CCI) and against databases of field and laboratory measurements of fuel moisture, litter loads, fuel consumption and emission factors. Unlike other fire emission approaches, TUD-S4F computes emission factors dynamically based on the chemical composition (i.e. lignin, cellulose, volatiles) of different fuels (litter, woody debris, herbaceous and woody biomass).
Third, we made use of Sentinel-5P TROPOMI observations to estimate carbon monoxide (CO) and nitrogen oxide (NOx) emissions in a top-down approach in order to benchmark the GFA-S4F and TUD-S4F emission estimates (KNMI-S5p approach). In addition, we used the IFS-COMPO atmospheric chemistry model to simulate the transport and distribution of the fire emission estimates in the atmosphere and then to validate the fields of CO against observed fields from TROPOMI. We applied all approaches in the Amazon and Cerrado, in southern Africa, southern Europe, and in a region in eastern Siberia. Here, however, we describe the results for the Amazon and Cerrado for the year 2020 (a large fire year) and the year 2024 (an extreme fire year). The CO and NOx emissions from the GFA-S4F and TUD-S4F approaches agree well with the top-down estimate from S5p for the year 2020. For the main fire season from August to October 2020, CO emissions are 43.7 Tg in TUD-S4F and 41.6 Tg in GFA-S4F, both consistent with the KNMI-S5p estimate of 43.6 Tg (with an uncertainty estimate of 25%). Those uncertainties are much smaller than the range of emission estimates from other established fire emission approaches (27 to 49.7 Tg CO, 8 approaches) and dynamic global vegetation models (16.8 to 57.1 Tg CO, 3 models). In the extreme fire year 2024, GFA-S4F and TUD-S4F agree well with atmospheric fields of CO from Sentinel-5P in August. CO emissions from the operational Global Fire Assimilation System (GFAS) show a strong underestimation of atmospheric CO. In September 2024, GFA-S4F and TUD-S4F emissions also show an increasing underestimation of atmospheric CO, which is, however, much smaller than that from GFAS. The uncertainties in fire emission estimates mainly originate from understorey forest fires and deforestation fires, while all approaches show higher agreement for savannah fires.
By using the TUD-S4F approach, we further investigated the contribution of different fuels to fire emissions: 75% of the total regional fire emissions over the Amazon and Cerrado originate from the burning of woody debris, with a higher contribution of woody debris in forest and deforestation fires than in savannah fires. A validation with field data shows that TUD-S4F can represent the biome-level spatial patterns in woody debris loads. Woody debris loads are the main factor affecting the spatial patterns of emission factors. From the computation of dynamic emission factors in TUD-S4F, we derive an increasingly incomplete combustion, i.e. more smouldering fires, with increasing loads of woody debris. The statistical distribution of emission factors corresponds to the distribution reported from field and laboratory measurements. Based on those findings, we hypothesise that the underestimation of CO in September 2024 originates from an under-detection of low-temperature smouldering combustion that occurs after the initial burning detected by active fire detections. Our results emphasise how novel Earth observation approaches to fuel and fire dynamics and to atmospheric trace gas observations reduce uncertainties of fire emission estimates and help to diagnose the representation of fuels, wildfire combustion and its effects on atmospheric composition in fire emission approaches and in global vegetation-fire models. Datasets of fire emissions from the approaches developed in Sense4Fire (TUD-S4F, GFA-S4F and KNMI-S5p) are available at https://sense4fire.eu/
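The idea that emission factors shift toward smouldering values as woody debris loads grow can be illustrated with a toy blending function. The two end-member CO emission factors, the saturation shape and the half-load constant below are all invented for illustration; they are not the calibrated TUD-S4F chemistry.

```python
def co_emission_factor(debris_load, ef_flaming=65.0, ef_smouldering=130.0,
                       half_load=5.0):
    """Blend flaming and smouldering CO emission factors (g CO per kg dry
    fuel burned) as the smouldering fraction rises with woody debris load
    (t/ha). Saturation curve, half_load and end-member values are
    illustrative placeholders only."""
    smoulder_frac = debris_load / (debris_load + half_load)
    return (1.0 - smoulder_frac) * ef_flaming + smoulder_frac * ef_smouldering

for load in (0.5, 5.0, 20.0):
    print(load, round(co_emission_factor(load), 1))
```

The monotonic rise of the CO emission factor with debris load captures, in caricature, the "more debris, more smouldering, more incomplete combustion" relationship derived above.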

Tuesday 24 June 14:00 - 15:30 (Room 1.34)

Presentation: A Mechanistic Model-Data Approach to Understand the Global Pattern of the Ocean’s Biological Carbon Pump’s Transfer Efficiency

Authors: Anna Rufas, Samar Khatiwala
Affiliations: University Of Oxford
The ocean’s biological carbon pump (BCP) transfers large amounts of carbon from the atmosphere into the ocean’s interior, contributing to oceanic carbon sequestration. Through biologically-mediated processes, the BCP generates sinking particles that transport carbon from the surface to the deep ocean (>1000 m), where it can be sequestered long-term. As anthropogenic CO₂ emissions rise, understanding how much of the BCP-generated particulate organic carbon (POC) flux reaches the deep ocean has become increasingly important. Despite significant advances in observational and modelling capabilities over the past decade, the mechanisms controlling the transfer efficiency of POC flux to the deep ocean (Teff) remain poorly understood. Here, we integrate ESA’s satellite-derived surface ocean carbon data with a novel stochastic particle tracking model developed within the BCP framework, extending the satellite-based representation of surface carbon to the ocean interior. Our goal is to understand the marine particle dynamics, surface ocean ecosystem structure and the environmental factors that control the global patterns of Teff. The model tracks discrete Lagrangian marine particles as they mechanistically interact with their biogeochemical environment (through processes such as phytoplankton photosynthesis, zooplankton grazing, egestion, heterotrophic remineralisation, dissolution and solubilisation) and with other particles (through aggregation and disaggregation) as they sink through the water column. These particles represent various forms of biological material, including living and dead phytoplankton, zooplankton faecal pellets, dead zooplankton and combinations of those, which aggregate through sorption, aided by sticky transparent exopolymer carbon.
We validate the model at six data-rich ocean locations, using observations of three particulate tracers (POC, particulate inorganic carbon, and biogenic silica) and the vertical distribution of particle numbers by size class. The model successfully reproduces local patterns and is applied globally. Our results show that phytoplankton community composition and grazing dynamics significantly influence Teff, challenging the conventional focus on temperature as the primary control.
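For context, Teff is conventionally summarised by power-law (Martin curve) flux attenuation. A minimal sketch, assuming the canonical exponent; the stochastic particle model described above resolves the processes that this single exponent lumps together:

```python
def transfer_efficiency(z_deep=1000.0, z_export=100.0, b=0.86):
    """Fraction of the POC flux at the export depth surviving to z_deep
    under Martin-curve attenuation, F(z) = F(z_export) * (z / z_export)**(-b).
    b = 0.86 is the canonical open-ocean exponent from Martin et al."""
    return (z_deep / z_export) ** (-b)

print(round(transfer_efficiency(), 3))        # canonical attenuation
print(round(transfer_efficiency(b=0.5), 3))   # weaker attenuation, higher Teff
```

A smaller exponent b (e.g. from faster-sinking or better-protected particles) raises Teff sharply, which is why the controls on attenuation matter for long-term sequestration.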

Tuesday 24 June 14:00 - 15:30 (Room 1.34)

Presentation: Satellite-constrained Dynamic Global Vegetation Models for a Near-real time Global Carbon Budget

Authors: Mike O'Sullivan, Stephen Sitch, Jefferson De Souza Goncalves, Philippe Ciais, Ana Bastos, Myriam Terristi, Sonke Zaehle, Wei Li, Luiz Aragao
Affiliations: University of Exeter, LSCE, University of Leipzig, MPI BGC, Tsinghua University, INPE
Understanding the terrestrial carbon cycle is critical for assessing climate impacts and guiding mitigation strategies. This work advances dynamic global vegetation model (DGVM) simulations with near real time (NRT) capability to evaluate recent extreme events and their effects on carbon fluxes. By prescribing satellite-derived burned areas in DGVMs, we improve the representation of global fire emissions’ magnitude and trends while also capturing the full carbon cycle dynamics, including legacy emissions and subsequent regrowth—processes that products like GFED do not provide. Integrating satellite-based observations for burned area enhances model performance, reducing uncertainties in fire emissions and also improves simulated biomass stocks and trends. This work stems from several ESA projects (NRT Carbon Extremes, RECCAP2, and EO LINCS) and demonstrates our growing ability to constrain recent major events in the carbon cycle with EO-DGVM synergy. Here we focus on the 2024 extreme events in Brazil, which experienced unprecedented forest degradation carbon losses, driven by large-scale drought conditions and widespread fires. A key strength of our methodology is that DGVMs allow us to attribute flux anomalies to key processes, such as net primary productivity, soil respiration, and fire activity, as well as study post-disturbance dynamics. Unlike traditional national inventory approaches, this method provides rapid, low-latency insights into carbon flux variability after extreme events, offering a critical advantage for policymakers. By delivering timely inputs to frameworks like the Paris Agreement’s Global Stocktake and national greenhouse gas inventories (NGHGIs), this approach supports better tracking of climate mitigation progress. The integration of satellite observations into DGVMs bridges the gap between data and actionable policy, providing a comprehensive view of carbon losses, recovery, and trends. 
This work paves the way for operational NRT carbon monitoring systems, crucial for managing ecosystems and responding to extreme events. By combining process based models with Earth Observation (EO) data, we take a significant step forward in understanding and managing the terrestrial carbon cycle, supporting ambitious climate targets in an era of rapid environmental change.
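The burned-area-driven emission term that such simulations build on is conventionally the Seiler-Crutzen product of area, fuel load, combustion completeness and an emission factor. A minimal sketch with invented inputs; the DGVMs additionally track legacy emissions and post-fire regrowth, which this one-line estimate does not:

```python
def fire_emissions_tg(burned_area_mha, fuel_load_t_ha,
                      combustion_completeness, ef_g_kg):
    """Seiler-Crutzen estimate: emissions = burned area x fuel load x
    combustion completeness x emission factor. Units: Mha, t dry matter/ha,
    fraction, g species per kg dry matter burned; returns Tg of the species."""
    dry_matter_kg = (burned_area_mha * 1e6        # Mha -> ha
                     * fuel_load_t_ha
                     * combustion_completeness
                     * 1000.0)                    # t -> kg
    return dry_matter_kg * ef_g_kg / 1e12         # g -> Tg

# Illustrative (not observed) numbers for a savanna fire season:
print(round(fire_emissions_tg(50.0, 5.0, 0.4, 65.0), 1))
```

Prescribing satellite burned area fixes the first factor in this product, which is why it constrains the magnitude and trend of simulated fire emissions so effectively.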

Tuesday 24 June 14:00 - 15:30 (Hall G1)

Session: F.05.10 50 Years of ESA/100 year Roy Gibson, Session - Roy Gibson - The Golden Age of EO

Session Agenda:
1. Welcome by Josef Aschbacher, Director General ESA
2. Message from Roy Gibson read by Volker Liebig, former Director EO, ESA, Institute of Space Systems, University of Stuttgart
3. Roy Gibson and Earth Observation by Stephen Briggs, Reading University, Department of Meteorology
4. Earth Observation Ground Breaking Science Discoveries by Maurice Borgeaud, Chair Earth Science Panel, European Space Science Committee
5. Discussion on what it means to continue the Golden Age of Earth Observation introduced by Simonetta Cheli, Director EO Programmes ESA

Speakers:


  • Dr. Josef Aschbacher - Director General ESA
  • Prof. Volker Liebig - Honorary Professor, Institute of Space Systems, University of Stuttgart, former EO Director, ESA
  • Prof. Stephen Briggs - Visiting Professor, Reading University, Department of Meteorology, Cambridge University, Department of Chemistry
  • Prof. Maurice Borgeaud - Chair Earth Science Panel, European Space Sciences Committee (ESSC), former Head of Science, Applications and Climate Activities, ESA
  • Dr. Simonetta Cheli - Director Earth Observation Programmes, ESA

Tuesday 24 June 14:00 - 15:30 (Room 0.96/0.97)

Session: A.02.02 Terrestrial and Freshwater Biodiversity - PART 2

Preserving the integrity and health of natural ecosystems, and the biodiversity they host is crucial not only for the vital services they provide to sustain human well-being, but also because natural ecosystems with a high degree of integrity and diversity tend to exhibit elevated levels of productivity and resilience. The importance of safeguarding biodiversity is increasingly recognised in many Multilateral Environmental Agreements (MEAs) which all place great emphasis on the sustainable management, restoration and protection of natural ecosystems.

The pivotal role of ecosystems in maintaining ecological balance and supporting human well-being is a unifying theme in MEAs. Noting that, despite ongoing efforts, biodiversity is deteriorating worldwide and that this decline is projected to continue under business-as-usual scenarios, Parties to the Convention on Biological Diversity (CBD) adopted the Kunming-Montreal Global Biodiversity Framework (GBF) at the 15th Conference of the Parties in December 2022. The GBF represents the most ambitious and transformative agenda to stabilise biodiversity loss by 2030 and allow for the recovery of natural ecosystems, ensuring that by 2050 all the world’s ecosystems are restored, resilient, and adequately protected. In Europe, the EU Biodiversity Strategy for 2030 aims to put Europe’s biodiversity on the path to recovery by 2030, by addressing the main drivers of biodiversity loss.

The emergence of government-funded satellite missions with open and free data policies and long-term continuity of observations, such as the Sentinel missions of the European Copernicus Programme and the US Landsat programme, offers an unprecedented ensemble of satellite observations which, together with very high resolution sensors from commercial vendors, in-situ monitoring systems and field work, enables the development of satellite-based biodiversity monitoring systems. The combined use of different sensors opens pathways for a more effective and comprehensive use of Earth Observations in the functional and structural characterisation of ecosystems and their components (including species and genetic diversity).

In this series of biodiversity sessions, we will present and discuss the recent scientific advances in the development of EO applications for the monitoring of the status of and changes to terrestrial and freshwater ecosystems, and their relevance for biodiversity monitoring, and ecosystem restoration and conservation. The development of RS-enabled Essential Biodiversity Variables (EBVs) for standardised global and European biodiversity assessment will also be addressed.

A separate LPS25 session on "Marine Ecosystems" is also organised under the Theme “1. Earth Science Frontiers - 08 Ocean, Including Marine Biodiversity”.

Topics of interest include (but are not limited to):
• Characterisation of the change patterns in terrestrial and freshwater biodiversity.
• Integration of field and/or modelled data with remote sensing to better characterise, detect changes to, and/or predict future biodiversity in dynamic and disturbed environments on land and in the water.
• Use of Earth Observation for the characterisation of ecosystem functional and structural diversity, including the retrieval of ecosystem functional traits (e.g., physiological traits describing the biochemical properties of vegetation) and morphological traits related to structural diversity.
• Sensing ecosystem function at diel scale (e.g. using geostationary satellites and exploiting multiple individual overpasses in a day from low Earth orbiters and/or paired instruments, complemented by subdaily ground-based observations).
• Assessment of the impacts of the main drivers of change (i.e., land use change, pollution, climate change, invasive alien species and exploitation of natural resources) on terrestrial and freshwater ecosystems and the biodiversity they host.
• Understanding of climate-biodiversity interactions, including the impact of climate change on biodiversity and the capacity of species to adapt.
• Understanding of the evolutionary changes of biodiversity and better predictive capabilities on biodiversity trajectories.
• Understanding of the ecological processes of ecosystem degradation and restoration.
• Multi-sensor approaches to biodiversity monitoring (e.g. multi-sensor retrievals of ecosystem structural and functional traits).
• Validation of biodiversity-relevant EO products (with uncertainty estimation).
• Algorithm development for RS-enabled Essential Biodiversity Variables (EBVs) on terrestrial and freshwater ecosystems.
• Linking EO with crowdsourcing information for biodiversity monitoring.

Tuesday 24 June 14:00 - 15:30 (Room 0.96/0.97)

Presentation: Testing AVIRIS-4 for Monitoring Grassland Biodiversity Through Imaging Spectroscopy

Authors: Tiziana Koch, Christian Rossi, Andreas Hueni, Marius Vögtli, Maria J. Santos
Affiliations: University Of Zurich, Swiss National Park
Advances in remote sensing technologies, particularly airborne imaging spectroscopy, offer great opportunities for biodiversity monitoring. The recent improvements in airborne sensors and their processing pipelines, such as for the Airborne Visible InfraRed Imaging Spectrometer 4 (AVIRIS-4), provide enhanced data quality, which is crucial for capturing fine-scale biodiversity and ecosystem dynamics in complex environments. In this study, we present the first application of AVIRIS-4 data to measure biodiversity in alpine grasslands, focusing on the Swiss National Park, a hotspot for ecological research and conservation. AVIRIS-4, operated by the Airborne Research Facility for the Earth System (ARES) platform at the University of Zurich, acquires image data across a broad spectral range (380–2490 nm) with a spectral sampling of 7.5 nm and a (sub-)meter scale spatial resolution, enabling detailed analysis of key grassland plant community traits and ecological processes. We leverage this technology to investigate relationships between spectral signatures and biodiversity metrics in alpine grassland ecosystems. The airborne data, collected under optimal cloud-free conditions, were pre-processed using a comprehensive state-of-the-art reflectance retrieval workflow that incorporates corrections for atmospheric and topographic effects. We then compare AVIRIS-4 data with comprehensive in situ ecological data collected from 80 grassland plots during the summer of 2024. Field measurements include biomass and canopy spectral reflectance measurements using field spectrometers. The biomass samples were weighed and analysed in the laboratory for various plant functional traits, such as nitrogen, potassium, and lignin content. We then use Partial Least Squares Regression (PLSR) to model the relationships between reflectance measurements from the airborne sensor and in situ measurements, including handheld spectroscopy and the chemical analysis of plant traits.
Our preliminary results demonstrate the capability of AVIRIS-4 data to accurately map grassland plant trait distributions at high spatial resolution, thereby offering new opportunities for assessing biodiversity and its change, and contributing to monitoring at the landscape scale. By integrating high-resolution imaging spectroscopy data with ground-based observations, we provide a robust framework for assessing biodiversity in alpine grasslands. This approach allows us to examine how well AVIRIS-4 can predict key ecological traits that are indicative of biodiversity patterns, which is particularly important in the context of ongoing environmental changes, where timely and precise monitoring is essential. Moreover, since AVIRIS-4 is being used as a precursor for the upcoming Copernicus Hyperspectral Imaging Mission (CHIME), our findings have broader implications: the relationships established in this study, particularly with regard to plant trait upscaling, can inform future spaceborne missions, enabling global biodiversity assessments and advancing the use of remote sensing in conservation and ecosystem management.
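The PLSR step can be illustrated with a minimal one-component PLS1 fit on synthetic "spectra". The data, band count and trait below are invented stand-ins for the AVIRIS-4 reflectance and laboratory trait measurements, and a single latent component replaces the multi-component models typically used.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "reflectance" (100 plots x 50 bands) with one latent direction
# tied to a hypothetical nitrogen content; not real AVIRIS-4 data.
n_plots, n_bands = 100, 50
latent = rng.normal(0.0, 1.0, n_plots)
loadings = rng.normal(0.0, 1.0, n_bands)
X = np.outer(latent, loadings) + rng.normal(0.0, 0.2, (n_plots, n_bands))
y = 2.0 + 0.5 * latent + rng.normal(0.0, 0.05, n_plots)

# One-component PLS1 after mean-centring: project onto the direction of
# maximum covariance between spectra and trait, then regress y on the scores.
Xc, yc = X - X.mean(axis=0), y - y.mean()
w = Xc.T @ yc
w /= np.linalg.norm(w)   # weight vector
t = Xc @ w               # scores
q = (t @ yc) / (t @ t)   # regression coefficient of y on scores

y_hat = y.mean() + t * q
r = float(np.corrcoef(y_hat, y)[0, 1])
print(f"in-sample r = {r:.2f}")
```

In practice more components, cross-validation and held-out plots would be used; the point is only how PLSR compresses collinear bands into a few trait-predictive directions.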

Tuesday 24 June 14:00 - 15:30 (Room 0.96/0.97)

Presentation: Dynamics as foundation of riverine biodiversity: towards system scale analysis of the dynamic interaction between hydromorphology and vegetation controlling ecosystem functioning and services in river corridors

Authors: Florian Betz, Baturalp Arisoy, Magdalena Lauermann, Rafael Schmitt, Prof. Dr. Tobias Ullmann
Affiliations: University of Würzburg, Catholic University Eichstätt-Ingolstadt, University of California Santa Barbara
Despite covering only 2% of the Earth's surface, freshwater ecosystems are home to approximately 10-12% of all described species. A key driver of this high biodiversity is the inherent dynamism of river corridors (i.e. rivers and their floodplains), with steep gradients of hydro-geomorphic disturbance creating a diverse mosaic of habitats. Today, the biodiversity of river corridors is critically endangered and is declining at a high rate. According to the WWF Living Planet Index, the biodiversity of freshwaters has declined by 83% since the 1970s, which is more than twice the decline in biodiversity in terrestrial ecosystems. To support nature-positive decision making and the conservation as well as restoration of river corridors, a comprehensive assessment of their structure and dynamics across a range of spatial and temporal scales is required. However, current studies tend to either focus on small scales or oversimplify the role of river dynamics in maintaining ecosystem functioning. In this study, we use the Aral Sea basin in Central Asia as an example to demonstrate the use of state-of-the-art, cloud computing enabled satellite remote sensing and digital geomorphometry for understanding river corridor structure and dynamics at the scale of the basin’s entire network of major rivers, extending along more than 15,000 km. We introduce innovative methods such as a spaceborne LiDAR and satellite time series driven approach for unsupervised classification of riparian habitats for large scale studies in data-scarce regions. In addition, we leverage the entire Landsat archive to analyze hydrologic and geomorphic dynamics of the entire river network. Using UAV- and field-mapping-derived ground truth data along with the Clay foundational model enables us to accurately predict grain-size patterns of the surfaces, soil moisture, as well as vegetation structure and biomass.
This allows us, for instance, to assess the provision of rejuvenation habitat and gross primary productivity as two fundamental ecosystem functions crucial for the long-term sustainable provision of ecosystem services. The results show that the river network of the Aral Sea basin exhibits significant heterogeneity. In the upland part, near-natural braided and braided-anastomising rivers dominate, with high inundation dynamics and high morphodynamics. These river segments are associated with a high degree of habitat diversity and large primary productivity. In the significantly modified river segments of the lower Aral Sea basin, habitat heterogeneity decreases, and gross primary productivity is also significantly lower compared to the upstream segments. These differences can be clearly attributed to anthropogenic modifications of the river corridors. Beyond the case study level, our study paves the way for a quantitative, spatio-temporal perspective on rivers and their floodplains which would not be feasible without leveraging the potential of state-of-the-art remote sensing based upon dense satellite time series, cloud computing and recent advances in foundational deep learning models. Our remote sensing approach enables scientists and practitioners to better understand the role of complex feedbacks between hydrologic, geomorphic and ecologic processes forming the basis of riverine biodiversity. It therefore supports informed, nature-positive decision making at the system scale on the path to implementing dynamic, process-oriented targets in river conservation and restoration.
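The unsupervised habitat classification mentioned above can be illustrated with a plain k-means clustering of per-pixel features. Everything here — the three synthetic feature dimensions, the choice of k = 3, and the cluster means — is an illustrative assumption, not the study's actual feature set or algorithm:

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Plain Lloyd's algorithm: assign each pixel to its nearest cluster
    centre, then recompute the centres, and repeat."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # keep the old centre if a cluster happens to empty out
        centers = np.array([X[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels, centers

# synthetic "pixels": three habitat types with distinct feature means
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=m, scale=0.3, size=(100, 3))
               for m in (0.0, 2.0, 4.0)])
labels, centers = kmeans(X, k=3)
print("cluster sizes:", np.bincount(labels, minlength=3))
```

A real pipeline would cluster time-series-derived features (e.g. inundation frequency, canopy metrics) and then interpret the clusters ecologically.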

Tuesday 24 June 14:00 - 15:30 (Room 0.96/0.97)

Presentation: A Europe-Wide Analysis Integrating Soil Biodiversity and Earth Observation-Derived Indicators

Authors: Nikolaos Tsakiridis, Ms Eleni Kalopesa, Dr Nikiforos Samarinas, Prof Emeritus George Zalidis, Prof Christian Mulder, Prof Maria Tsiafouli
Affiliations: Aristotle University Of Thessaloniki, Inter-Balkan Environment Center - Green Innovation Hub, University of Catania
Introduction: Understanding the complex interplay between soil biodiversity (SOB) and ecosystem state and services is critical for advancing sustainable land management and mitigating biodiversity loss. This study aims to increase understanding of how soil biodiversity measured at the sample/field scale is linked to ecosystem/environmental descriptors measured at landscape scale through EO. Data: The study focuses on soil nematodes, a taxonomically and functionally diverse group that is representative of the entire soil food web. Specifically, we used open access nematode community data [1] and calculated concrete biodiversity indicators such as trophic/functional diversity and metabolic footprints as proxies for functional processes from across Europe (ca. 1800 samples). These data were then integrated with Earth Observation (EO)-based high-resolution geospatial data, i.e., reflectance bands and NDVI and EVI from MODIS, topography, land use (CORINE land cover), topsoil health descriptors, soil temperature offsets (SoilTemp) [2] and other climatic variables. Methods: AI-driven modeling approaches were used to examine how nematode community metrics correlate with EO-derived data and how these relationships vary across land uses, spatial and temperature gradients. To this end, we examined the popular Random Forest and XGBoost regressors, which were optimized using a 5-fold grid search approach, while we also examined the use of a priori feature selection mechanisms. Results: The first results demonstrate that EO-derived spatial indicators were critical for scaling field observations and capturing spatial variability in soil biodiversity, highlighting the value of combining in situ biodiversity measurements with EO technologies.
In particular, Pearson’s correlations indicated a relationship between the abundance of trophic groups and the mean temperatures of the wettest and driest quarters derived from SoilTemp (ρ ≈ 0.18), with MODIS reflectance data and vegetation indices (ρ ≈ 0.25), and with C stock (ρ ≈ 0.20). The SoilTemp data, and particularly the temperature of the warmest quarter and month, exhibited the highest correlation with the production and respiration components of the metabolic footprint (ρ ≈ 0.18). The best AI models attained an R2 of ~0.85 in the training set and a mean R2 of ~0.40 in the out-of-fold validation sets. Final remarks: The study provides first insights into a scalable and replicable framework for upscaling our understanding of soil biodiversity and its links to ecosystem state and services, offering valuable insights for policymakers and land managers aiming to address biodiversity loss and enhance ecosystem sustainability across Europe. In the future, we aim to utilize the SOB4ES dataset to augment this data with more points from across the European continent and apply this methodological framework to it. Another research avenue is to utilize other Copernicus EO data and potentially products from the upcoming Vegetated Land Cover Characteristics category. References: [1] van den Hoogen, J., Geisen, S., Routh, D. et al. Soil nematode abundance and functional group composition at a global scale. Nature 572, 194–198 (2019). https://doi.org/10.1038/s41586-019-1418-6 [2] Maclean, I.M.D., Suggitt, A.J., Wilson, R.J., Duffy, J.P. and Bennie, J.J. (2017), Fine-scale climate change: modelling spatial variation in biologically meaningful rates of warming. Glob Change Biol, 23: 256-268. https://doi.org/10.1111/gcb.13343 Keywords: soil biodiversity, ecosystem services, Earth Observation, SOB4ES, nematodes, metabolic footprint, AI modeling, European soil data.
Acknowledgement: This work has received funding from the European Union’s Horizon Europe programme under the project SOB4ES (Grant agreement no 101112831).
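The 5-fold grid-search optimization described in the Methods can be sketched as follows. Since the study's Random Forest and XGBoost regressors require external libraries, a ridge regressor with a penalty grid stands in here so the example stays dependency-free; all data and grid values are synthetic assumptions:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge regression coefficients."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ y)

def kfold_mse(X, y, lam, k=5, seed=0):
    """Mean out-of-fold MSE for one hyperparameter value."""
    idx = np.random.default_rng(seed).permutation(len(X))
    folds = np.array_split(idx, k)
    errs = []
    for fold in folds:
        train = np.setdiff1d(idx, fold)
        b = ridge_fit(X[train], y[train], lam)
        errs.append(np.mean((y[fold] - X[fold] @ b) ** 2))
    return float(np.mean(errs))

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 10))                    # stand-in EO covariates
y = X @ rng.normal(size=10) + rng.normal(scale=0.5, size=200)

grid = [0.01, 0.1, 1.0, 10.0]                     # hyperparameter grid
scores = {lam: kfold_mse(X, y, lam) for lam in grid}
best = min(scores, key=scores.get)                # pick best CV score
print("best lambda:", best, "CV MSE:", round(scores[best], 3))
```

The same loop structure applies when the inner model is a Random Forest or XGBoost regressor and the grid spans tree depth, number of estimators, etc.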

Tuesday 24 June 14:00 - 15:30 (Room 0.96/0.97)

Presentation: Vegetation Dynamics in an Alpine Protected Area, the Gran Paradiso National Park (NW Italy) from a Remote Sensing Perspective

Authors: Chiara Richiardi, Dr. Consolata Siniscalco, Maria Patrizia Adamo
Affiliations: Laboratory Biodiversity and Ecosystems, Division Anthropic and Climate Change Impacts, ENEA, Department of Life Sciences and Systems Biology, University of Torino, National Research Council (CNR), Institute of Atmospheric Pollution Research (IIA), c/o Interateneo Physics Department,
Alpine ecosystems are highly sensitive to environmental changes, making long-term monitoring essential for biodiversity conservation and ecosystem management. This study presents an analysis of 39 years (1985–2023) of vegetation dynamics in the Gran Paradiso National Park (GPNP), the oldest protected area in Italy. Using multispectral Landsat imagery (Landsat 4–9) at a 30 m resolution, we examined land cover changes with a focus on ecological and climatic drivers. Seasonal composite images were developed for the growing (June 15–August 31) and senescence (September 15–November 30) seasons, employing a refined Best Available Pixels (BAP) methodology. Terrain-corrected images were processed using the improved cosine algorithm and snow/cloud masks were applied to enhance data quality. Eight land cover types were classified using a Random Forest algorithm, trained with a dataset built from the high-resolution (0.5 m) land cover cartography provided by GPNP. Rigorous validation was conducted using confusion matrices and independent photointerpreted datasets, yielding consistently high accuracy (Overall Accuracy >96%; Cohen’s Kappa >0.90). Key spectral indices, such as the Enhanced Vegetation Index (EVI) and the Normalized Difference Snow Index (NDSI), and topographic variables derived from Digital Terrain Models (DTMs) were instrumental in improving classification performance. Results highlight significant land cover trends: a loss of grasslands (-10 ha/year), largely due to shrub encroachment (+10 ha/year), and the expansion of rocky habitats (+8.6 ha/year), likely driven by glacier retreat. These patterns varied across altitudinal zones, with grassland loss most pronounced in the subalpine belt (1900–2300 m a.s.l.) and shrub encroachment prevalent at mid-elevations. Spatial analysis revealed distinct regional dynamics, with the Piedmont side experiencing greater grassland declines than the Aosta Valley.
Change detection identified three pixel categories: stable (65%), mixed (18%), and transitional (17%), providing insights into vegetation stability and transition processes. Shrublands exhibited the lowest stability (19%), reflecting their high sensitivity to climate and land-use changes. Further analysis showed a strong correlation between land cover dynamics and anthropogenic drivers, such as land abandonment, alongside climatic factors, including snow cover duration (SCD) and glacier retreat. This study offers novel insights into the mechanisms shaping alpine landscapes under environmental and anthropogenic pressures. It underscores the critical role of remote sensing for long-term monitoring of protected areas and biodiversity, highlighting its relevance for conservation policies and adaptive management strategies. These findings contribute to advancing Earth observation methodologies, demonstrating their potential for scaling up to other alpine and protected regions globally.
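For reference, the two spectral indices named above follow standard formulas computed from surface-reflectance bands. The reflectance values below are toy numbers for a vegetated and a snow-covered pixel, not data from the study:

```python
import numpy as np

def evi(nir, red, blue):
    # Enhanced Vegetation Index with the standard Landsat coefficients
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

def ndsi(green, swir1):
    # Normalized Difference Snow Index
    return (green - swir1) / (green + swir1)

# toy reflectances: column 0 = vegetated pixel, column 1 = snow-covered pixel
nir   = np.array([0.45, 0.60])
red   = np.array([0.08, 0.55])
blue  = np.array([0.04, 0.50])
green = np.array([0.10, 0.70])
swir1 = np.array([0.20, 0.10])

print("EVI :", np.round(evi(nir, red, blue), 2))
print("NDSI:", np.round(ndsi(green, swir1), 2))
```

High EVI flags vigorous vegetation, while high NDSI flags snow — which is why the NDSI also serves to build the snow masks mentioned in the abstract.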

Tuesday 24 June 14:00 - 15:30 (Room 0.96/0.97)

Presentation: Woody Cover Dynamics in Land-Water Interfaces Across Pan-Europe (1990–2024)

Authors: Xiang Liu, Dr Matthias Baumann, Dr Sonja Jähnig, Tobias
Affiliations: Humboldt University of Berlin, Leibniz-Institute of Freshwater Ecology and Inland Fisheries
The land-water interface (LWI) is a critical ecological transition zone where terrestrial and aquatic ecosystems converge, playing a pivotal role in maintaining biodiversity, regulating hydrological processes, and mitigating flood risks (Bänziger, 1995). However, these areas face mounting threats from habitat fragmentation, ecological degradation (Dreyer & Gratton, 2014), and public health risks such as zoonotic disease spread (Karr & Schlosser, 1977). Climate change and intensified flooding are also reshaping the structure and function of LWIs, impacting vegetation dynamics and ecosystem resilience. Understanding these changes, particularly concerning woody cover dynamics, is essential for informed conservation and land-use management. This study presents the first high-resolution (30 m) map of the LWI across Europe, developed using integrated Digital Elevation Models (DEM) and remote sensing data. LWIs cover 11.51% of Pan Europe’s total area, with significant spatial variability. Northern and Eastern Europe exhibit dense, natural LWIs dominated by wetlands and riparian zones, while Western and Southern Europe show extensive fragmentation driven by urbanization and agricultural expansion. Analysis reveals that climate-influenced flooding patterns contribute to the persistence of natural LWIs in certain regions while exacerbating degradation in others. From 1990 to 2024, woody cover within LWIs exhibited significant net growth, with 441,368.7 km² showing marked increases. Northern and Eastern Europe saw the most pronounced gains, driven by rewilding, natural regeneration, and conservation efforts. However, urbanized and agricultural LWIs experienced limited increases due to intensive land use and reduced ecological connectivity. These findings underscore the dual influence of conservation policies and the increasing variability of flooding and climatic conditions on vegetation recovery and ecosystem resilience. 
By integrating spatial and temporal analyses, this study comprehensively assesses LWI extent, woody cover dynamics, and the impacts of land use change, climate, and flooding. The findings offer critical insights for mitigating ecological and socio-economic risks, enhancing flood resilience, conserving biodiversity, and guiding sustainable development in Europe’s land-water boundaries.

Tuesday 24 June 14:00 - 15:30 (Room 0.96/0.97)

Presentation: Tracking Lake Phytoplankton Blooms: A Global Remote Sensing Approach

Authors: Jelle Lever, Stefan Simis, Dr. Luis J. Gilarranz, Dr. Petra D’Odorico, Christian Ginzler, Dr. Achilleas Psomas, Prof. Dr. Alexander Damm, Arthur Gessler, Dr. Yann Vitasse, Dr. Daniel Odermatt
Affiliations: Swiss Federal Research Institute WSL, Swiss Federal Research Institute Eawag, Plymouth Marine Laboratory, University of Zürich
The impacts of climate change, eutrophication, and other anthropogenic factors on the timing, duration, intensity, and spatial extent of lake phytoplankton blooms are a growing global concern. Changes in these key bloom characteristics are especially problematic because they may form a threat to water quality and aquatic ecosystems. This, in turn, can lead to economic losses, health hazards, reduced drinking water quality, and, in some cases, toxicity to aquatic life. These properties are, therefore, critical indicators of how lake ecosystems are responding to environmental stressors. The high inter-annual variability in bloom dynamics, however, makes it difficult to detect consistent trends across years and attribute changes to specific drivers. In addition, regional variability in the underlying environmental factors – such as temperature, radiative forcing, nutrient loading, and hydrology – further complicates this analysis. Therefore, reliable global data across an extensive time period are needed alongside robust analytical methods to better understand these dynamics and inform effective management strategies. The goal of this study is to develop a comprehensive global dataset that allows for the analysis of multi-decadal change in bloom properties across a wide range of biogeographic and environmental conditions. To this end, we analyze data from 2,024 lakes across the globe. The satellite data used in this analysis are derived from the Medium Resolution Imaging Spectrometer (MERIS) and the Ocean and Land Colour Instrument (OLCI) on European Space Agency (ESA) satellites, covering the years 2002-2012 and 2016-2022, respectively. By analyzing daily chlorophyll-a estimates from these sensors, we extract phenology metrics that represent the onset and decline of peak chlorophyll-a concentrations, as well as the magnitude of fitted time series for each pixel, among other properties.
Using these metrics as a basis, we then proceed with identifying bloom events at the lake scale. This involves detecting clusters of chlorophyll-a peaks across pixels and years that occur during the same seasonal period, which are indicative of recurring bloom events. Ultimately, this enables us to obtain information on changes in the characteristics of recurring phytoplankton blooms – particularly their timing, duration, and extent at the lake level for the above-mentioned time periods. The large number of lakes analyzed with consistent methods provides a solid basis for further research. This research could, for example, attempt to disentangle the relative effects of different environmental drivers and the interplay between them. By incorporating environmental data such as temperature, precipitation, and nutrient concentrations, we may be able to understand the relative contributions of climate change, land use change, and other anthropogenic factors to the observed trends in bloom dynamics. This knowledge will be crucial for guiding policy decisions aimed at mitigating the impacts of harmful algal blooms, improving water management practices, and protecting freshwater ecosystems. Moreover, by providing a global perspective on algal bloom dynamics, our research will contribute to the growing body of knowledge on the intersection of climate change, eutrophication, and aquatic ecosystem health. In conclusion, our study underscores the importance of satellite remote sensing in advancing our understanding of global lake phytoplankton bloom dynamics. By tracking lake phytoplankton bloom characteristics, we aim to provide critical insights that will help inform management strategies at local, regional, and global scales. This work is a step toward better quantifying the impacts of environmental change on freshwater ecosystems and developing more effective policies to mitigate the threats posed by harmful algal blooms.
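The per-pixel phenology extraction described above can be illustrated on a toy daily chlorophyll-a series. The Gaussian bloom shape and the half-maximum onset/decline threshold are simplifying assumptions for illustration, not the processing chain applied to the MERIS/OLCI data:

```python
import numpy as np

# toy daily chlorophyll-a series for one pixel: background of 1, a
# Gaussian-shaped bloom peaking on day 180 with ~20-day width
days = np.arange(365)
chl = 1.0 + 8.0 * np.exp(-0.5 * ((days - 180) / 20.0) ** 2)

# phenology metrics: peak timing, and onset/decline where the series
# crosses half of the peak amplitude above background
peak_day = int(days[np.argmax(chl)])
half = chl.min() + 0.5 * (chl.max() - chl.min())
above = days[chl >= half]
onset, decline = int(above[0]), int(above[-1])

print(f"peak day {peak_day}, onset {onset}, decline {decline}, "
      f"duration {decline - onset} days")
```

Repeating this per pixel and per year, and then clustering peaks that recur in the same seasonal window, yields the lake-scale bloom events the abstract describes.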

Tuesday 24 June 14:00 - 15:30 (Room 0.49/0.50)

Session: A.08.03 Ocean Salinity

Ocean salinity is a key variable within the Earth’s water cycle and a key driver of ocean dynamics. Sea surface salinity (SSS) has been identified as Essential Climate Variable by the Global Climate Observing System (GCOS) and Essential Ocean Variable by the Global Ocean Observing System (GOOS). Through the advent of new observing technologies for salinity and the efforts to synthesize salinity measurements with other observations and numerical models, salinity science and applications have significantly advanced over recent years.
This Session will foster scientific exchanges and collaborations in the broad community involved in ocean salinity science and applications, widely encompassing satellite salinity (e.g., SMOS and SMAP) data assessment and evolution, multi-mission merged product generation (e.g., CCI-salinity), exploitation of in-situ assets for calibration and validation and related platforms (e.g., Salinity PI-MEP), and ultimately broad salinity-driven oceanographic/climatic applications and process studies.

Tuesday 24 June 14:00 - 15:30 (Room 0.49/0.50)

Presentation: ESTIMATING SEA SURFACE SALINITY IN COLD SEAS WITH CRYORAD 0.4-2GHZ WIDEBAND RADIOMETER

Authors: Jean-Luc Vergely, Dr Jacqueline Boutin, Stéphane Ferron, Dr Marie-Laure Frery, Dr Giovanni Macelloni, Dr Marco Brogioni, Eric Jeansou, Veronique Bruniquel
Affiliations: ACRI-ST, CNRS, CNES, CNR-IFAC
Salinity in polar oceans is changing. Sea ice melt and increased continental runoff are responsible for a decrease of sea surface salinity (SSS) in most regions of the Arctic Ocean. In the Southern Ocean, recent changes of Antarctic sea ice extent and thickness are also prone to modify SSS, and to increase the upper ocean stratification. These changes have strong implications for the oceanic circulation, for the ocean’s capability to absorb atmospheric heat and carbon, with large consequences for Earth’s climate. A particularly important aspect is the SSS influence on the collapse of the Atlantic meridional overturning circulation, whose timing could be earlier than predicted by climate models. Improved SSS estimates in polar seas are required to monitor the evolution of freshwater fluxes at ocean boundaries (sea ice melting and formation, river runoff, precipitation effects), the variability of surface hydrography that controls deep water formation and overturning circulation, exchanges with other ocean basins and their impact on the global climate. The current generation of climate models poorly reproduces high-latitude water mass properties because of their crude representations of physical processes such as lateral mixing, convection, and entrainment (especially in the marginal ice zone). These limitations impair the modeled response to climate change. SSS is recognized as an Essential Climate Variable (ECV) by the Global Climate Observing System (GCOS) and as an Essential Ocean Variable by the Global Ocean Observing System (GOOS). Current 1.4 GHz (L-band) radiometer missions have provided unprecedented SSS measurements over the global ocean at 40-150 km scales with a revisit time of 3 to 8 days, and continuity of observations is recognized as a high priority that will be partially addressed by the CIMR Copernicus mission. 
However, for cold waters, the sensitivity of the L-band brightness temperature to SSS decreases (roughly by a factor 3 between 30°C and 0°C), leading to greater uncertainties in polar SSS. The CryoRad mission, selected as an ESA EE12 mission candidate, includes a radiometer covering an extended frequency range between 0.4 and 2 GHz, one of the aims of which is to improve the accuracy of SSS measurements in cold waters by at least a factor of 2 compared with L-band measurements. As part of the CNES ‘Salinity estimate in cold seas using multiband 0.4-2GHZ’ research and technology study (R&T DTN/CD/AR-2024.0009418), and of the ESA CryoRad Earth Explorer 12 Phase 0 Science and Requirements Consolidation Study (SciReC) (ESA 4000145903/24/NL/IB/ar), we performed simulations based on a CryoRad simplified instrument model in order to demonstrate CryoRad's contribution to SSS estimation at high latitudes. We simulate SSS retrieval uncertainties taking into account various contributors to the radiometric measurements, such as sea surface temperature, wind speed, and atmospheric influences, as derived from radiative transfer model elements well validated at L-band and propagated to lower frequencies using physically based considerations. This simulator is used to carry out an initial sensitivity study for level 2 and level 3 salinity estimation. We will present the way in which this simulator has been implemented (direct model, inverse model and inversion strategy) and the performance obtained in estimating the SSS in the framework of an academic study.
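The sensitivity argument above is easy to quantify: for a fixed radiometric noise, the SSS uncertainty scales inversely with |dTb/dSSS|, so the factor-3 sensitivity drop between warm and cold water translates directly into a factor-3 uncertainty increase. The numbers below (the noise level and a warm-water L-band sensitivity of about 0.75 K/psu) are illustrative assumptions, not CryoRad instrument specifications:

```python
# Back-of-envelope propagation: delta_SSS ~ sigma_Tb / |dTb/dSSS|
sigma_tb = 0.3                      # radiometric noise, K (assumed)
dtb_dsss_warm = 0.75                # |dTb/dSSS| at ~30 degC, K/psu (assumed)
dtb_dsss_cold = dtb_dsss_warm / 3   # ~3x lower sensitivity near 0 degC

for label, s in [("warm (30 degC)", dtb_dsss_warm),
                 ("cold (0 degC)", dtb_dsss_cold)]:
    print(f"{label}: SSS uncertainty ~ {sigma_tb / s:.2f} psu")
```

This is exactly the gap a lower-frequency wideband radiometer aims to close, since the Tb-to-SSS sensitivity in cold water improves toward lower frequencies.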

Tuesday 24 June 14:00 - 15:30 (Room 0.49/0.50)

Presentation: Ocean-induced magnetic field: Spatio-temporal characteristics and sensitivity to ocean flow and salinity

Authors: Jakub Velímský, Ondřej Kureš, Veronika Ucekajová, Christopher Finlay, Clemens Kloss, Rasmus Møller Blangsbøll
Affiliations: Department of Geophysics, Faculty of Mathematics and Physics, Charles University, Department of Space Research and Technology, Technical University of Denmark
Satellite magnetic field observations have the potential to provide valuable information on dynamics, heat content and salinity throughout the ocean. Here we present the expected spatio-temporal characteristics of the ocean-induced magnetic field at satellite altitude at periods of months to decades. To characterize the expected ocean signal we make use of advanced numerical simulations taking high resolution oceanographic inputs and solve the magnetic induction equation in 3D including galvanic coupling and self-induction effects. We compare the magnetic field calculated for several different ocean models, and isolate spatio-temporal features which are consistent across the inputs. We also investigate the sensitivity of the ocean-induced magnetic field to the sea surface salinity constrained by satellite observations (CCI+SSS).
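For orientation, the magnetic induction equation referenced above is typically written in the following standard form (a sketch, not the study's exact 3D formulation), where u is the ocean flow, B the magnetic field, and the magnetic diffusivity η = 1/(μ₀σ) depends on the seawater conductivity σ, itself set by salinity and temperature:

```latex
\frac{\partial \mathbf{B}}{\partial t}
  = \nabla \times \left( \mathbf{u} \times \mathbf{B} \right)
  - \nabla \times \left( \eta \, \nabla \times \mathbf{B} \right),
\qquad \eta = \frac{1}{\mu_0 \sigma}
```

The salinity dependence enters through σ, which is why satellite SSS products such as CCI+SSS can constrain the simulated ocean-induced signal.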

Tuesday 24 June 14:00 - 15:30 (Room 0.49/0.50)

Presentation: Maritime continent water cycle as a key forcing for decadal variation of upper-ocean salinity in the southeast Indian Ocean

Authors: Tong Lee, Dr. Sreelekha Jarugula, Dr. Ou Wang, Severine Fournier
Affiliations: NASA Jet Propulsion Laboratory
Argo measurements illustrate pronounced decadal variation of salinity in the southeast Indian Ocean (SEIO) that is coherent in the upper 200 m, with freshening from the mid-2000s to the early 2010s followed by salinification afterwards. The SEIO decadal salinity variation contributed to over half the magnitude of decadal sea level variation in this region. Sea surface salinity (SSS) from SMOS captures the SEIO salinification after the early 2010s with a much better-defined spatial structure than that depicted by Argo. SMOS data also reveal the linkage of the decadal SSS signal in the SEIO with that in the maritime continent, which is not sampled by Argo. Previous studies suggested several possible factors contributing to the SEIO decadal salinity signal: SEIO local winds, remote winds in the tropical Pacific forcing the Indonesian throughflow (ITF) that advects the salinity signal into the SEIO, SEIO local evaporation-precipitation (E-P), and remote E-P in the maritime continent. These studies did not agree on a key forcing mechanism. In particular, a recent study suggested that SEIO local wind stress is the key forcing mechanism. However, that finding was based on the association of forcing with salinity variability without demonstrating causality. Here, we attribute decadal variation of SEIO salinity by isolating the contributions of E-P and wind stress forcings through forcing sensitivity experiments using the ECCO ocean modeling and state estimation system (https://ecco.jpl.nasa.gov). Our causality analysis reveals that maritime continent E-P is the key forcing for decadal variation of SEIO salinity. Decadal variations in winds, suggested by some previous studies, play little role. Therefore, the climatological ITF (forced by climatological winds) carrying decadal variation of freshwater content from the maritime continent into the SEIO is the main oceanic process that transmits the maritime continent water cycle effect to the SEIO.
We further strengthen our finding through a budget analysis of the SEIO salinity.

Tuesday 24 June 14:00 - 15:30 (Room 0.49/0.50)

Presentation: Monitoring Freshwater Variability in Southwest Greenland Using Satellite and In-Situ Observations

Authors: Fabrice Bonjean, Gilles Reverdin, Jacqueline Boutin, Jean-Luc Vergely, Sébastien Guimbard, Nicolas Kolodziejczyk
Affiliations: CNRS, ACRI-ST, Oceanoscope, LOPS
Understanding the variability of Sea Surface Salinity (SSS) in the subpolar North Atlantic is critical for assessing freshwater transport and its role in the global climate system. This study focuses on the region south of Greenland, which is influenced by significant freshwater inputs from the Arctic and Greenland ice melt. Using SSS data from the Climate Change Initiative (CCI) alongside a comprehensive set of in-situ observations, we analyzed key events and variability in the area, exploring mechanisms driving the exchange between the shelf and the open ocean. The CCI SSS product, which integrates data from multiple satellite missions, effectively captures the seasonal and interannual variability of SSS beyond 50 km from the coast. Our analysis of a well-sampled freshwater event in fall 2021 highlights the capability of satellite SSS to track the transfer of fresh shelf waters into the open ocean. This "fresh blob" event, driven by strong northwesterly winds, resulted in the transport of freshwater from the East Greenland Current into the interior Labrador Sea. Weekly CCI SSS fields capture the westward progression of this anomaly, a feature corroborated by in-situ data from Argo floats and drifters. Despite these successes, challenges remain in coastal regions where biases in CCI SSS highlight the need for improved absolute calibration. Positive biases near the Greenland shelf have been linked to limitations in the previous mean calibration against the ISAS climatology, which struggled to capture small-scale variability near the coast. A recently updated ISAS climatology has now been utilized for the calibration of the latest CCI SSS version, and updated results addressing these issues will be presented. Nonetheless, these findings underscore the need for higher-resolution satellite sensors to resolve the finer-scale processes governing coastal freshwater transport. 
Building on these results, this study serves as a foundation for broader applications of satellite-derived SSS in monitoring high-latitude freshwater variability. Future efforts will extend the methodology to other regions in the Northern Hemisphere, applying a systematic approach to integrate satellite and in-situ observations for enhanced tracking of freshwater anomalies. Examples of these extensions will be presented, showcasing their potential to improve understanding of freshwater pathways and their influence on the subpolar gyre and global thermohaline circulation.

Tuesday 24 June 14:00 - 15:30 (Room 0.49/0.50)

Presentation: CCI+SSS: Expanding Sea Surface Salinity Research to Meet Climate Challenges

Authors: Jacqueline Boutin, Dr. Nicolas Reul, Dr Rafael Catany, Dr Roberto Sabia
Affiliations: CNRS/LOCEAN, IFREMER, ARGANS, ESA
Sea Surface Salinity (SSS) is an increasingly used Essential Ocean and Climate Variable. The Soil Moisture Ocean Salinity (SMOS), Aquarius, and Soil Moisture Active-Passive (SMAP) satellite missions provide SSS measurements with very different instrumental features leading to specific measurement characteristics. The ESA-funded Climate Change Initiative Salinity project (CCI+SSS) aims to produce a SSS Climate Data Record based on those satellite measurements. The instrumental differences are carefully adjusted to generate a homogeneous Climate Data Record (CDR) [Boutin et al., 2021]. An optimal interpolation in the time domain, without temporal relaxation to reference data or spatial smoothing, is applied, which preserves the original dataset's variability. CCI+SSS fields are well-suited for monitoring weekly to interannual signals at spatial scales ranging from 50 km to the basin scale. In this presentation, we review scientific findings from the CCI+SSS project team in recent years. We also detail the improvements included in the CCI+SSS version 5.5 dataset, which covers the 2010-2023 period and will be delivered at the end of 2024. Since CCI+SSS version 4, and following users' recommendations, global SSS fields are provided on a rectangular 0.25° grid. Polar SSS fields on the EASE polar grid are also provided. Compared with previous CCI+SSS versions, version 4 was of noticeably better quality in high-latitude regions. The Climate Research Group used this dataset to show a relationship between the interannual SSS variability in the Barents Sea and the regional sea ice coverage. CCI+SSS enabled us to monitor the spatio-temporal evolution of a fresh event west of Greenland in the fall of 2021. In the tropics, the application of a new RFI contamination correction (Bonjean et al. 2024) allowed the restoration of the interannual SSS variability related to ENSO, which was, in previous versions, masked by RFI contamination around the island of Samoa.
The CCI+SSS fields have been used to assess model results at a global scale, with or without data assimilation (GLORYS model), and in river plume regions such as the Amazon plume (NEMO-PISCES biogeochemical model, Gévaudan et al., 2022) and the eastern tropical Atlantic Ocean (Thouvenin-Masson et al., 2024). A main uncertainty for simulating SSS interannual variability has been identified as coming from uncertainty in river discharges. CCI version 5 (2010-2023) uses SMOS SSS derived with a dedicated reprocessing and the recent SMAP version 5.3 SSS to improve temporal stability. In regions contaminated by RFI, SMOS SSS variability is recovered using a methodology adapted from Bonjean et al. (2024). Systematic latitudinal-seasonal SSS corrections are applied, as well as corrections for temperature and wind-related effects. This leads to significant improvements, especially at high latitudes.
Participants in the CCI+SSS team are: J. Boutin (1), N. Reul (2), R. Catany (3), A. Martin (4), J. Jouanno (5), L. Bertino (6), F. Rouffi (7), F. Bonjean (1), G. Corato (8), M. Gévaudan (2), S. Guimbard (9), P. Hudson (4), N. Kolodziejcyk (2), M. Martin (10), X. Perrot, R. Raj (6), E. Rémy (11), G. Reverdin (1), A. Supply (2), C. Thouvenin-Masson (1), J.L. Vergely (7), J. Vialard (1), R. Sabia (12), S. Mecklenburg (12). (1) LOCEAN, (2) LOPS/IFREMER, (3) ARGANS, (4) NOC, (5) LEGOS, (6) NERSC, (7) ACRI-st, (8) ADWAISEO, (9) OCEANSCOPE, (10) METOFFICE, (11) MERCATOR OCEAN INTERNATIONAL, (12) ESA.
References:
Bonjean, et al. (2024), Recovery of SMOS Salinity Variability in RFI-Contaminated Regions, IEEE Transactions on Geoscience and Remote Sensing, doi:10.1109/TGRS.2024.3408049.
Boutin, J., et al. (2021), Satellite-Based Sea Surface Salinity Designed for Ocean and Climate Studies, Journal of Geophysical Research: Oceans, 126(11), e2021JC017676, doi:10.1029/2021JC017676.
Gévaudan, et al. (2022), Influence of the Amazon-Orinoco discharge interannual variability on the western tropical Atlantic salinity and temperature, Journal of Geophysical Research: Oceans, 127, e2022JC018495, doi:10.1029/2022JC018495.
Thouvenin-Masson, et al. (2024), Influence of river runoff and precipitation on the seasonal and interannual variability of sea surface salinity in the eastern North Tropical Atlantic, Ocean Sci., 20, 1547-1566, doi:10.5194/os-20-1547-2024.

Tuesday 24 June 14:00 - 15:30 (Hall M1/M2)

Session: F.01.03 Trends in Earth Observation Education and Capacity Building: Embracing Emerging Technologies and Open Innovations - PART 2

Education activities in recent years have undergone a significant transformation related to the global digitalization of education and training. Traditional teaching methods, like face-to-face training provided to small groups of students, are being complemented or even replaced by massive open online courses (MOOCs), with hundreds of participants following a course at their own pace. At the same time, the Earth observation sector continues to grow at a high rate; in Europe, the European Association of Remote Sensing Companies (EARSC) reported in 2023 that the sector had grown by 7.5% over the past five years.
This session will cover new trends in modern education in the Space and EO domains as well as methods, use cases, and opportunities to cultivate Earth observation literacy in diverse sectors, such as agriculture, urban planning, public health, and more. It will focus on new methods and tools used in EO education and capacity building, such as: EO data processing in the cloud, processing platforms and virtual labs, dashboards, new and innovative technologies, challenges, hackathons, and showcase examples which make successful use of EO data. Participants will also have the opportunity to share and discuss methods for effective workforce development beyond typical training or education systems.
Based on the experience of space agencies, international organisations, tertiary lecturers, school teachers, universities and companies working in the domain of space education, this session will be an opportunity to exchange ideas and lessons learnt, discuss the future opportunities and challenges that the digital transformation of education has brought, and consolidate recommendations for future education and capacity building activities. It will also explore opportunities to further collaborate, build EO literacy among new users outside the Earth and space science sector, and expand the impact of EO across sectors.

Tuesday 24 June 14:00 - 15:30 (Hall M1/M2)

Presentation: Tools in action: Tailoring user-friendly solutions for varied educational environments

Authors: Tobias Gehrig, Maike Petersen, Johannes Keller, Alexander Siegmund
Affiliations: Heidelberg University of Education, Heidelberg Center for the Environment (HCE) & Institute of Geography, Heidelberg University
Modern approaches to Earth Observation (EO) hold significant potential for enhancing our understanding of the Earth system in the context of the Sustainable Development Goals (SDGs). They provide a wealth of opportunities for climate and environmental education, offering unique insights into the state of, and changes occurring at, virtually any location on Earth. Additionally, these applications create numerous educational opportunities by linking Geography, STEAM education (Science, Technology, Engineering, Arts, Mathematics), and education for sustainable development (ESD). However, while EO data are accessible and visually compelling, their interpretation requires an understanding of complex technical, environmental, social, and ethical contexts (Ohl, 2013). Consequently, their implementation in education is often hindered by time constraints, a lack of expertise among teachers, and the absence of suitable teaching examples and applications for students to analyse EO data (Dannwolf et al., 2020). By addressing these barriers, the Institute for Geography and Geocommunication (rgeo) at Heidelberg University of Education aims to bridge the gap between science and education, bringing EO approaches into classrooms. To this end, rgeo develops and continually optimizes tailor-made digital tools that offer a user-friendly experience. These tools form the basis for teaching with EO data and include a student-friendly web-based application for analysing EO data, an adaptive e-learning platform, and an app combining EO and field work. Based on these applications, we design teaching material, e-learning modules, lesson plans, workshops and further training to ease the implementation of this highly motivating and visually appealing methodology. While the main target groups of rgeo's projects are teacher trainees, university students and secondary-school students, it increasingly also addresses vocational trainees as well as experienced teachers.
All these target groups have specific challenges which need to be addressed and considered during project planning and implementation. This talk presents two approaches to implementing EO in education. While the use of EO data in secondary schools is already common and encouraged by its inclusion in the curricula of several German federal states, the methods used are mostly confined to Google Earth/Maps to provide a first overview of a geographical phenomenon. Additional potential, such as the use of different time steps for change detection, or of UAS or multispectral data, remains largely untapped. Using such methods from actual scientific projects can, however, aid in conveying topics such as resource conflicts. The translation from a scientific project to a teaching example needs to manage a triple complexity of methodological, content-related, and ethical considerations. Students must first grasp the technical and physical foundations necessary for interpreting EO data (Keller et al., 2023). The subject matter should be aligned with key geographic concepts to help students acquire meaningful knowledge (Fögele, 2017). Furthermore, when addressing ethical issues, students need to learn how to articulate ethical questions clearly and appropriately (Barth, 2022). This presentation will therefore use a teaching example focused on the causes and consequences of land use change in West Pokot (Kenya) to illustrate how to effectively navigate this triple complexity. A second challenge lies in the applicability of EO content in vocational training. While many trainees are likely to be confronted with EO data in their future occupations or would benefit from this methodology, EO is not part of most vocational training programmes. Implementing such approaches in vocational training is even more restricted by time constraints than is the case in secondary schools.
Most teachers in this educational field are unaware of the potential of EO and need to be convinced of its benefits for their students. Courses should therefore be developed in co-design with the teachers to meet their specific needs and address their concerns. Courses are also more appealing if they are designed as project studies in which students can pursue their own ideas and topics relevant to their specific occupations. Finally, the presentation will highlight key design principles and lessons learned that play a crucial role in the development and implementation of EO-based educational approaches. These principles provide valuable guidance for overcoming the challenges described above and enable the sustainable integration of EO data into different educational formats.

Tuesday 24 June 14:00 - 15:30 (Hall M1/M2)

Presentation: Teacher’s Training in the Projects Copernicus4schools and EUthMappers

Authors: Alberta Albertella, Lorenzo Amici, Jesus Rodrigo Cedeno Jimenez, Quang Huy Nguyen, Alberto Vavassori, Prof Maria Antonia Brovelli
Affiliations: Politecnico di Milano
Over the last two years, in two different European projects, Politecnico di Milano has been involved in designing training activities for European secondary school teachers. The first project, Copernicus4Schools, is an FPCUP (Framework Partnership Agreement on Copernicus User Uptake) project involving several high schools in different countries, with the aim of stimulating students and teachers to use and better understand the Copernicus Programme and the possibilities offered by Earth Observation. It focuses in particular on climate, climate change and their consequences, illustrating how satellite images are used to monitor our planet. Several European partners are involved in the project and, among them, Politecnico di Milano has been in charge of developing the teaching materials for the teachers' trainings. These were prepared as a web-book available in English and in the languages of the countries involved in the project. The 15-hour course introduces GIS and QGIS, focusing on satellite data analysis and emergency management. Participants will learn how to access and analyse Copernicus satellite imagery, and how to retrieve and integrate datasets such as flood delineation maps, land cover data, and population information. They will also learn how to use these tools in QGIS to evaluate the impact of flooding on land use and population, providing valuable insights for emergency response and planning. All topics are introduced theoretically and applied in an exercise on real data (from a flood event that took place in 2020 in Italy), described step by step in all its aspects. The second project, EUthMappers, is an ERASMUS+ project with the aim of increasing the interest of secondary school pupils in STEM topics and of enhancing their digital skills and their environmental and civic engagement.
It will introduce them to the use of open-source geospatial tools aimed at the development of open, collaborative and inclusive mapping projects based on the OpenStreetMap platform (OSM). Initially, teachers from five schools in Italy, Spain, Slovakia, Romania and Portugal are introduced to OSM through a workshop and a handbook developed within the project. They then guide their students in developing local mapping projects, from ideation and the creation of an online mapping project on the Tasking Manager to data acquisition and visualisation. Throughout this process, the pupils are trained to improve their teamwork abilities, to think creatively and to develop a method of gathering data on their own. In the final step, to broaden the pupils' abilities, the five schools not only cooperate internally but also work together on one collaborative humanitarian project led by UN Mappers. The simultaneous mapping efforts by the students also strengthen their global collaboration skills and awareness. Participants will acquire the skills and competencies needed to work together on an international scale and to organise themselves within a group, ensuring they are equipped to manage future collaborative projects beyond the scope of the project.

Tuesday 24 June 14:00 - 15:30 (Hall M1/M2)

Presentation: SatSchool: Observing the Earth from Space, in the Classroom

Authors: Leam Howe, Alex Lewis, Rebecca Wilks, Hannah Picton, Samuel Bancroft, Catherine Mercer, Laura Bentley, Maria Paula Velasquez, Yvonne Anderson, Emily Dowd, Bryony Freer, Calum Hoad, Morag
Affiliations: University Of Leeds
SatSchool is an outreach initiative aimed at engaging lower secondary school pupils (aged 11-14) with Earth observation (EO) science, while highlighting the relevance of STEM subjects and showcasing the diverse pathways into EO careers. Initially spearheaded by PhD students from the Satellite Data in Environmental Science Centre for Doctoral Training (SENSE CDT), SatSchool has evolved into a collaborative effort involving early career researchers from institutions across the UK, including the Universities of Edinburgh, Leeds, Stirling, Glasgow, the National Oceanography Centre, and the British Antarctic Survey. SatSchool offers the opportunity for PhD students to engage in outreach as part of a supportive network, alleviating the time constraints and stress associated with individual organisation of such activities. Supported by funding totalling £23,850 from sources including NERC, SENSE CDT, the Ogden Trust, and SAGES, SatSchool has already made a significant impact, having reached over 2000 students across 37 schools in Scotland and England, and thousands more engagements through festivals and online events. SatSchool’s outreach package contains six bespoke modules (Introduction to EO, Hands on with Data, Cryosphere, Biosphere, Atmosphere, and Oceans), which draw from the broad expertise and creativity of SENSE CDT students and have been enhanced by liaison with school teachers and the European Space Education and Resources Office UK (ESERO-UK). All resources are open-source, with modular exercises enabling educators and PhD demonstrators to flexibly create EO lessons. At LPS 2025, we will showcase our open-access outreach materials, present insights gained from the development of SatSchool, and outline our future objectives.

Tuesday 24 June 14:00 - 15:30 (Hall M1/M2)

Presentation: Space fuels learning jewels: Gaining spatial literacy through gamified learning with Earth Observation

Authors: Eva-Maria Steinbacher, Thomas Strasser, Isabella
Affiliations: Paris Lodron University of Salzburg
Satellite-based Earth observation (EO) offers a wide variety of application areas for monitoring spatio-temporal environmental phenomena, such as the effects of climate change, pollution, or the loss of biodiversity. Many of these applications inherently address pressing real-world challenges for society, both current and future. EO serves as a data source for historical and up-to-date information on environmental status and changes. It is thus ideal as a learning framework for educators to teach young people about environmental changes, human impact and their consequences. This is of utmost importance, since children and adolescents from about 8 to 18 years of age are the next cohort who will shape the future by transforming their personal knowledge into action. In this context, the iDEAS:lab, the lab for science communication at the Paris Lodron University of Salzburg (PLUS), provides informal education to professionals. Emphasis is put on work within open, experience-based learning environments for integrating EO-based, gamified and experiential learning offers. These professionals operate in learning spaces that are more exploratory and less constrained, thus providing opportunities for interactive, hands-on learning experiences. In education, emotional triggers are essential to engage deeply with a topic and foster behavioral change through critical reflection. Such triggers can arise from factors like spatial proximity, personal relevance, or topics that align with our interests. What matters are subjects we can connect with and integrate meaningfully into our cognitive understanding by scaling from our personal surroundings to a perspective on the world. The use of EO in a learning environment requires basic skills for educators to interpret and contextualise the provided information. However, motivation for lesson integration is a critical asset.
For the education and training of educators, this means providing inspiration and ideas that are easy to implement - both in terms of material preparation and the resources used. At the same time, the examples introduced in the following aim to present current societal and environmental topics in a fresh context: through the use of geospatial media, both analogue and digital, presented in an engaging manner that fosters personal relevance, identification, and interest. Spatio-temporal literacy is developed by shifting perspectives on familiar surroundings, enabling learners to explore and analyze their environment in new ways. Educators focus on fostering spatial literacy through three distinct contexts: children and adolescents examine familiar locations using satellite imagery, learning to identify distinguishing features and contrasting them with their well-known ground-level perspectives. The complexity of these topics is simplified to ensure that educators can easily apply or adapt the content using conventional media. Another key aspect of the training approach is the playful method of teaching, where learning happens implicitly and is often initiated by the children and adolescents themselves. This transforms learning into an experience - an enjoyable, exploratory, and playful journey. In the following, four sustainable games for education with EO are introduced. The places to be explored can either be guided, accompanied by intriguing stories or facts, or freely chosen by the children and adolescents. Popular options often include revisiting previous places of residence, exploring past or future vacation destinations, or other locations of personal significance. Focusing on specific locations, such as infrastructure or topographical features, can be effectively achieved through a brief "Space Travel" between points of interest.
This approach becomes particularly engaging when implemented with a playful element like "Space Bingo": during a space travel guided by the educators, participants play a bingo game, identifying infrastructure elements such as railways, bus networks, industrial sites, or recreational facilities like stadiums, tennis courts, or swimming pools from a bird's-eye perspective. Using a bingo card filled with infrastructure elements or symbols, the task is to identify the corresponding elements on satellite images. Initial feedback on "Space Bingo", which can be easily and quickly conducted using freely available tools for EO data investigation (e.g. virtual globes like Google Earth), has been overwhelmingly positive in educators' practical work. Both educators and participating children and adolescents have reported high levels of interest and enthusiasm for this interactive learning activity. The Satellite Image Matching Game - also adaptable as a memory game - pairs views from a first-person perspective (in-situ images) with the Earth view from space (satellite image maps). The game is easily designed around familiar, prominent locations or collaboratively created with the children and adolescents. In the initial step, the in-situ images can be matched openly with satellite image maps, focusing on the identification and recognition of landmarks. A subsequent round of Satellite Image Memory then provides an opportunity to reinforce this knowledge at a higher level of difficulty. For teenagers, "Earth Observation - The Case Stories" offers a chance to delve deeper into specific topics. For instance, wildfires or floods can be analysed as small case studies using accessible, open-source tools like the EO Browser. In this exercise, personal connections play a key role in fostering engagement with these topics.
Examples include relatives or acquaintances affected by wildfires during vacations, or flooding events in the participants' own communities. Such personal relevance enhances identification with the subject matter and deepens understanding of its implications. In conclusion, satellite-based Earth observation provides a powerful and engaging tool for educators to teach young people about environmental and societal challenges, while offering immersive, hands-on learning experiences that foster spatio-temporal literacy. Using EO data in gamified approaches, educators can create appealing learning opportunities such as "Space Bingo" and "Satellite Image Memory", where learning is both enjoyable and meaningful, and adaptable to different interests and age groups.

Tuesday 24 June 14:00 - 15:30 (Hall M1/M2)

Presentation: Edusat Challenge: Empowering Classrooms with Satellite Earth Observation

Authors: Rosa Olivella, Laura Olivas
Affiliations: SIGTE-University of Girona
Within the framework of the Edusat project (https://www.edu-sat.com/?lang=en), designed as an innovative educational resource for learning about Earth observation through satellite imagery, the Edusat Challenge emerges as a transformative initiative. Its goal is to empower teachers and students to integrate these resources effectively into primary, secondary, and high school education, fostering engagement with global environmental phenomena and solutions.
THE CONTEXT
The Edusat project is grounded in the belief that understanding the Earth's dynamic processes is essential for addressing the challenges of global environmental change. By familiarizing students with the study of natural and human-induced phenomena, Edusat bridges the gap between theoretical knowledge and real-world application. The increasing availability of daily satellite imagery from around the world enables the identification and monitoring of critical phenomena such as forest fires, floods, glacier melting, deforestation, and urban expansion. These processes significantly impact the Earth's surface, providing invaluable learning opportunities. Edusat focuses on utilizing freely available satellite imagery, primarily from the European Space Agency's (ESA) Copernicus program and its Sentinel satellites, while also allowing for the integration of data from NASA's Landsat program. With the knowledge and skills gained through Edusat, participants can envision objectives for new space missions.
Edusat's key objectives:
1. Promoting STEAM Education: Inspire curiosity and interest in Science, Technology, Engineering, Arts, and Mathematics (STEAM) among students by providing hands-on experiences with cutting-edge tools.
2. Raising Environmental Awareness: Equip students with the ability to identify and analyze the effects of global environmental changes, fostering responsible global citizenship.
3. Fostering Creativity and Collaboration: Encourage critical thinking, problem-solving, and teamwork among teachers and students through interdisciplinary projects.
4. Empowering Educators: Provide teachers with the training, resources, and confidence needed to incorporate Earth observation into their curricula, making advanced concepts accessible at all educational stages.
THE EDUSAT CHALLENGE
Building on the success and potential of the Edusat project, the Edusat Challenge has been developed to extend its reach and impact. This initiative focuses on training secondary and upper primary school teachers to bring satellite-based Earth observation techniques into the classroom, with an emphasis on climate change and environmental impacts.
A MULTI-DIMENSIONAL LEARNING EXPERIENCE
The Edusat Challenge integrates several key elements:
- Blended Learning: Teachers engage in a 40-hour training program combining online modules and in-person workshops.
- Mentorship Support: Teachers receive continuous guidance to design and implement impactful activities tailored to their students' needs.
- Interdisciplinary Focus: Activities bridge science, geography, technology, and environmental studies, providing a holistic learning experience.
The ultimate aim is to enhance students' understanding of global issues while fostering a passion for scientific inquiry, particularly among underrepresented groups such as girls and students from disadvantaged backgrounds.
PROGRAM TIMELINE AND PHASES
The pilot program spans from November 2024 to May 2025, with a structured approach designed to ensure meaningful engagement and measurable outcomes.
Phase 1: Presentation and Registration. Teachers are introduced to the program, its goals, and the resources available. Interested participants register to join the initiative.
Phase 2: Training. Participants receive specialized training focused on the use of satellite imagery to study Earth observation and global environmental change. This training is designed to cover:
- Understanding the principles of remote sensing. Participants will gain a foundational understanding of how remote sensing works, including the science behind satellite data and its applications.
- Exploring case studies in Earth observation. Practical examples will demonstrate how remote sensing can be applied to monitor and analyze Earth systems, such as land use changes, deforestation, urban growth, and climate impacts.
- Learning to analyze natural phenomena using satellite imagery. Participants will use tools like the Copernicus Browser to explore and interpret satellite data, empowering them to investigate natural events such as floods, wildfires, and vegetation cycles.
- Documenting and communicating findings through storytelling tools. They will learn to create engaging and informative narratives using storytelling platforms like ArcGIS StoryMaps, enabling them to share their findings effectively with diverse audiences.
- Adapting activities for classroom implementation. Guidance will be provided on how to translate these skills into classroom activities, equipping educators to integrate Earth observation and environmental monitoring into their teaching.
This phase equips educators with the technical and pedagogical tools necessary to bring these concepts into the classroom, fostering a deeper understanding of environmental issues and satellite technology among their students.
Phase 3: Challenge Launch and Mentoring. Teachers lead hands-on activities in their classrooms, with mentorship provided to address challenges and optimize outcomes. Students explore environmental phenomena, analyze satellite data, and draw meaningful conclusions.
Phase 4: Results Presentation and Feedback. In the final phase, participating teams present their findings. This collaborative session includes constructive feedback from peers and mentors, fostering a culture of shared learning and continuous improvement.
PILOT IMPLEMENTATION
The pilot phase will engage 34 teachers from 27 schools across Catalonia, representing a diverse range of educational contexts.
- Training Phase: Training sessions will be held in November and December 2024, ensuring that all participants are equipped with the necessary skills to carry out activities in their classrooms.
- Mentorship and Classroom Activities: From January to March 2025, teachers will implement activities in the classroom, supported by mentors to address challenges and ensure successful integration of the program.
- Final Presentation and Evaluation: The program will culminate in May 2025 with a presentation of the results. This event will celebrate the achievements of students and teachers while providing a platform for exchanging insights and refining approaches for future iterations.
COLLABORATION AND LONG-TERM VISION
The final session will not only celebrate the program's successes but also serve as a collaborative forum where participants share insights, challenges, and innovative approaches. This exchange will provide a foundation for continuous improvement and scalability in future editions of the Edusat Challenge.
LEADERSHIP AND PARTNERS
The Edusat Challenge is spearheaded by the Geographic Information Systems and Remote Sensing Service (SIGTE) of the University of Girona, under the umbrella of the NewSpace Educational Program promoted by the Government of Catalonia (Secretariat for Digital Policies of the Department of Business and Labor and the STEAMcat program of the Department of Education and Professional Training), in collaboration with the Institute of Space Studies of Catalonia (IEEC), within the NewSpace Strategy of Catalonia. The NewSpace Strategy of Catalonia is coordinated by the Government of Catalonia (Secretariat for Digital Policies) in collaboration with the IEEC, the i2cat Foundation, and the Cartographic and Geological Institute of Catalonia, with the objective of creating a pole of innovation in the new space economy, bringing economic growth to the country and improving citizens' lives thanks to the solutions and benefits that this sector provides. By fostering innovation and collaboration, the Edusat Challenge aims to make advanced Earth observation tools accessible to classrooms, shaping a new generation of environmentally aware and scientifically skilled global citizens.

Tuesday 24 June 14:00 - 15:30 (Hall E1)

Session: C.03.07 The Copernicus Sentinel Expansion missions development: status and challenges - PART 1

The status of development of ESA missions will be outlined.
In four sessions of 90 minutes each (equivalent to a full day), participants will be offered a unique opportunity to gain valuable insights into the technology developments and validation approaches used during the project phases of ongoing ESA programmes.
The projects are in different phases (from early Phase A/B1 to launch and operations); together with industrial and science partners, the status of mission development activities will be presented.

Presentations and speakers:


CO2 Monitoring Mission Overview


  • Valerie Fernandez
  • Yannig Durand

CO2 Monitoring Mission: The Ground Segment architecture


  • Angela Birtwhistle
  • Daniela Taubert
  • Cosimo Putignano

CHIME Mission and Project Status


  • Jens Nieke
  • Marco Celesti

CHIME: Satellite, Instrument and Performances


  • Laurent Despoisse
  • Heidrun Weber

LSTM mission and project status


  • Ana Bolea
  • Miguel Such
  • Benjamin Koetz

LSTM L1 and L2 products and Algorithms


  • Itziar Barat
  • Steffen Dransfeld
  • Ignacio Fernandez Nunez

Tuesday 24 June 14:00 - 15:30 (Room 1.15/1.16)

Session: A.07.08 Global and regional water cycle in the integrated human-Earth system, estimation of hydrological variables and hyper-resolution modelling - PART 1

Water in all three phases and its cycling through the Earth system are essential to weather, climate and climate change, and to life itself. The water cycle is closely coupled with the energy and carbon cycles. Over continents, the water cycle includes precipitation (related to clouds, aerosols, and atmospheric dynamics), water vapor divergence and changes in column water vapor in the atmosphere, land surface evapotranspiration, terrestrial water storage change (related to snowpack, surface and ground water, and soil moisture change), and river and groundwater discharge (which is linked to ocean salinity near the river mouth). Furthermore, the terrestrial water cycle is directly affected by human activities: land cover and land use change; agricultural, industrial, and municipal consumption of water; and the construction of reservoirs, canals, and dams.

The EO for hydrology community is working towards datasets describing hydrological variables at steadily increasing quality and spatial and temporal resolution. In parallel, water cycle and hydrological modellers are advancing towards “hyper-resolution” models, moving towards 1 km resolution or even finer. In some cases such efforts are not just taking place in parallel but in collaboration. This session aims at presenting advances from each of the communities as well as demonstrating and promoting collaboration between the two.

Presentations are welcome that focus on at least one of the following areas:
- The global and regional water cycle and its coupling with the energy and carbon cycles in the integrated human-Earth system based on satellite remote sensing, supplemented by ground-based and airborne measurements as well as global and regional modeling
- New advances on the estimation of hydrological variables, e.g. evapo(transpi)ration, precipitation (note that there is another, dedicated session for soil moisture);
- Suitability of different EO-derived datasets for use in hydrological models at different scales;
- Capacity of different models to benefit from EO-derived datasets;
- Requirements on EO-derived datasets to be useful for the modelling community (e.g. related to spatial or temporal resolution, quality or uncertainty information, independence or consistency of the EO-derived datasets, …);
- Downscaling techniques;
- Potential of data from future EO missions and of newest modelling and AI approaches (including hybrid approaches) to improve the characterisation and prediction of the water cycle.

Tuesday 24 June 14:00 - 15:30 (Room 1.15/1.16)

Presentation: Implementing the three-source energy balance model with Copernicus-based inputs for improved evapotranspiration modeling over savanna ecosystems

Authors: Vicente Burchard-Levine, Héctor Nieto, M.Pilar Martín, Benjamin Mary, M.Dolore Raya-Sereno, Miguel Herrezuelo, Arnaud Carrara
Affiliations: Institute of Agricultural Sciences (ICA), Spanish National Research Council (CSIC), Environmental Remote Sensing and Spectroscopy Laboratory (SpecLab), Spanish National Research Council (CSIC), Fundación Centro de Estudios Ambientales del Mediterráneo (CEAM)
Accurate evapotranspiration (ET) estimates are key to better understanding ecosystem function, managing terrestrial water resources and providing early indicators of drought events. Recent ESA projects such as SenET and ET4FAO have made great strides in improving operational ET modeling by merging shortwave and thermal infrared (TIR) imagery from Sentinel-2 and Sentinel-3. Given the current lack of operational high-spatial-resolution TIR sensors (<100 m) with frequent revisit times (<1 week), SenET proposed a data mining approach to sharpen Sentinel-3 LST (1 km) to 20 m using Sentinel-2’s spectral bands. These ET algorithms implement the TIR-based two-source energy balance (TSEB) model, which has been demonstrated to provide robust ET retrievals at reasonable accuracy across a range of ecosystems and conditions. However, savannas or tree-grass ecosystems (TGEs), composed of a clumped and open tree canopy superimposed on an herbaceous understory, have inherent structural and phenological complexities, which have been shown to contribute to increased model uncertainties when conventional remote sensing approaches are applied. In light of this, the three-source energy balance (3SEB) model, an adaptation of TSEB, was proposed to better characterize the multiple vegetation layers present in TGEs. 3SEB adds an additional vegetation source to TSEB, allowing the distinct structural and phenological traits of the two co-existing plant functional types to be incorporated directly. 3SEB was previously evaluated using tower-based inputs across a range of flux sites, along with inputs stemming from geostationary satellites (i.e. 0.05° MSG-SEVIRI). The main objective of this study was to assess the performance of 3SEB when forced at medium to high spatial resolution (20-300 m) using the Sentinel constellation, as similarly applied within the SenET/ET4FAO context.
Model performance was evaluated at both the sharpened 20 m and 300 m spatial scales using Sentinel imagery, along with Landsat images (100 m), over a range of TGE eddy-covariance (EC) sites acquired from FLUXNET, ICOS, AmeriFlux and OzFlux. Preliminary results at Spanish TGE sites showed robust estimates from 3SEB, with root-mean-square errors (RMSEs) of modelled sensible and latent heat fluxes ranging between 70-80 W m-2 and no significant differences in accuracy between sharpened Sentinel and Landsat inputs. These results highlight the potential to apply 3SEB operationally at high spatial resolution with Copernicus data to improve our understanding of these complex but highly valuable ecosystems, especially with regard to the effects of global change and increased drought frequency.
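The residual energy-balance idea underlying TSEB/3SEB can be illustrated with a minimal sketch; the flux values and the simple linear partitioning of net radiation among the three sources are illustrative stand-ins, not the actual 3SEB formulation:

```python
# Minimal illustration of the residual energy-balance idea behind TSEB/3SEB:
# available energy is partitioned among sources, and latent heat (the energy
# equivalent of ET) falls out as the residual of the surface energy balance.
# Values and the proportional partitioning below are illustrative only.

def residual_latent_heat(rn, g, h):
    """Latent heat flux (W m-2) as the energy-balance residual: LE = Rn - G - H."""
    return rn - g - h

def three_source_partition(rn, f_tree, f_grass):
    """Split net radiation among tree canopy, herbaceous understory and soil in
    proportion to illustrative cover fractions (3SEB adds the third source)."""
    f_soil = 1.0 - f_tree - f_grass
    return rn * f_tree, rn * f_grass, rn * f_soil

rn = 550.0   # net radiation, W m-2
g = 50.0     # soil heat flux, W m-2
h = 180.0    # sensible heat flux, W m-2 (e.g. from aerodynamic theory + LST)

le = residual_latent_heat(rn, g, h)
rn_tree, rn_grass, rn_soil = three_source_partition(rn, f_tree=0.3, f_grass=0.4)
print(f"LE = {le:.1f} W m-2; Rn split: {rn_tree:.0f}/{rn_grass:.0f}/{rn_soil:.0f}")
```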

Tuesday 24 June 14:00 - 15:30 (Room 1.15/1.16)

Presentation: Advancing large-scale, high-resolution modelling of the water cycle

Authors: Peter Salamon, Stefania Grimaldi, Cinzia Mazzetti, Christel Prudhomme, Carlo Russo, Ervin Zsoter, Jesus Casado Rodriguez, Corentin Carton de Wiart, Juliana Disperati, Nikolaos Mastrantonas, Mohamed Azhar, Goncalo Gomes, Christoph Schweim, Tim Sperzel, Carina-Denise Lemke, Markus Ziese, Alejandro Serratosa, Tomas Jacobson, Francesca Moschini, Berny Bisselink, Davide Bavera, Andrea Ficchì, Marco Radke-Fretz, Antonio
Affiliations: European Commission Joint Research Center
Hydrological models are essential tools for assessing the water cycle. They provide relevant information to decision makers for floods, droughts, and water resource management and enable the analysis of scenarios on how a hydrological system might behave under varying natural and anthropogenic constraints. One example is the open-source hydrological model OS-LISFLOOD that is used to generate flood forecasts and drought indicators for the European and Global Flood Awareness Systems (EFAS & GloFAS) as well as the European and Global Drought Observatories (EDO & GDO) of the Copernicus Emergency Management Service (CEMS). OS-LISFLOOD is a distributed, physically based rainfall-runoff model able to represent all the main hydrological processes. As input, it needs meteorological forcings as well as surface fields encompassing (i) catchment morphology and river networks, (ii) land use, (iii) vegetation cover type and properties, (iv) soil properties, (v) lake and reservoir information, and (vi) water demand. Like other hydrological models, it requires calibration of a set of parameters using river discharge observations to adjust model behavior to specific climatic and physiographic conditions. Being used in an operational service, the hydrological model and its European and global model domain set-ups benefit from regular upgrades, with major changes in the hydrological modelling chain introduced as ‘version releases’. In its current operational version, the global model set-up (GloFAS v4.x) uses a spatial resolution of 3 arcminutes (~5.4 km) and a daily time step, whereas the European model set-up (EFAS v5.x) uses a spatial resolution of 1 arcminute (~1.8 km) and a 6-hourly time step. Both set-ups have been calibrated using discharge observations at gauging stations (1,995 stations in GloFAS and 1,903 in EFAS).
In ungauged catchments where no discharge observations were available, model parameters were regionalized using climatic similarity and geographical proximity as criteria. Both set-ups are used to provide a hydrological reanalysis as well as hydrological predictions spanning different time ranges, from short-term and medium-range predictions to monthly and seasonal outlooks. A wiki page is available for users, providing detailed information about each version release, including model set-up and skill performance (EFAS – GloFAS). In addition, an extensive model documentation, a user guide, and test catchments for OS-LISFLOOD are available on the OS-LISFLOOD webpage. A specific feature of the European and global model set-ups of OS-LISFLOOD is that not only the model and associated tools for pre-/post-processing, calibration, etc. are open-source, but also the required input and calibrated parameter maps are freely accessible. This allows users to benefit from the latest developments and innovations and, more importantly, it enables a wider community to contribute to further extending and improving the model and its set-up. In this presentation we describe the next major evolution of OS-LISFLOOD and its set-up for the European (EFAS v6.x) and global domain (GloFAS v5.x). The main foreseen changes can be grouped into three categories: (1) model input; (2) model improvements; and (3) calibration and regionalization. The main changes in the model input concern the meteorological forcings. For the European domain, the meteorological forcings benefit from an increased number of meteorological observations, improved quality control, and a modified interpolation method. In the global model domain, enhancements include a correction of spurious rainfall and a modified downscaling of ERA5 meteorological variables.
Furthermore, changes in the surface fields related to soil properties, lakes and reservoirs as well as water demand for anthropogenic use integrating the latest available datasets have been included. Hydrological model advancements focus on river routing, in particular for mild sloping rivers, and a modified reservoir routine. Furthermore, the model state initialization has been enhanced and a new modelling routine called transmission loss, which accounts for transpiration by macrophytes and riparian vegetation as well as groundwater recharge through river channels, has been added. For model calibration and regionalization, it is foreseen to increase the number of calibration stations, improve the overall performance of the objective function along the whole flow duration curve, add more hydrological performance statistics (e.g. from the Budyko framework), and to utilize the power of deep learning in the calibration of process-based hydrological models. It is expected that all those changes together contribute to a further, significant improvement in modelling the water cycle using OS-LISFLOOD at the European and global scale. In line with the current version, the improved model and its new set-up will be freely available. The hydrological model reanalysis and predictions of the upgraded set-ups will be made available on the CEMS Early Warning Data Store. Its release as part of the floods and drought prediction and monitoring systems (EFAS, GloFAS, EDO, GDO) of CEMS is foreseen during 2025.
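Calibration of the kind described above typically maximises a discharge skill score such as the Kling-Gupta efficiency (KGE); a minimal sketch of that score follows (the exact objective function used in the EFAS/GloFAS calibration may differ):

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta efficiency of simulated vs. observed discharge:
    KGE = 1 - sqrt((r - 1)^2 + (alpha - 1)^2 + (beta - 1)^2),
    where r is linear correlation, alpha the ratio of standard deviations,
    and beta the ratio of means. KGE = 1 is a perfect simulation."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1.0) ** 2 + (alpha - 1.0) ** 2 + (beta - 1.0) ** 2)

obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # illustrative gauge record
print(f"perfect simulation: KGE = {kge(obs, obs):.2f}")
print(f"20% positive bias:  KGE = {kge(1.2 * obs, obs):.2f}")
```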

Tuesday 24 June 14:00 - 15:30 (Room 1.15/1.16)

Presentation: A new approach to retrieve evapotranspiration of crops from solar-induced fluorescence and hyperspectral reflectance data

Authors: Dr Bastian Siegmann, Prikaziuk Egor, Oscar Hartogensis, Mary Rose Mangan, Jim Buffat, Joaquim Bellvert, Julie Krämer, Juan Quiros Vargas, Juliane Bendig, Patrick Rademske, Uwe Rascher, Christiaan van der Tol
Affiliations: Forschungszentrum Jülich, University of Twente, Wageningen University and Research, Institute of Agrifood Research and Technology
Keywords: Latent heat flux, solar-induced fluorescence, evapotranspiration, hyperspectral remote sensing, airborne data, SCOPE, machine-learning regression, emulation

Challenge: The increase in extreme weather events has a strong impact on the exchange of water and energy in agricultural ecosystems. Evapotranspiration (ET), as a key hydrological variable, is an important component of the energy, water and carbon cycles and provides important information for predicting and monitoring drought events. In recent decades, methods for determining ET have been combined with various types of Earth observation data to create spatial ET estimates. While most of the available approaches are based on optical reflectance and thermal remote sensing (RS) data, the use of solar-induced fluorescence (SIF) as an additional data source for ET estimation is still an under-explored field. SIF is directly emitted from the core of the photosynthetic machinery of plants and can therefore be used as a proxy of photosynthesis (PS). Since transpiration and PS are coupled processes, SIF remote sensing data provide important information for improving ET estimates. With the launch of ESA’s Earth Explorer satellite mission FLEX in 2026, which will provide high-quality SIF data from space, now is the right time to further investigate how RS SIF data can contribute to estimating ET. In this contribution, we present a new approach that uses a combined radiative transfer, photosynthesis and energy fluxes model to determine ET from airborne SIF and reflectance data. The results are finally compared to ET estimates derived from eddy-covariance (EC) data and corresponding ‘Sentinels for Evapotranspiration’ (Sen-ET) products derived from Sentinel-2 and 3 data.
Methodology: A time series consisting of seven airborne SIF and reflectance data sets covering an agricultural area in northeastern Spain was recorded by the FLEX airborne demonstrator HyPlant during the LIAISE project field campaign between 15 and 27 July 2021. The Soil-Canopy-Observation of Photosynthesis and Energy fluxes (SCOPE) model was used to derive ET, expressed as latent heat, from the airborne SIF and reflectance data. First, 3,000 reflectance simulations were generated with the combined leaf and canopy radiative transfer model (RTM) implemented as one module in SCOPE. Subsequently, a hybrid inversion scheme was applied that combines the SCOPE simulations with support vector regression (SVR) to retrieve biophysical leaf and canopy parameters from the airborne reflectance data. The inverted parameters for each pixel and meteorological data from a weather station were then used to run the full SCOPE model in forward mode for a single alfalfa field (54,000 pixels) equipped with an EC station, producing spatial estimates of latent heat (LE_SCOPE). For each pixel, numerous simulations were made using different input parameter combinations in the leaf biochemistry module of SCOPE to estimate LE. In the end, LE for each pixel was selected from the simulation for which the corresponding simulated gross primary productivity (GPP) and SIF values showed the best fits with measured GPP from the EC station and SIF retrieved from the HyPlant airborne image data, respectively. To determine the best fit for each pixel we used a cost function based on the Levenberg-Marquardt algorithm. Since running SCOPE on a per-pixel basis is very time-consuming, in a second step we built a SCOPE emulator using a Gaussian process regression (GPR) model trained with 10,000 SCOPE simulations to predict LE for all alfalfa fields covered by the airborne image data.
In that respect, an emulator can be regarded as a statistical learning model that mimics the input-output relationships of SCOPE. Once the emulator was trained, it was applied to the airborne image data, allowing us to produce LE maps of all alfalfa fields within the study site (LE_EMU) in less than ten minutes. Both LE estimates (LE_SCOPE, LE_EMU) were finally compared to LE derived from instantaneous flux measurements of the EC station located in the investigated alfalfa field at the times of the aircraft overflights (LE_EC). Furthermore, we converted and up-scaled the instantaneous airborne LE maps to daily ET maps and compared them to the corresponding Sen-ET products.

Results: The inversion of the SCOPE model in the first step led to a good match between the spatial estimates of leaf area index (LAI) and leaf chlorophyll content (LCC) and field measurements of the same parameters collected in the investigated alfalfa field (LAI: R2 = 0.86, RMSE = 0.62 m2 m-2; LCC: R2 = 0.69, RMSE = 11.37 µg cm-2). Furthermore, the comparison of the averaged simulated and measured reflectance of the field showed a high level of agreement (RMSE = 0.0222). The two spatial LE predictions from HyPlant also agreed well with the LE reference data derived from the EC station. The LE_SCOPE model provides a high R2 (0.87) and a relatively low RMSE (67.58 W m-2), but the latent heat fluxes derived from the airborne data, especially for the later observations (22-27 July), are overestimated compared to the LE_EC estimates. Although the LE_EMU model is characterized by a slightly lower R2 (0.84), the RMSE is lower (33.76 W m-2), and the slope of the regression model is closer to 1 compared to the LE_SCOPE model. The comparison of the converted and up-scaled LE_SCOPE fluxes, as daily ET_SCOPE values, with the corresponding Sen-ET product also resulted in a moderate level of agreement (R2 = 0.74, RMSE = 0.67 mm day-1).
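The emulation step can be sketched with scikit-learn, using a cheap toy function as a stand-in for SCOPE; the kernel choice, sample sizes and the toy simulator itself are all illustrative assumptions, not the study's configuration:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(42)

# Toy stand-in for the expensive simulator (SCOPE itself is far more complex):
# maps two "biophysical parameters" in [0, 1] to a single "latent heat" output.
def toy_simulator(x):
    return 100.0 + 300.0 * x[:, 0] * np.exp(-x[:, 1])

# 1) Run the simulator on a design of input parameter combinations.
X_train = rng.uniform(0.0, 1.0, size=(200, 2))
y_train = toy_simulator(X_train)

# 2) Train the emulator: a GPR model that mimics the simulator's input-output map.
gpr = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gpr.fit(X_train, y_train)

# 3) Predict for many "pixels" at negligible cost compared to the simulator.
X_pixels = rng.uniform(0.0, 1.0, size=(5000, 2))
le_emulated = gpr.predict(X_pixels)
err = np.abs(le_emulated - toy_simulator(X_pixels)).mean()
print(f"mean |emulator - simulator| = {err:.2f} W m-2")
```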
Outlook: The results of this study illustrate that additional RS-based SIF information can complement conventional RS data to improve spatial estimates of LE/ET from SIF-measuring satellites in the near future. This is especially important for the early detection of drought events and the development of adapted irrigation strategies in agriculture. In future research, the presented approach will be transferred to other crops and different climatic regions to investigate the full potential of the developed emulator. In addition, the development of a SIF-based LE/ET product could be of great interest for the FLEX satellite mission, as ESA will only deliver products up to level 2 and encourages the scientific community to develop innovative level 3 and 4 products. Although further research is needed, we are convinced that the presented approach has the potential to become such an innovative level 3 product derived from FLEX satellite data.

Tuesday 24 June 14:00 - 15:30 (Room 1.15/1.16)

Presentation: Diurnal Asymmetry Analysis Combining Energy-Water Balance Models and Geostationary Land Surface Temperature Data

Authors: Pedro Torralbo, Christian Bossung, Philippe Pinheiro, Kaniska Mallick, Chiara Corbari
Affiliations: DICA, Politecnico di Milano - POLIMI, Remote Sensing & Natural Resources Modeling, Department ERIN, Luxembourg Institute of Science and Technology, Geocomputing Research, Department ERIN, Luxembourg Institute of Science and Technology
Accurate evapotranspiration (ET) data is essential for global water management, and ET has recently been recognized as an Essential Climate Variable (ECV). To monitor ET, satellite-based operational models are commonly used; these rely on instantaneous land surface temperature (LST) and are therefore limited to clear-sky days. Moreover, these models rely solely on daily data and fail to capture the full dynamics of ET throughout the day. This incomplete representation is directly linked to the fact that ET dynamics often exhibit an asymmetry between radiation and evaporation, a phenomenon more pronounced in arid regions, where it is influenced by a complex interplay of environmental factors, such as air temperature, vapor pressure deficit and net radiation, and biophysical variables, such as vegetation conductances. Interpreting this asymmetry requires models capable of representing ET dynamics without dependence on LST data and cloud conditions. This study presents the methodology and preliminary results of the ESA-funded UNITE project, which aims to address the limitations of estimating evapotranspiration (ET) and land surface temperature (LST) under cloudy conditions. The proposed approach integrates physical water-energy balance modeling at daily-to-hourly scales with geostationary satellite LST data from MSG, improving the interpretation of energy balances and daily dynamics. The study integrates data across an aridity gradient and an ecological transect from northern Europe to southern Africa. Two models were applied: the analytical Surface Temperature Initiated Closure (STIC) model (Mallick et al., 2018, 2024), which is based on the Penman-Monteith and Shuttleworth-Wallace formulations, and the prognostic FEST-EWB energy-water-balance (EWB) model. The FEST-EWB model continuously simulates soil moisture and ET over time and space, resolving LST by ensuring the closure of the energy-water balance equations (Corbari et al., 2011).
The study analyzed the differences and similarities in ET estimates from both models across regions at eddy covariance sites with varying aridity indices during the 2019-2023 study period, aiming to validate model performance under different climate conditions. The results not only address the challenges of estimating cloudy-sky ET and LST but also offer relevant insights into the variations in diurnal hysteresis between evaporation and plant responses to daily water stress across diverse climates. These findings, spanning humid to arid regions, will contribute to the development of advanced ET products for improved agricultural water management.
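The diurnal radiation-evaporation asymmetry discussed above can be quantified as the area of the hysteresis loop traced in (Rn, LE) space over a day; a minimal sketch with synthetic diurnal cycles (the lag and magnitudes are illustrative, not project data):

```python
import numpy as np

# Illustrative hourly diurnal cycles: latent heat lagging net radiation by ~2 h,
# a simple stand-in for the radiation-evaporation asymmetry described above.
hours = np.arange(24)
rn = np.clip(600.0 * np.sin(np.pi * (hours - 6) / 12.0), 0.0, None)   # W m-2
le = np.clip(300.0 * np.sin(np.pi * (hours - 8) / 12.0), 0.0, None)   # lagged

def hysteresis_area(x, y):
    """Signed area of the closed loop traced in (x, y) space over the day
    (shoelace formula); larger magnitude = stronger diurnal asymmetry."""
    return 0.5 * np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y)

print(f"loop area = {hysteresis_area(rn, le):.0f} (W m-2)^2")
```

With no lag the loop collapses and the area is zero; the lag is what opens the hysteresis loop.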

Tuesday 24 June 14:00 - 15:30 (Room 1.15/1.16)

Presentation: The Global Atmospheric River Network: A Complex Network Approach to Global Moisture Transport Dynamics

Authors: Dr. Tobias Braun, Sara M. Vallejo-Bernal, Prof. Sebastian Sippel, Prof. Miguel D. Mahecha
Affiliations: University Leipzig, Potsdam Institute for Climate Impact Research
As the global water cycle intensifies, the frequency and severity of hydrological extremes, such as heavy precipitation events, are increasing. This poses profound challenges to terrestrial ecosystems and human systems. Atmospheric rivers (ARs) – narrow corridors of enhanced vapor transport in the lower troposphere – are a key driver of these extremes. In extratropical regions, ARs are the main moisture transport mechanism, accounting for more than 90% of water vapor transported towards the poles. While previous research has significantly advanced our understanding of their role in the global water cycle, the transport patterns of ARs at the global scale as well as their land-surface impacts remain underexplored. In this talk, I will present current progress on the ‘Living Planet Fellowship’-funded ARNETLAB project, which leverages innovative methods from complexity science to disentangle the interplay between atmospheric dynamics and land surface processes. In analogy to terrestrial river networks, the pathways that ARs follow through the Earth’s atmosphere can be effectively represented by a transport network. Generally, the paradigm of complex networks encodes interactions between the units of a system through interlinked nodes. Recent applications illustrate that complex networks have provided novel insights into climate teleconnection patterns, synchronization of extremes and vegetation-atmosphere feedbacks. We draw on the vast array of existing methods from complex network theory to reveal the “global atmospheric river network”. It is defined on a hexagonal grid to avoid distortions due to the Earth’s spherical geometry. Multiple AR catalogs can be integrated seamlessly. Using effective measures of node and edge centrality, we reconstruct the global transport infrastructure of ARs, including prominent pathways, basins, and scale-dependent regional clusters of AR dynamics.
To assess the significance of our findings, we simulate ensembles of random walkers diffusing along the AR network’s edges. This approach allows us to create a hierarchy of effective null models and to define network measures that are tailored to detecting more intricate regions that are vital for AR transport. Our preliminary findings highlight regions where AR dynamics could be less predictable, showcase how climate oscillations control AR network topology, and unveil how the AR network is evolving in a changing climate. They underscore the potential of complexity science to advance our understanding of ARs as critical components of the integrated human-Earth system. As a next step, the global atmospheric river network formalism enables us to study AR-driven moisture and heat transport networks. To this end, we systematically aggregate AR moisture and heat transport budgets along their most frequented routes. This holds particular relevance for ARs reaching the poles: here, the triggered precipitation as well as the released sensible and latent heat fluxes can exacerbate glacier melt and slow down Arctic sea ice recovery. It furthermore indicates which ARs feed hydrological and heat extremes. I will close my talk with these first advances towards AR-driven moisture and heat transport networks. These will finally help us to link the developed network framework to Earth’s land ecosystem variables, given by the full suite of remote-sensing-derived ESA land-surface data curated in the Earth System Data Lab. Overall, this talk situates ARs within the broader context of global water cycle dynamics and highlights their coupling with the terrestrial and energy cycles, offering novel perspectives on the interplay between atmospheric dynamics and land surface processes.
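The network construction can be sketched with networkx: AR tracks become directed, weighted edges between grid cells, and centrality measures then expose the cells that many pathways funnel through. The cell IDs and tracks below are invented for illustration; ARNETLAB's hexagonal-grid AR catalogs would take their place:

```python
import networkx as nx

# Hypothetical AR "tracks" as sequences of coarse grid cells (IDs are made up).
tracks = [
    ["NPac1", "NPac2", "NAmW", "NAmIn"],
    ["NPac1", "NPac2", "NAmW"],
    ["NAtl1", "NAtl2", "EurW", "EurC"],
    ["NAtl1", "NAtl2", "EurW"],
    ["NPac2", "NAmW", "NAmIn"],
]

# Nodes are grid cells; directed edge weights count how often ARs moved
# from one cell to the next along any track.
G = nx.DiGraph()
for track in tracks:
    for a, b in zip(track, track[1:]):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Betweenness centrality highlights cells that many AR pathways funnel through.
bc = nx.betweenness_centrality(G)
hub = max(bc, key=bc.get)
print(f"most central cell: {hub}")
```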

Tuesday 24 June 14:00 - 15:30 (Hall F2)

Session: A.02.03 EO for Agriculture Under Pressure - PART 5

The human impact on the biosphere is steadily increasing. One of the main human activities contributing to this is agriculture. Agricultural crops, managed grasslands and livestock are all part of the biosphere, and our understanding of their dynamics and their impacts on other parts of the biosphere, as well as on the wider environment and on the climate, is insufficient.
On the other hand, today’s Agriculture is Under Pressure to produce more food in order to meet the needs of a growing population with changing diets – and this despite a changing climate with more extreme weather. It is required to make sustainable use of resources (e.g. water and soils) while reducing its carbon footprint and its negative impact on the environment, and to deliver accessible, affordable and healthy food.
Proposals are welcome from activities aiming at increasing our understanding of agricultural dynamics, at developing and implementing solutions to the above-mentioned challenges of agriculture, or at supporting the implementation and monitoring of policies addressing these challenges. Studies on how these challenges can be addressed at local to global scales through cross-site research and benchmarking studies, such as through the Joint Experiment for Crop Assessment and Monitoring (JECAM), are welcome.

The session will hence cover topics such as:
- Impact on climate and environment
- Crop stressors and climate adaptation
- Food security and Sustainable Agricultural Systems
- New technologies and infrastructure

Tuesday 24 June 14:00 - 15:30 (Hall F2)

Presentation: Sen4Stat for leveraging the use of Earth Observation data for improved agricultural statistics: outcomes and lessons learned from 2 years of demonstration across the world

Authors: Sophie Bontemps, Pierre Defourny, Boris Norgaard, Cosmin Cara, Pierre Houdmont, Laurentiu Nicola, Cosmin Udroiu, Zoltan Szantoi
Affiliations: UCLouvain-Geomatics, CS GROUP - ROMANIA, ESA-ESRIN
Over the last decade, food security has become one of the world’s greatest challenges. By 2050, the world’s population will be 34 percent higher than today, and this massive increase will mainly affect developing countries and increase food demand. Reliable, robust and timely information on food production, agricultural practices and natural resources is required. The potential of satellite Earth Observation (EO) for agricultural statistics has long been recognized, but this has not yet led to the adoption of the technology by National Statistical Offices (NSOs). The open-source Sentinels for Agricultural Statistics (Sen4Stat) toolbox aims at facilitating the uptake of Sentinel EO-derived information in the official processes of NSOs, from the early stages of the agricultural surveys through to the production of the statistics. It automatically ingests and processes Sentinel-1 and Sentinel-2 time series in a seamless way for operational crop mapping and yield modelling, using ground data provided by national statistical surveys. It then integrates these EO products with the survey dataset to improve the statistics. Different types of improvements are targeted by the system: (i) reduction of the amplitude of the estimates’ confidence intervals, (ii) disaggregation of the representativity level to smaller administrative units, (iii) provision of timely crop area and yield estimators, (iv) optimization of sampling designs by leveraging maps to build or update sampling master frames. The system has been tested and demonstrated in various countries around the world, thus addressing a wide diversity of both cropping systems and agricultural data collection protocols. In Spain, in-situ data come from the ESYRCE database, which is an integrated list and area frame survey, including square segments (700 m - 250 m) divided into agricultural plots. A national crop type map was generated from the Sentinel-2 dataset using a random forest algorithm.
F1 scores of the main crop type classes were most often higher than 0.8. These maps were then coupled with the ESYRCE crop data and significantly reduced the uncertainty of the crop acreage estimates. As an example, the barley acreage estimate based on the ESYRCE survey only in the Castilla y León region is 980,081 hectares, with a 95% confidence interval of +/- 56,644 hectares. When using both the survey and the EO map, the barley estimate is of the same order of magnitude (923,026 hectares) but with a significantly smaller confidence interval (+/- 23,663 hectares). The coupling with EO data also enabled the spatial disaggregation of the acreage statistics to the municipality level, which was not possible using only the ESYRCE data due to the lack of samples for obtaining accurate estimates. Similarly, we were able to demonstrate that estimating yield on a larger sample of data (i.e. with EO data) can improve the confidence in aggregate statistics by virtually increasing the number of data points collected in the survey. Finally, a map of irrigation was also produced at national scale in order to support an update of the sampling master frame by the NSO. In Senegal, the Agricultural Annual Survey (AAS) is a list frame survey, and parcels are identified by geolocated points. We worked during two successive years with the NSO to make the survey protocol more compatible with EO data, for instance by registering parcel boundaries. These adjustments, implemented over a regional extent, made it possible to generate a crop type map with good accuracy for the main crops and to derive acreage estimates with reduced error. The Sen4Stat system was also demonstrated in the Sindh province, in Pakistan, with the support of the World Bank (WB). The focus was on irrigated wheat during the winter season and on the main summer crops. The demonstration started with the design of an area sampling frame and the implementation of the survey, including the quality control of the collected data.
Seasonal crop type maps were generated and acreage estimates were computed. The same decrease in the confidence interval amplitude was observed for both seasons. FAO also supported the uptake of the tool in different countries, mainly in Africa. Depending on the country, the focus was put on the survey protocol adjustment or on the statistics estimates. All these demonstrations have confirmed the high potential of EO data for improved statistics. In all countries, the integration of EO data significantly reduced the confidence interval around the estimates. The spatial disaggregation and timeliness gained with EO data were also successfully demonstrated. The demonstrations have also highlighted the importance of having in situ data compatible with EO data. Extensive work has been done with NSOs to evaluate their protocols and test adjustments that allow the integration of EO data. Clearly, the Sen4Stat system can meet the requirements for reliable, robust and timely information needed to strengthen food security. Nevertheless, the adoption of such new technologies by NSOs or other national stakeholders requires a mid-term perspective, in order to proceed step by step. In that context, the support received from international funders such as FAO, CIMMYT, the World Bank and development banks opens new possibilities for a wide and impactful Sen4Stat uptake.
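The confidence-interval reduction reported above can be illustrated with a classical difference estimator, one standard way to combine a wall-to-wall EO map with a ground survey sample (synthetic data; Sen4Stat's actual estimators may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population of segments: true crop area per segment, plus an
# EO-map-derived estimate available for *every* segment (correlated, biased).
N = 10_000
truth = rng.gamma(shape=2.0, scale=50.0, size=N)
eo_map = 0.9 * truth + rng.normal(0.0, 15.0, size=N)  # map covers the whole frame

# The ground survey observes only a small sample of segments.
n = 200
idx = rng.choice(N, size=n, replace=False)

# Direct (survey-only) estimator of the population mean and its standard error.
direct_mean = truth[idx].mean()
direct_se = truth[idx].std(ddof=1) / np.sqrt(n)

# Difference estimator: map mean over the full frame, corrected by the sampled
# survey-minus-map differences. Its variance depends only on the differences,
# which is why a well-correlated EO map tightens the confidence interval.
diff = truth[idx] - eo_map[idx]
diff_mean = eo_map.mean() + diff.mean()
diff_se = diff.std(ddof=1) / np.sqrt(n)

print(f"direct:      {direct_mean:.1f} +/- {1.96 * direct_se:.1f}")
print(f"with EO map: {diff_mean:.1f} +/- {1.96 * diff_se:.1f}")
```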

Tuesday 24 June 14:00 - 15:30 (Hall F2)

Presentation: From JECAM site to the region – Vegetation Conditions analysis using Sentinel-2, Sentinel-1, ECOSTRESS, and Copernicus Land Monitoring Service (CLMS) data for yield prediction through AI Applications

Authors: Prof Katarzyna Dabrowska-Zielinska, Msc Konrad Wróblewski, PhD Ewa Panek-Chwastyk, PhD Maciej Bartold, PhD Sandhi
Affiliations: Institute Of Geodesy And Cartography
Satellite data will be integrated with AI systems to enhance the classification of different vegetation types, monitor crop growth stages, and forecast yields. With frequent satellite passes, farmers receive regular updates on field conditions, enabling them to optimize management practices throughout the growing season. Data integration will include meteorological data and regional variations using ERA5. Climate prediction models, based on 20 years of historical data and utilizing Random Forest techniques, will be developed. This will allow for comprehensive climate change analysis, examining changes in agricultural structures, such as variations in crop fields over time. Additionally, soil moisture dynamics will be analyzed using ECOSTRESS data and a soil moisture model developed at the Institute, integrating Sentinel-1 imagery and crop classification information. This approach demonstrates the potential of Sentinel-2, Sentinel-1, and ECOSTRESS satellite data, combined with Copernicus Land Monitoring Service (CLMS) products like the Leaf Area Index (LAI), to assess biomass variability and predict yields. Different vegetation indices derived from satellite data will be examined. The study takes environmental variables into account to accurately predict yields for different crops. The results will be validated with reference data collected in the field, including biomass measurements and harvest dates. Analysis of seasonal variation in LAI revealed significant differences in crop growth dynamics, allowing key stress periods to be identified and potential yield losses to be estimated. Predictive models, integrated with satellite data, achieved high accuracy, demonstrating the effectiveness of remote monitoring in precision agriculture. 
This study aligns with the goals of the GEOGLAM initiative by showcasing how advanced remote sensing, AI, and environmental modeling enhance global agricultural monitoring, precision farming, and resource management while addressing climate change impacts on agriculture.
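A minimal sketch of the kind of Random Forest yield model described above, using simulated seasonal LAI and weather aggregates. The feature names and the toy yield response are assumptions for illustration, not the Institute's actual model or data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Hypothetical training table: one row per field, with seasonal LAI
# statistics and ERA5-style weather aggregates (all values simulated).
n = 300
lai_peak = rng.uniform(2.0, 6.0, n)              # LAI at peak of season
lai_sum = lai_peak * rng.uniform(20, 40, n)      # cumulated seasonal LAI
rain_mm = rng.uniform(150, 500, n)               # seasonal rainfall
tmax_c = rng.uniform(24, 34, n)                  # mean daily max temperature

# Toy yield response (t/ha): grows with LAI, drops under heat stress
yield_t = 0.9 * lai_peak + 0.01 * lai_sum \
          - 0.08 * (tmax_c - 28).clip(0) + rng.normal(0, 0.3, n)

X = np.column_stack([lai_peak, lai_sum, rain_mm, tmax_c])
rf = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(rf, X, yield_t, cv=5, scoring="r2")
print("CV R2:", scores.mean().round(2))

rf.fit(X, yield_t)
for name, imp in zip(["lai_peak", "lai_sum", "rain", "tmax"],
                     rf.feature_importances_):
    print(f"{name:9s} importance {imp:.2f}")
```

The feature importances indicate which predictors the model relies on, analogous to ranking LAI-derived versus meteorological regressors in the study.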

Tuesday 24 June 14:00 - 15:30 (Hall F2)

Presentation: Estimation of Key Crop Traits from spaceborne Hyperspectral imagery with Neural Network Models: investigating the impact of ground and synthetic training datasets

Authors: Lorenzo Parigi, Gabriele Candiani, Luca Miazzo, Mirco Boschetti
Affiliations: Institute for Electromagnetic Sensing of the Environment, National Research Council, Department of Civil, Constructional and Environmental Engineering, Sapienza University of Rome, University of Milan
Nitrogen fertilisation is a crucial element in maintaining crop productivity. However, it also represents a significant source of water pollution and contributes to the formation of greenhouse gases. It is therefore essential to reduce the quantity of nitrogen distributed on agricultural land. The reduction and rationalisation of fertilisation is within the scope of sustainable agriculture, aligned with the assertion "produce more with less" included in the European Farm-to-Fork strategy. Satellite images are a valuable tool for generating spatially explicit information to assess the variability of crop status within fields. These maps can then be used to inform on crop development and nutritional status as the basis for a more rational approach to determining crop needs and hence the distribution of fertiliser. In recent years, two scientific hyperspectral satellite sensors have been launched (ASI-PRISMA and DLR-EnMAP) as precursors of the new generation of operational missions, Copernicus-CHIME and NASA-SBG, that will provide a wall-to-wall mapping solution for the assessment of crop condition and agricultural productivity. Hyperspectral data are rich in information, allowing detailed exploration of the spectral signature of the crop and enhancing the quantitative estimation of plant biophysical parameters (biopars). The high number of bands is beneficial for improving estimation; however, there are issues regarding the high collinearity among the bands and the non-linear relationship between biopars and the measured remote sensing spectra. In this framework, the objective of this work is to develop retrieval solutions able to handle such limitations and fully exploit hyperspectral spaceborne data to estimate biopars as a fundamental input to support rational and smart farming applications. In this context, we opted to utilise artificial neural networks (ANNs) for their capability to model complex scenarios and handle regressor redundancy and noise in the data.
The use of ANNs also presents certain challenges, including the need for a large and diverse training dataset, which for crop trait estimation implies the acquisition of ground data concurrent with the satellite acquisition. To overcome these limitations, the use of synthetic training data, generated by a vegetation radiative transfer model (RTM), has been proposed as a feasible solution. The objective of this study is to evaluate the performance of various dataset scenarios for ANN training. To this end, data-driven and hybrid approaches are employed to develop ANN models using real and synthetic data, respectively, testing different model architectures. The data employed in this study were pairs of spectra and biopars, classified according to the origin of the spectra as i) field (GRD), ii) satellite (SAT), and iii) synthetic (HYB) datasets. The biopars of interest are the Leaf Area Index (LAI), Canopy Chlorophyll Content (CCC) and Canopy Nitrogen Content (CNC). The field dataset comprises wheat data: spectra acquired with a hand-held spectrometer and biopars collected on the ground at different locations in Italy over a two-year period (2022-2023), for a total of 200 samples for LAI and 100 samples for CCC and CNC. The satellite dataset includes ground biopar measurements acquired at the satellite scale and collected at the time of the PRISMA overpass on different crops (multi-crop, 2020-2024); it consists of approximately 200 samples for all biopars and is used for model training (2020-2021), validation (2021-2022) and testing (2023-2024). The synthetic dataset is generated with PROSAIL-PRO using different combinations of biologically constrained input parameters to simulate 50,000 samples. The aforementioned datasets were used to train three separate ANN models, validated and tested on the PRISMA dataset. The preliminary tests yielded some interesting results.
The SAT models, trained on PRISMA, yielded satisfactory results when tested on an independent multi-crop dataset. The GRD model, trained on field spectra, demonstrated good performance on PRISMA spectra when tested on wheat (2023-2024), exhibiting a relative Root Mean Squared Error (rRMSE) of 15, 14, and 20% and a coefficient of determination (R²) of 0.75, 0.7, and 0.65 for LAI, CCC, and CNC, respectively. The HYB model, trained on synthetic spectra, yielded the most favourable outcomes on the other crops (rice and corn) when tested on PRISMA data, with an rRMSE of 12 and 12% and an R² of 0.65 and 0.85 for LAI and CCC, respectively (CNC data were not available for those crops). The favourable outcomes achieved with the GRD model indicate that it is feasible to use field spectra to train ANN models suitable for application to satellite data. However, the models may exhibit limited transferability to other crops, as evidenced by their lower performance on rice and corn. Nevertheless, these models can be valuable for single-crop prediction. Conversely, the HYB-ANN solution demonstrated robust performance, suggesting the potential for a more transferable model in a multi-crop scenario. Finally, a proof of concept of the utility of the ANN-estimated CNC maps is proposed for the generation of wheat nitrogen fertilisation maps. Actual nitrogen uptake from the CNC maps is used together with crop model scenarios based on soil properties and weather data to assess crop needs.
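The hybrid (HYB) workflow can be sketched as follows. Since PROSAIL-PRO is not reproduced here, a toy stand-in generates the synthetic spectra, so both the spectral model and the resulting accuracy figures are purely illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

def toy_rtm(lai, n_bands=50):
    """Toy stand-in for an RTM: maps LAI to a 50-band 'spectrum'.
    (A real hybrid workflow would run PROSAIL-PRO here.)"""
    bands = np.linspace(0.4, 2.4, n_bands)           # wavelengths in µm
    nir = 0.5 * (1 - np.exp(-0.6 * lai))             # NIR plateau rises with LAI
    red = 0.25 * np.exp(-0.5 * lai) + 0.02           # red trough deepens with LAI
    spec = np.where(bands < 0.7, red[:, None], nir[:, None])
    return spec + rng.normal(0, 0.005, spec.shape)   # sensor noise

# Train an ANN on a large synthetic (RTM-generated) dataset
lai_train = rng.uniform(0.1, 7.0, 5000)
X_train = toy_rtm(lai_train)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(64, 32),
                                   max_iter=500, random_state=0))
model.fit(X_train, lai_train)

# Evaluate on an independent synthetic "test overpass"
lai_test = rng.uniform(0.5, 6.5, 500)
pred = model.predict(toy_rtm(lai_test))
rrmse = 100 * np.sqrt(np.mean((pred - lai_test) ** 2)) / lai_test.mean()
print(f"rRMSE: {rrmse:.1f}%")
```

In the study, the equivalent model would be validated and tested against real PRISMA spectra and ground biopars rather than synthetic hold-out data.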

Tuesday 24 June 14:00 - 15:30 (Hall F2)

Presentation: Are Radiometric Landscapes Mirrors of Agrarian Systems?

Authors: Aurelle Sedegnan, Hervé Sossou, Simon Madec, Nestor AHOYO ADJOVI, Agnes Begue
Affiliations: Cirad, University of Montpellier, INRAB
In Benin, identifying agro-ecological zones and agricultural development poles is crucial for implementing effective agricultural policies. However, current large-scale zoning methods for agrarian systems rely on heterogeneous data sources and often involve subjective selection of socio-economic and environmental variables. These approaches face challenges in representativeness and reproducibility, limiting their utility for policy and planning. To overcome these limitations, we propose a novel approach grounded in the principle that landscapes - as a reflection of the interplay between biophysical and human factors - can serve as proxies for land use and agricultural practices in rural areas. This makes landscape zoning a viable tool for approximating agrarian system zoning. Recognizing that traditional landscape mapping relies on extensive, multi-scale data with varying degrees of accuracy, we introduce an innovative method called radiometric landscape mapping (Lemettais et al., 2024). This approach derives landscapes exclusively from remote sensing data, bypassing the need for measured variables (e.g., climate data) or interpreted products (e.g., land cover maps). It offers a statistically robust, scalable, and cost-effective solution that is applicable across different locations and scales.

Data and Methods
Radiometric landscapes were calculated using the first principal components of a series of MODIS NDVI (Normalized Difference Vegetation Index) images from 2018 to 2022. These analyses resulted in the identification of 36 homogeneous radiometric landscapes, which were subsequently classified into nine broader radiometric zones. For comparative analysis, we utilized data from the 2017 « Typologie des exploitant(e)s des sites de recherche et développement du Bénin » survey (Sossou et al., 2019), which covered 477 villages.
This survey collected data on agricultural households, focusing on socio-economic characteristics (e.g., household composition, assets) and agricultural practices (e.g., crop types, mechanization, irrigation). The analysis identified three primary farming system types: irrigated systems, mechanized systems, and intensive systems relying on chemical inputs.

Results and Implications
The comparison of the distribution of agrarian system types with the radiometric zoning revealed strong alignment in terms of land cover composition and agricultural intensification. Radiometric zones effectively discriminated between land cover types and provided a robust framework for analyzing agrarian systems.

Conclusion
These findings highlight the potential of radiometric zoning to redefine zoning frameworks for agricultural and land-use planning policies. By offering a replicable, data-driven, and scalable approach, radiometric landscapes present a promising tool for supporting sustainable agricultural development in Benin and beyond.
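A minimal sketch of the radiometric landscape idea: reduce per-pixel NDVI time series with PCA and cluster the components into homogeneous zones. Simulated NDVI stacks stand in for the 2018-2022 MODIS series, and the cluster count here is illustrative, not the 36 landscapes of the study.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)

# Simulated stack of NDVI time series: one row per pixel, one column per
# composite date. Three synthetic "landscapes" differ in seasonal
# amplitude and phase.
n_pixels, n_dates = 2000, 115
t = np.linspace(0, 2 * np.pi, n_dates)
labels = rng.integers(0, 3, n_pixels)
amp = np.array([0.15, 0.30, 0.45])[labels]
phase = np.array([0.0, 0.8, 1.6])[labels]
ndvi = 0.4 + amp[:, None] * np.sin(t[None, :] - phase[:, None]) \
       + rng.normal(0, 0.03, (n_pixels, n_dates))

# Keep the first principal components of the time series ...
pcs = PCA(n_components=4).fit_transform(ndvi)
# ... and cluster them into homogeneous radiometric landscapes
zones = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pcs)

print("pixels per zone:", np.bincount(zones))
```

In practice the clustering would be run on georeferenced image stacks, so each cluster label maps back to a contiguous (or fragmented) landscape on the ground.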

Tuesday 24 June 14:00 - 15:30 (Hall F2)

Presentation: Field-level crop yield estimation using phenometrics from LAI time-series analysis and weather data in a machine learning framework

Authors: Francesco Nutini, Federico Filipponi, Andrea Ferrarini, Michele Croci, Piero Toscano, Mirco Boschetti
Affiliations: CNR-IREA, CNR-IGAG, Università Cattolica del Sacro Cuore, CNR-IBE
Predicting crop yield is a compelling challenge due to climate change, a growing global population and fluctuations in commodity prices. Observing yield variability in its spatial and temporal dimensions is a prerequisite for understanding these phenomena and for thoughtful management of cropping systems. In this context, Earth observation (EO) programs have transformed crop monitoring because they offer a unique opportunity to monitor crops at reasonable spatial resolution and near-weekly temporal frequency, allowing analysis at the farm/field level. Indeed, remote sensing imagery has been utilized in several data-driven yield estimation approaches, leveraging techniques such as parametric regression, machine learning algorithms, and statistical models. Past scientific literature highlights some important advice for estimating yield with EO data that was followed in this work, such as the exploitation of time-series (comprehensive information on seasonal dynamics) of biophysical parameters (direct quantitative indicators of plant growth and production) rather than single dates of vegetation indices. The primary objective of the work presented here is to estimate the yield of cereals at field level by exploiting time-series of LAI and meteorological variables with a non-parametric regression approach. Side goals are 1) to identify which features and data are most important in yield estimation and 2) to demonstrate that the use of phenometrics derived from LAI time-series improves yield estimation compared with seasonal indicators derived from a fixed crop calendar. The analysis was conducted in the north of Italy on two areas of interest (AOI). The first covers the largest Italian farm (~3800 ha) in Ferrara province (north-east Italy) and the second comprises an area in Piacenza province (north-west Italy) for which field observations were available.
In the former, 223 field measurements of winter cereals (spelt, wheat, barley) were manually collected at plot level in the framework of various scientific projects at the end of 5 cropping seasons (2020-2024). This collection of high-quality field data is exploited as the calibration dataset. The test dataset is structured to assess model performance and exportability in time (Time Test, TT: same location, different season) and space-time (Spatial Test, ST: different location and season). The test set is made up of 107 field-level yields from the first (TT, 57 samples) and the second (ST, 50 samples) AOI, provided by farm archives for the 2020 to 2022 cropping seasons. Over the two AOIs, Sentinel-2 L2 data were downloaded from the THEIA archive (atmospherically corrected with MAJA) and LAI maps were computed using the biophysical processor (Weiss et al., 2016). Copernicus ERA5 datasets of daily temperature and rainfall were downloaded from Google Earth Engine's data catalogue. The aim here is to use these data to depict crop growth during the season (LAI) and abiotic stressors (meteorological data), highlighting drought conditions and areas that faced water shortage. These datasets provide the predictor variables for yield estimation, while the yield data represent the target variable. LAI time-series corresponding to the yield data (i.e. 223 plots in the calibration set and 107 fields in the test set) were exploited to compute phenological metrics (phenometrics). To do so, gap filling and interpolation of LAI were done with the R package {sen2rts} (https://ranghetti.github.io/sen2rts), while phenometrics (e.g. start and peak of season) were obtained with a method inherited from the R package {phenopix} (Filippa et al., 2016). Phenometrics were exploited to compute 40 regressors from the LAI (e.g. LAI at peak of season) and ERA5 time-series (e.g. cumulated rainfall before flowering).
Moreover, other regressors were computed over fixed periods according to the cropping calendar, in order to check the actual contribution of the dynamic information in space and time provided by spatially explicit EO-derived phenological information. After a run of a feature selection algorithm (the Boruta test) to get rid of autocorrelated features, the selected regressors were exploited in multiple random forest (RF) trials (R package {caret}). Accumulated Local Effects (ALE) plots are used to investigate how the selected features (production/growth proxies) influence yield. The best model is then compared with the 107 field averages, aiming at testing the RF on data of different origin (farm declarations rather than experimental field sampling) and AOIs. First tests conducted with RF show promising results in cross-validation (R² = 0.66, RMSE = 1.44) and indicate that the most important regressors are "LAI related": 1) LAI value at peak of season, 2) LAI cumulated from start to end of season and 3) rate of senescence. Another driving proxy is the rain cumulated between the peak and end of the cropping season. A first validation shows average results on the first (R² = 0.53, RMSE = 1.23) and second (R² = 0.44, RMSE = 0.95) AOIs. Investigation of the presence of outliers and unreliable data in the validation dataset is still to be done. The top-ranked LAI regressors are features that have long been exploited in EO time-series studies to monitor agro-ecosystems (e.g. seasonal LAI cumulate, see Prince 1991), and the ALE plots show that, as expected, the higher these proxies, the higher the yield. On the other hand, the ALE plot for the best "weather proxy" shows that the wetter the end of the cropping season, the lower the production. This can highlight unfavourable abiotic conditions (e.g. floral sterility, lodging, etc.) and potential biotic stressors (e.g. fungi) that impact plant health and grain filling processes.
These aspects should be further investigated with plant physiologists to ensure biologically sound interpretations and to select meteorological metrics a priori based on expert knowledge. Potential constraints on the approach arise in conditions where LAI behaviour does not match the obtained yield (i.e. high yield for a LAI with a low maximum value, and vice versa). These peculiar cases and their potential causes (e.g. flower sterility, plant lodging, field samples not representative at the satellite scale) will be shown and discussed. Future activities will first focus on thoroughly validating the estimation on the two AOIs and on obtaining more yield data to check RF robustness on other AOIs. Moreover, more meteorological variables from ERA5, such as temperature for heat stress and potential and actual evapotranspiration as indicators of water stress, will be included in the RF trials. Once calibrated, our aim is to apply the model to the whole Po valley (Italy) to spot which districts faced yield loss during the severe drought event that impacted the north of Italy in 2022, and hence are more likely to face the same issue in the near future.
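The phenometric extraction step might be sketched as below on a single field; the moving-average smoothing and the 20%-of-peak season thresholds are simplifying assumptions standing in for the {sen2rts}/{phenopix} methods actually used.

```python
import numpy as np

# Hypothetical single-field LAI time series at a 5-day step
doy = np.arange(0, 240, 5)
lai = 5.5 * np.exp(-((doy - 130) / 45.0) ** 2) \
      + np.random.default_rng(4).normal(0, 0.15, doy.size)
lai = np.clip(lai, 0, None)

# Simple gap-smoothing with a 5-sample moving average
kernel = np.ones(5) / 5
smooth = np.convolve(lai, kernel, mode="same")

# Peak of season: maximum of the smoothed curve
peak_idx = smooth.argmax()
peak_doy, peak_lai = doy[peak_idx], smooth[peak_idx]

# Start/end of season: first/last crossing of 20% of the peak value
thr = 0.2 * peak_lai
above = smooth >= thr
sos = doy[above.argmax()]
eos = doy[len(above) - 1 - above[::-1].argmax()]

# Seasonal cumulated LAI between SOS and EOS (rectangle rule, 5-day step)
mask = (doy >= sos) & (doy <= eos)
lai_cum = smooth[mask].sum() * 5

print(f"SOS day {sos}, peak day {peak_doy} (LAI {peak_lai:.1f}), EOS day {eos}")
print(f"seasonal cumulated LAI: {lai_cum:.0f}")
```

Quantities such as `peak_lai`, `lai_cum` and the post-peak decline rate are then the per-field regressors fed to the random forest.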

Tuesday 24 June 14:00 - 15:30 (Hall F2)

Presentation: Characterization of crop sequences in Argentina over six growing seasons using satellite-derived crop type maps

Authors: Diego De Abelleyra
Affiliations: Instituto Nacional de Tecnología Agropecuaria (INTA)
Soybean area in Argentina has increased significantly over the last decades, from 5 Mha in 1990 to 16 Mha in 2022. Nowadays, soybean is the main planted crop, followed by maize (9 Mha) and wheat (7 Mha). When soybean is planted as a single crop in a season, it produces very few residues, which are quickly degraded because of their low C/N ratio. In contrast, cereal crops generate significantly more residues, which degrade slowly. Increases in commodity prices can lead to the continuous planting of soybean because of its differential gross margin, generating risks of soil degradation and uncertainty regarding the sustainability of agricultural production. This work analyzed crop sequences over six growing seasons using the Argentina National Map of Crops (2018/2019 to 2023/2024) as base information. These maps were generated with supervised classification methods using Landsat 8 and 9 and Sentinel-2 satellite images, and in situ data obtained from on-road surveys throughout the agricultural regions of Argentina, registering georeferenced samples at different times in each season. Mapped classes included soybean, maize, winter cereals, sunflower, peanut, common bean, cotton and sugarcane. Summer crops can be planted as single crops, or as double crops with a preceding winter/spring crop like wheat, barley or sunflower. As the combination of 12 crop classes over six growing seasons resulted in nearly 60,000 sequences, several indices were used to map and describe them: i) cropping intensity, ii) proportion of early soybean and iii) proportion of cereals in the sequence. The 20 most frequent sequences represented nearly 25% of the agricultural area and included only three crops: soybean, maize and winter cereals. Two crop rotations accounted for the five most frequent sequences. A rotation of two crops, with maize and soybean as single crops per season, represented nearly 8% of the area.
A three-year rotation of i) single-crop maize, ii) single-crop soybean, and iii) a winter cereal / soybean double crop represented 5% of the area. Other relevant sequences included a higher proportion of single-crop soybean. Cropping intensity showed that nearly 36% of the agricultural area is planted with only one crop per season. This was partially observed in areas with lower precipitation, but also in areas with high precipitation in the agricultural belt. Nearly 20% of the area showed sequences with four or more single soybean crops, which can be a risk for the sustainability of production. These cases were mostly located in the agricultural belt, near ports and agro-industrial areas. The number of cereals in the sequence showed that nearly 70% of the agricultural area had three years with cereals (mainly maize or wheat) over a six-year sequence; these areas were predominantly observed in high-precipitation regions which allow the planting of double crops. Even though some areas showed undesirable proportions of early soybean in their sequences, most of the agricultural area included at least one cereal every two years. This ensures certain levels of carbon inputs that can contribute to maintaining soil health and sustaining production levels. There is a margin for improving sequences, for example by including more crops per year or reducing the frequency of early soybean. Achieving this requires considering not only environmental aspects like precipitation, but also socio-economic aspects that are equally relevant to farmers' planting decisions.
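The three sequence indices can be computed from a per-pixel label array along these lines; the class codes, the double-crop encoding and the random sequences are hypothetical, for illustration only.

```python
import numpy as np

# Toy per-pixel crop sequences over six seasons (codes are hypothetical)
SOY, MAIZE, WHEAT_SOY, WINTER_CEREAL, FALLOW = 0, 1, 2, 3, 4
CEREALS = {MAIZE, WINTER_CEREAL}
DOUBLE = {WHEAT_SOY}                       # winter cereal / soybean double crop

rng = np.random.default_rng(5)
seq = rng.integers(0, 5, size=(10000, 6))  # 10k pixels x 6 seasons

# i) cropping intensity: average number of crops planted per season
crops_per_season = np.where(np.isin(seq, list(DOUBLE)), 2,
                   np.where(seq == FALLOW, 0, 1))
intensity = crops_per_season.mean(axis=1)

# ii) proportion of single-crop soybean in the sequence
p_soy = (seq == SOY).mean(axis=1)

# iii) proportion of seasons that include a cereal
p_cereal = (np.isin(seq, list(CEREALS)) | np.isin(seq, list(DOUBLE))).mean(axis=1)

# Share of pixels with four or more single soybean crops (sustainability risk)
risk = ((seq == SOY).sum(axis=1) >= 4).mean()
print(f"share of pixels with >= 4 single-soybean seasons: {100 * risk:.1f}%")
```

Working with a handful of per-pixel indices like these sidesteps the combinatorial explosion of enumerating the ~60,000 distinct sequences directly.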

Tuesday 24 June 14:00 - 15:30 (Hall L1/L2)

Session: D.02.13 AI-based Methods for EO data compression

Earth Observation (EO) sensors are acquiring Big Data volumes at very high data rates (e.g., the Copernicus missions produce 12 TB of data per day). In particular, next generation SAR systems will offer a quantum leap in performance using large bandwidths and digital beam forming techniques in combination with multiple acquisition channels. These innovative spaceborne radar techniques have been introduced to overcome the limitations imposed by classical SAR imaging for the acquisition of wide swaths and, at the same time, of finer resolutions, and they are currently being widely applied in studies, technology developments and even mission concepts conducted at various space agencies and industries. Such significant developments in terms of system capabilities are clearly associated with the generation of large volumes of data to be gathered in a shorter time interval, which, in turn, implies harder requirements for the onboard memory and downlink capacity of the system. Similar considerations can be drawn with respect to optical sensors, such as multispectral and hyperspectral ones, which provide nowadays large amounts of images at high resolution. Therefore, the proper quantization/compression of the acquired data prior to downlink to the ground is of utmost importance, as it defines, on the one hand, the amount of onboard data and, on the other hand, it directly affects the quality of the generated EO products.

EO data show unique features posing important challenges and potentials, such as learning the data models for optimal compression to preserve data quality and to avoid artefacts hindering further analysis. For instance, based on the peculiarities of the imaged scene (e.g., in radar imaging these are characterized by the reflectivity, polarization, incidence angle, but also by the specific system architecture, which may offer opportunities for efficient data quantization; differently, multispectral data are characterized by the land cover or the presence of clouds), a more efficient data representation can be achieved by searching for the best quantizer and the ad-hoc tuning of the inner quantization parameters. Additionally, onboard preprocessing of the acquired data to a sparse domain (e.g., range compression in the case of SAR data) can also lead to a more compact data representation, which could aid small missions with limited on-board memory.

Artificial Intelligence (AI) represents one of the most promising approaches in the remote sensing community, enabling scalable exploration of big data and bringing new insights into information retrieval solutions. In the past three decades the EO data compression field progressed slowly, but the recent advances in AI are now opening the perspective of a paradigm change in data compression. AI algorithms and onboard processing could be exploited to generate/discover novel and more compact data representations, obtain an EO data quality that satisfies the cal/val requirements ensuring the consistency of the physical parameters to be extracted, and open new perspectives for on-board intelligence and joint ground-space processing, i.e., edge computing.

This session aims to bring new methodologies for both lossless and lossy compression of remote sensing data to the field. Several data compression topics are welcome in the session, including (but not limited to): data-driven and model-based compression methods, Kolmogorov complexity-based algorithms, source coding with side information, neural data compression, compression of correlated sources, integrated classification and compression, semantic coding, big data compression and application-oriented compression.

Tuesday 24 June 14:00 - 15:30 (Hall L1/L2)

Presentation: Efficient Raw Data Compression for Future SAR Systems

Authors: Dr. Michele Martone, Nicola Gollin, Student Rafael Bueno Cardoso, M. Sc. Marc Jäger, Dr. Rolf Scheiber, M. Sc. Simon Nolte, Dipl. Wirtsch.-Ing. Jamin Naghmouchi, Dr. Gerhard Krieger, Dr. Paola Rizzoli
Affiliations: German Aerospace Center (DLR), Universität zu Lübeck (UzL)
Synthetic aperture radar (SAR) is nowadays a well-established technique for a broad variety of remote sensing applications, being able to acquire high-resolution images of the Earth’s surface independently of daylight and weather conditions. In the last decades, innovative spaceborne radar techniques have been proposed to overcome the limitations which typically constrain the capability of conventional SAR to image wide swaths and, at the same time, achieve fine spatial resolutions. In addition, present and future spaceborne SAR missions are characterized by the employment of multi-static satellite architectures, large bandwidths, multiple polarizations and fine temporal sampling. This inevitably leads to the acquisition of an increasing volume of on-board data, which poses hard requirements in terms of the on-board memory and downlink capacity of the system. This paper presents an overview of the research activities in the field of SAR raw data compression which have been developed in recent years or are currently under investigation at the Microwaves and Radar Institute of the German Aerospace Center (DLR). In particular, we investigate approaches for data volume reduction in multi-channel SAR [1], [2]. These systems allow for high-resolution imaging of a wide swath, at the cost of the acquisition and downlink of a huge amount of data. Together with the intrinsic requirements related to resolution and swath width, the high data volume is due to the fact that the effective pulse repetition frequency (PRF) generated by the multiple channels is typically higher than the processed Doppler bandwidth, which introduces a certain oversampling of the raw data in azimuth. In this context, convenient data volume reduction strategies are proposed, based on Doppler-based transform coding (TC) or linear predictive coding (LPC), which aim at exploiting the correlation between subsequent azimuth samples.
We consider realistic multi-channel SAR system architectures and simulate multi-channel raw data using synthetic as well as real backscatter data from TanDEM-X. We analyze the statistical properties (such as the autocorrelation and Doppler power spectrum) exhibited by the multi-channel raw signal and discuss the impact of relevant system parameters, highlighting the potential and limitations of the proposed approaches as a trade-off between achievable data volume reduction and performance degradation. Furthermore, we address some of the above-mentioned challenges and limitations in terms of data transfer and downlink in the frame of the Horizon Europe project SOPHOS (Smart on-board processing for Earth observation systems) [3]. The main goals of SOPHOS consist in the design and implementation of enabling technology for high-end data products generated on board spacecraft via the implementation of power-efficient, high-performance space processing chains for various Low-Earth Orbit (LEO) missions. The main focus is on Synthetic Aperture Radar (SAR), and in this scenario we develop algorithms aimed at improving on-board SAR raw data compression. For this purpose, the performance-optimized block-adaptive quantization (PO-BAQ), recently developed by the authors, is proposed. PO-BAQ [4] extends the concept of the state-of-the-art block-adaptive quantizer (BAQ) and allows for jointly optimizing the resource allocation and the resulting SAR image degradation due to quantization. Since quantization errors are significantly influenced by the local distribution of the SAR intensity, such an optimization is achieved by exploiting a priori knowledge of the SAR backscatter statistics of the imaged scene. Given the severe constraints imposed by the downlink capacity, the optimized on-board data compression proposed in the SOPHOS project allows for better or customized data quality for all large-scale, global monitoring and time series applications.
Furthermore, the SAR performance-optimized quantization optimizes the overall, global product quality for a given downlink budget and, in this way, it allows for an increase of the system acquisition capability and, ultimately, for more continuous observations. In the frame of SOPHOS, efficient on-board SAR image formation is also considered: for this purpose, SAR acquisitions may need to be processed in blocks due to constraints imposed by the available computational resources, and the necessary processing steps are applied to each block of SAR raw data such that the outputs are concatenated to obtain the final image formation result. The overall SAR image formation workflow consists of a fixed sequence of processing steps, including range and azimuth compression, antenna pattern compensation and image generation, which is stored on board for later transmission to ground, also allowing for a significant reduction of the resulting data volume. In addition, we investigate the suitability and potential of SAR raw data transformations, with focus on the JPEG2000 and polar-based compression, with the goal of optimizing and potentially reducing the resulting data rate. Finally, DLR is member of the Consultative Committee for Space Data Systems (CCSDS), a multi-national forum for the development of communications and data systems standards for spaceflight. In particular, the authors currently support the Data Compression Working Group in collaboration with other research institution (including ESA, NASA, CNES) with the main objective of defining and standardizing data compression methods for SAR systems. At the Symposium we will present the latest investigations as well as an outlook on the future activities of the Working Group. [1] M. Martone, M. Villano, M. Younis, and G. Krieger, Efficient onboard quantization for multichannel SAR systems, IEEE Geoscience and Remote Sensing Letters 16 (12), pp. 1859-1863, Dec. 2019. [2] M. Martone, N. Gollin, E. Imbembo, G. 
Krieger, and P. Rizzoli, Data Volume Reduction for Multi-Channel SAR: Opportunities and Challenges, EUSAR 2024; 15th European Conference on Synthetic Aperture Radar, Munich, Germany, pp. 243-248, Apr. 2024. [3] M. Martone, N. Gollin, M. Jäger, R. Scheiber, M. Taddiken, O. Bischoff, D. Smith, O. Flordal, M. Persson, C. Bondesson, V. Kollias, N. Pogkas, S. Nolte, and J. Naghmouchi, Smart On-Board Processing for Earth Observation Systems: the SOPHOS Project. On-Board Payload Data Compression (OBPDC) Workshop, Las Palmas de Gran Canaria, Spain, Oct. 2024. [4] M. Martone, N. Gollin, P. Rizzoli and G. Krieger, Performance-Optimized Quantization for SAR and InSAR Applications, IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1-22, Jun. 2022.
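As a toy illustration of the block-adaptive quantization (BAQ) principle that PO-BAQ extends, the following sketch normalizes each raw-data block by its estimated standard deviation and quantizes it uniformly. This is not DLR's implementation; the block length, bit depth and the ±3σ clipping range are illustrative assumptions.

```python
import numpy as np

def baq_compress(raw, block_len=128, bits=4):
    """Toy block-adaptive quantizer: estimate each block's standard
    deviation, then quantize the block uniformly relative to it."""
    n_levels = 2 ** bits
    out = np.empty_like(raw, dtype=np.int8)
    sigmas = []
    for start in range(0, len(raw), block_len):
        block = raw[start:start + block_len]
        sigma = block.std() + 1e-12           # per-block magnitude statistic
        step = 6 * sigma / n_levels           # map roughly [-3s, 3s] onto the levels
        q = np.clip(np.round(block / step), -n_levels // 2, n_levels // 2 - 1)
        out[start:start + block_len] = q.astype(np.int8)
        sigmas.append(sigma)
    return out, np.array(sigmas)

def baq_decompress(codes, sigmas, block_len=128, bits=4):
    """Invert the toy quantizer using the stored per-block sigmas."""
    n_levels = 2 ** bits
    rec = np.empty(len(codes), dtype=float)
    for i, start in enumerate(range(0, len(codes), block_len)):
        step = 6 * sigmas[i] / n_levels
        rec[start:start + block_len] = codes[start:start + block_len] * step
    return rec
```

Because the step size adapts to each block's statistics, blocks with very different backscatter levels are quantized with comparable relative fidelity, which is the property the uniform-rate BAQ provides and PO-BAQ refines.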
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall L1/L2)

Presentation: Using adaptive grids for the compression of ERA5 meteorological reanalysis data

Authors: Farahnaz Khosrawi, Adrian Kolb, Lars Hoffmann, Siegfried Müller
Affiliations: Jülich Supercomputing Centre, Forschungszentrum Jülich, Institut für Geometrie und praktische Mathematik, RWTH Aachen
The continuous increase in computational power comes with an equivalent demand for storage space. However, the ability to store data has hardly increased in recent years. This makes the demand for efficient storage solutions even more pressing, especially for data such as meteorological reanalyses. The current European Centre for Medium-Range Weather Forecasts (ECMWF) ERA5 reanalysis already poses a considerable challenge for the community, but the upcoming ERA6, which will have a much higher resolution, will require significantly more storage space. An efficient way to reduce storage requirements is to apply either lossy or lossless data compression. To compress the meteorological data, we perform a multiresolution analysis using multiwavelets on a hierarchy of nested grids. Since the local differences become negligibly small in regions where the data is locally smooth, we apply hard thresholding for data compression. Thereby, we transform the data from a regular Cartesian grid to an adaptive grid that keeps a fine resolution in areas where it is necessary, but otherwise coarsens the grid. This approach results in a high compression rate while preserving the accuracy of the original data. The compression strategy has been implemented in the Lagrangian model for Massive-Parallel Trajectory Calculation (MPTRAC) and successfully applied to ERA5 data. Applications to the upcoming ERA6 data and to satellite observations are planned for the future.
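The multiresolution-plus-hard-thresholding idea can be sketched with a 1-D Haar analysis, a simplified stand-in for the multiwavelet scheme used in MPTRAC; the threshold and number of levels here are illustrative, not the values used for ERA5.

```python
import numpy as np

def haar_decompose(signal, levels):
    """1-D Haar multiresolution analysis: returns the coarse
    approximation and the per-level detail coefficients."""
    details = []
    approx = signal.astype(float)
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        details.append((even - odd) / np.sqrt(2))   # local differences
        approx = (even + odd) / np.sqrt(2)          # local means
    return approx, details

def compress(signal, levels=4, threshold=0.05):
    """Hard thresholding: drop negligible local differences, keeping
    fine resolution only where the field is locally rough."""
    approx, details = haar_decompose(signal, levels)
    kept = [np.where(np.abs(d) > threshold, d, 0.0) for d in details]
    nonzero = sum(int(np.count_nonzero(d)) for d in kept) + approx.size
    ratio = signal.size / nonzero
    return approx, kept, ratio

def reconstruct(approx, details):
    """Invert the Haar analysis from (possibly thresholded) coefficients."""
    for d in reversed(details):
        even = (approx + d) / np.sqrt(2)
        odd = (approx - d) / np.sqrt(2)
        approx = np.empty(even.size + odd.size)
        approx[0::2], approx[1::2] = even, odd
    return approx
```

On a locally smooth field, most fine-level differences fall below the threshold and are discarded, which is exactly the transition from a regular grid to an adaptive one described in the abstract.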
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall L1/L2)

Presentation: AI for Performance-Optimized Raw Data Quantization in Future SAR Systems

Authors: Nicola Gollin, Dr. Michele Martone, Max Ghiglione, Dr. Gerhard Krieger, Dr. Paola Rizzoli
Affiliations: Microwaves and Radar Institute, German Aerospace Center (DLR), Radio Frequency Payloads and Technology Division, European Space Agency (ESA)
In next-generation synthetic aperture radar (SAR) systems, performance is advancing through increased bandwidth, multiple polarizations, and more complex acquisition methods exploiting digital beamforming (DBF) and multichannel and multistatic configurations. These technologies enable high-resolution wide-swath polarimetric and interferometric acquisitions, significantly enhancing temporal sampling and data coverage. In upcoming missions like NISAR and Sentinel-1 Next-Generation, this capability introduces, as a drawback, a substantial increase in data volume that must be stored and downlinked at high speed, requiring efficient onboard data quantization methodologies. Block-Adaptive Quantization (BAQ) [1] is a state-of-the-art SAR raw data quantization method, achieving a balance between complexity, signal fidelity and resulting data volume by adapting quantization levels to raw data block statistics. The main limitation of BAQ is its use of a uniform quantization rate throughout the scene, so that performance varies with the backscatter variability of the imaged area. A further development of this method is the Flexible Dynamic BAQ (FDBAQ) [2], which is implemented in Sentinel-1 and includes an adaptive bit allocation based on the scene's signal-to-thermal-noise ratio (STNR), exploiting look-up tables (previously derived from global backscatter maps). However, the FDBAQ carries out the bitrate allocation without considering the actual performance degradation in the resulting high-level SAR products and applications. In particular, the local variability and inhomogeneities in the backscatter distribution strongly impact the resulting quantization degradation, requiring a direct link between the quantization settings and the focused SAR domain to be properly handled. 
An attempt to close this gap is represented by the Performance-Optimized BAQ (PO-BAQ) [3], which is based on the estimation of a two-dimensional, spatially variant bitrate allocation map in the SAR raw data domain, depending on the final performance requirement defined on the higher-level SAR and InSAR products. In order to estimate the local distribution of the SAR intensity and, in particular, its degree of homogeneity, the PO-BAQ exploits a priori knowledge of the SAR backscatter statistics of the imaged scene. This information allows for deriving two-dimensional bitrate maps (BRM), which must be available on board (stored or uplinked) before commanding. For these reasons, the PO-BAQ is not fully adaptive to the acquired scene, since the quantization settings are derived from prior considerations and do not directly account for the local conditions at the time of the SAR survey. In recent years, deep learning (DL) methods have shown promise for data compression. While traditionally applied to fully focused SAR images, recent efforts aim to adapt DL methods for SAR raw data compression. Nevertheless, the topic has remained largely unexplored, mainly due to the lack of spatial correlation and self-similarity among samples typically observed in the raw data domain, which complicates the task of pattern recognition. In this work, we propose a novel deep learning-based method for performing a dynamic and adaptive onboard bitrate allocation to feed a space-varying BAQ. The principle is that a direct link between the raw data and the focused domains can be achieved through a DL model, without the need for a complete SAR focusing. 
This allows for achieving a certain desired performance in the final focused SAR product thanks to a dynamic allocation of quantization bits, which only depends on the raw data characteristics and on the desired quality of the output SAR/InSAR products (e.g., in terms of signal-to-quantization noise ratio, interferometric phase error and noise equivalent sigma zero). In this contribution, different examples of SAR and InSAR target performance parameters are considered to train, validate and test a specific DL architecture on real TerraSAR-X and TanDEM-X uncompressed raw datasets (i.e., acquired without applying any quantization after digitization, and thus free of quantization noise) covering different landcover types, in order to perform efficient performance-optimized bit allocation. [1] - Kwok, R., & Johnson, W. T. (1989). Block adaptive quantization of Magellan SAR data. IEEE Transactions on Geoscience and Remote Sensing, 27(4), 375-383. [2] - Attema, E., Cafforio, C., Gottwald, M., Guccione, P., Guarnieri, A. M., Rocca, F., & Snoeij, P. (2010). Flexible dynamic block adaptive quantization for Sentinel-1 SAR missions. IEEE Geoscience and Remote Sensing Letters, 7(4), 766-770. [3] - Martone, M., Gollin, N., Rizzoli, P., & Krieger, G. (2022). Performance-optimized quantization for SAR and InSAR applications. IEEE Transactions on Geoscience and Remote Sensing, 60, 1-22.
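A minimal sketch of the space-varying bit allocation idea: a per-block bitrate map drives a block-adaptive quantizer, so more bits are spent where the quality target demands it. This is a hypothetical illustration of the mechanism, not the proposed DL model (which would predict the map from raw data); the block size, bit values and ±3σ scaling are assumptions.

```python
import numpy as np

def space_varying_baq(raw, bitrate_map, block=64):
    """Quantize 2-D raw data with per-block word lengths taken from a
    bitrate map; returns the reconstruction and the total bit cost."""
    rec = np.empty_like(raw, dtype=float)
    total_bits = 0
    for bi in range(0, raw.shape[0], block):
        for bj in range(0, raw.shape[1], block):
            tile = raw[bi:bi + block, bj:bj + block]
            bits = int(bitrate_map[bi // block, bj // block])
            levels = 2 ** bits
            sigma = tile.std() + 1e-12
            step = 6 * sigma / levels          # adapt the step to block statistics
            q = np.clip(np.round(tile / step), -levels // 2, levels // 2 - 1)
            rec[bi:bi + block, bj:bj + block] = q * step   # dequantize in place
            total_bits += bits * tile.size
    return rec, total_bits
```

Blocks assigned more bits reconstruct with lower error, making the rate/quality trade-off spatially explicit, which is the quantity the proposed network is trained to control.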
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall L1/L2)

Presentation: CHIMERA: AI-Based Lossless Data Compression Revolutionizing Efficiency and Scalability for Big Data Applications and the Space Industry

Authors: Andrea Cavallini, Marco Uccelli, Leticia Pérez Sienes, Angie Catalina Carrillo Chappe
Affiliations: Starion Group
Data compression has become a cornerstone of modern information technology, addressing the global demand for efficient storage and transmission of ever-increasing data volumes. As industries worldwide generate and rely on vast datasets, effective compression solutions are essential to overcome limitations in capacity and bandwidth. This need is particularly acute in big-data-driven sectors, where managing the scale, complexity, and accessibility of information is critical to ensuring seamless global operations and technological advancement. In the space industry, this challenge is even more pronounced. With advancements in satellite technology, higher-resolution sensors, and the increasing number of satellites in constellations, the volume of collected data has surged dramatically. Managing this data places immense pressure on storage systems and transmission bandwidth, making efficient compression systems indispensable. Moreover, the storage of this vast amount of data translates to significant economic implications, as it demands highly capable and often costly storage solutions. These demands are pushing industries across the board, from telecommunications to Earth Observation, to adopt innovative compression techniques that enable the handling of massive datasets without compromising quality or accessibility. Artificial Intelligence (AI) is now transforming this domain by introducing adaptive, learning-based methods that optimize efficiency and handle diverse data types. Unlike traditional techniques, AI can learn intricate patterns in data, often faster, enabling compression that preserves information and scales to the growing demands of modern applications. The potential of AI is particularly evident in fields like Earth observation, where satellites are transitioning from static images to continuous video, creating massive data streams. Data compression has evolved to meet the need for efficiency while maintaining quality. 
Early lossless methods, like Run-Length Encoding (RLE), ensured data could be perfectly reconstructed, making them ideal for text or scientific use. The rise of multimedia led to lossy techniques like JPEG and MP3, which traded fidelity for higher compression ratios. Despite these advances, traditional methods rely on fixed rules, limiting their adaptability to diverse and complex datasets. Furthermore, traditional compression systems often struggle with already highly optimized or compressed data. For example, compressing a JPEG image or H264/265 video file with tools like ZIP typically results in minimal, if any, size reduction. This limitation underscores the need for more sophisticated approaches that can extract further redundancies even from optimized data. More recently, algorithms like cmix and NNCP (Neural Network Compression) have demonstrated the potential of neural networks in data compression. cmix, a state-of-the-art lossless compressor, combines prediction models with large-scale neural networks to achieve remarkable compression ratios, albeit with significant computational demands. Similarly, NNCP leverages neural networks to predict data patterns, enabling high compression efficiency while retaining the lossless property. Losslessness is crucial in fields like scientific research, medical imaging, legal documentation, and space exploration, where even minimal data loss can compromise accuracy, integrity, or safety. These applications demand advanced compression techniques that preserve every bit of information while improving efficiency, ensuring data remains intact and usable for critical tasks. Building on these advancements, our approach introduces a versatile AI-driven lossless compression algorithm based on transformer networks, capable of handling diverse data types, including text, audio, and images. 
Leveraging the power of transformers, the algorithm dynamically adapts to the unique features of each data type, ensuring efficient compression while maintaining precise reconstruction. Through technical refinements in the network's architecture and optimization techniques, our approach improves both compression performance and computational efficiency. Initial testing demonstrates that the algorithm achieves compression results comparable to cmix and NNCP, while significantly outperforming both in encoding and decoding speeds. The algorithm’s performance has been evaluated using established metrics, with results demonstrating its competitiveness across all tested domains. In terms of compression ratio, the algorithm achieves performance on par with or exceeding the state of the art, showing an average improvement of approximately 3% over the best existing methods. Additionally, encoding and decoding speeds have been significantly optimized, delivering an average increase in efficiency of 60% compared to previous approaches. Testing was conducted on standard benchmarks representing a diverse range of data modalities, including text, audio, and images, evaluated across both homogeneous and heterogeneous datasets. The algorithm consistently achieved substantial reductions in data size while losslessly reconstructing the original data, affirming its reliability and precision. These results highlight its adaptability and robustness, establishing it as a versatile solution for compression across multiple domains. This technology offers significant advantages for a wide range of stakeholders, including space agencies, private satellite operators, and research institutions managing vast satellite data collections. Its adaptability makes it particularly well-suited for applications such as environmental monitoring, disaster response, and Earth observation, where robust data handling is critical. 
Additionally, the algorithm’s ability to operate on-board satellites allows for original data compression at the source, significantly reducing downlink bandwidth requirements or enabling the transmission of greater data volumes without the need to expand existing downlink capabilities. This feature optimises the utilisation of communication resources, ensuring that critical information reaches the ground more efficiently. Beyond its operational advantages, the technology directly addresses the high costs and complexities of long-term data preservation. By ensuring secure, efficient, and accessible storage of valuable datasets over extended periods, it mitigates the financial and logistical burdens associated with maintaining massive data archives. Moreover, its versatility is evident in scientific research fields such as genomics, climate modelling, and astrophysics, where managing vast datasets is essential for driving innovation and advancing knowledge. By supporting both on-board compression for real-time data optimisation and sustainable long-term archiving, this technology provides a transformative solution for data-intensive industries and research disciplines, enhancing efficiency and impact at every stage of the data lifecycle. It offers a transformative solution to managing the growing volume of data generated across industries, paving the way for more accessible and actionable information in the years ahead.
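As a point of contrast with the learned methods described above, the Run-Length Encoding mentioned in the abstract is simple enough to sketch in a few lines; it also illustrates why fixed-rule methods fail on data without literal repetition (this is an illustrative toy, not part of CHIMERA).

```python
def rle_encode(data: bytes) -> list:
    """Run-length encoding: store (byte value, run length) pairs."""
    runs = []
    for b in data:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([b, 1])       # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs) -> bytes:
    """Lossless inverse: expand each (value, length) pair."""
    return b"".join(bytes([v]) * n for v, n in runs)
```

On highly repetitive input RLE shrinks the data dramatically, but on already-compressed or high-entropy input (where adjacent bytes rarely repeat) it can even expand it, which is the adaptability gap learned compressors aim to close.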
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Hall L1/L2)

Presentation: Complex-Valued Autoencoder-Based Data Compression Scheme for SAR Raw Data

Authors: Dr. Reza Mohammadi Asiyabi, Prof. Andrei Anghel, Mihai Datcu, Dr. Adrian Focsa, Dr. Michele Martone, Dr. Paola Rizzoli, Ernesto Imbembo
Affiliations: Politehnica Bucharest, CEOSpaceTech, German Aerospace Center (DLR), European Space Agency (ESA-ESTEC)
Next-generation SAR systems will offer improved performance, using large bandwidths, digital beamforming techniques, and multiple acquisition channels. These new radar systems are designed to overcome limitations of traditional SAR imaging sensors, enabling wider coverage and better resolution, and are being widely explored by space agencies and industry for upcoming missions. Such significant developments in terms of system capabilities lead to large volumes of data to be acquired in a shorter time interval, which, in turn, implies stricter requirements for the onboard memory and downlink capacity of the system. Consequently, the proper quantization and compression of SAR raw data is of utmost importance, as it defines, on the one hand, the amount of onboard data to be transferred or stored, and, on the other hand, it directly affects the quality of the generated SAR products. These two aspects must be traded off due to the constrained acquisition capacity and onboard resources of the system. Lossy data compression techniques are employed to reduce the size of the acquired SAR raw data without sacrificing critical information. By compressing the data, the required downlink bandwidth is significantly reduced, enabling efficient transmission of SAR data from the satellite to the ground station. Moreover, data compression is essential for onboard memory management. SAR satellites have limited onboard storage capacity, and efficient data compression algorithms allow for storing larger amounts of data within the available memory. This enables longer data acquisition periods and increased mission flexibility, as SAR systems can acquire and store more data before the need for data offloading. Effective data compression techniques are essential for maximizing the utility of SAR systems. 
In this work, we present a complex-valued autoencoder-based data compression method (developed in the ESA project ARTISTE, “Artificial Intelligence for SAR Data Compression”), which introduces a new perspective for SAR data compression that goes beyond complex-valued numbers and basic SAR processing. It unlocks the huge potential of complex-valued networks for the development of neural data compression methods for raw data compression, while preserving the original properties and phase information of the SAR data. The developed method is a standalone data compression method based on the complex-valued autoencoder architecture that can replace conventional data compression techniques and provide efficient data compression for future SAR missions with an AI perspective. Additionally, with the increasing interest in onboard processing, complex-valued deep architectures (e.g., autoencoder data compression) can lay the foundation for deep learning-based onboard processing (e.g., classification and object recognition). In lossy data compression algorithms, an alternative representation of the data in another space is usually found and then quantized. Conventional data compression algorithms use a fixed transformation model and cannot be adapted to the statistics of the data. However, in neural data compression methods, a neural network is trained to transform the data into embedded features, considering the statistics and distribution of the data and providing a more adaptive transformation model, hence a lower data loss. In the proposed autoencoder network, the encoder architecture comprises several complex-valued convolutional layers followed by complex-valued Generalized Divisive Normalization (GDN) layers. The encoder represents the input image patch in the latent space as the embedded features. A quantization module is then used to quantize the embedded features into a discrete-valued representation. 
Since the derivative of the quantization function is zero almost everywhere, during training the quantization module is replaced by uniform noise to maintain the gradient for the backpropagation algorithm and allow the network to be trained. However, after training, actual quantization is used. The quantized embedded feature maps are discrete-valued and can be losslessly compressed into a bitstream using an entropy coding method such as arithmetic encoding. The resulting bitstream is the compressed data and is transferred or stored. To decompress and reconstruct the data, the arithmetic decoder (with the same entropy model) recovers the embedded feature maps from the compressed bitstream. The decoder, consisting of several complex-valued transpose convolutional and complex-valued inverse GDN layers, is then used to reconstruct the data from the embedded representation. Rate-Distortion (RD) loss is used for training the complex-valued autoencoder-based compression network. RD loss has two main terms: the rate loss and the distortion loss. The rate term in the loss function estimates the minimum number of bits required on average to store the embedded bitstream, based on the distribution of the embedded features. However, since this distribution is unknown, the rate term is estimated using the Shannon cross-entropy between the real distribution and the estimated distribution model of the symbols in the embedded features. On the other hand, the distortion term is the pairwise distortion metric between the input and the output images and is computed using the well-known Mean Square Error (MSE) measure. There is a trade-off between the rate and distortion terms, where a higher rate allows for a lower distortion, and vice versa. So, the loss function used for training the compression network is the weighted sum of these two terms. 
The weight controls the trade-off between these two losses and enables us to achieve different rates for different applications. The ability of complex-valued deep architectures to learn the complex distribution of SAR data and preserve the original properties and phase information of the SAR data is evaluated using Sentinel-1 data acquired in stripmap mode. The dataset used to train the autoencoder consists of three Sentinel-1 SAR scenes acquired over Chicago and Houston, United States (US), and Sao Paulo, Brazil, to include various landcovers (e.g., different constructed areas, agriculture, vegetation, and water bodies). To use the dataset in the deep architecture, the SAR scenes are divided into non-overlapping patches of 256×256 pixels. Different Sentinel-1 scenes are then used as test data to evaluate the performance of the trained network (one scene acquired over the island of Fogo, Cape Verde, and one over Amsterdam, The Netherlands). It is worth mentioning that the hardware implementation feasibility and efficiency of the developed complex-valued autoencoder-based data compression method is also evaluated within the ARTISTE project. The unavailability of uncompressed SAR raw data is a limiting factor for the development of novel data compression techniques. For instance, available raw SAR data from Sentinel-1 (Level-0 products) are already FDBAQ compressed, and the decoded raw data has non-uniform quantization. As a result, we define a procedure [1] to add quantization noise to the decoded raw data in order to obtain uniformly quantized raw data that resembles the statistics of the uncompressed raw data onboard the SAR missions. In this way, we obtain raw data samples on the right number of bits, and with similar statistics to the uncompressed raw data. The developed method is compared and benchmarked against the well-accepted BAQ as well as the JPEG2000 compression standards. 
A few studies have utilized the JPEG2000 algorithm for detected SAR data compression, and JPEG2000 has also been used as one of the baseline methods for comparison in many studies focused on detected SAR data compression. Within the ARTISTE project, we extended the application of the JPEG2000 compression method into the realm of complex-valued SAR raw data and employed JPEG2000 for SAR raw data compression (applied separately to the real and imaginary components (i.e., I and Q) of the SAR raw data) [2]. The reason JPEG2000 also works for raw data is related to the presence of the low-pass filters from the wavelet decomposition in the JPEG2000 processing chain. In baseband, the instantaneous frequency of a range chirp varies linearly and reaches zero frequency in the middle of the chirp (the azimuth chirps have a similar behavior, but the zero-frequency point may be slightly shifted due to the Doppler centroid). Each chirp signal (corresponding to a target) has a relatively slow variation around the zero-frequency point. This slowly varying region can respond quite well to a “partial” matched filter consisting of a boxcar window (a low-pass filter), which can be easily implemented with a sliding window summation. The resulting 2D signal is a badly focused (low-resolution) SAR image that exploits only a narrow range/azimuth bandwidth around the zero-frequency points of the chirps. Hence, the low-pass filters of the JPEG2000 algorithm generate badly focused images that have some degree of spatial correlation, which can be exploited for compression. At the Living Planet Symposium, we aim to present the architecture of the complex-valued autoencoder, the procedure used to generate uniformly quantized data starting from the Sentinel-1 FDBAQ compressed and decoded data, and the obtained data compression results in comparison with the BAQ and JPEG2000 compression methods. 
This study is under review for publication in the IEEE Journal of Selected Topics in Signal Processing (J-STSP). [1] R. M. Asiyabi et al., "Adaptation of Decoded Sentinel-1 SAR Raw Data for the Assessment of Novel Data Compression Methods," IGARSS 2024, Athens, Greece, 2024, pp. 2541-2545. [2] R. M. Asiyabi et al., "On the use of JPEG2000 for SAR raw data compression," EUSAR 2024; 15th European Conference on Synthetic Aperture Radar, Munich, Germany, 2024, pp. 249-253.
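The rate-distortion objective described in the abstract can be sketched as follows. This is a real-valued, NumPy-only stand-in for the complex-valued network's loss: the uniform-noise quantization proxy, the entropy-based rate term and the MSE distortion term follow the abstract's description, while the histogram binning, symbol range and weighting are illustrative assumptions.

```python
import numpy as np

def rd_loss(latent, recon, target, lam=0.01, bins=np.arange(-8.5, 9.5)):
    """Rate-distortion objective: estimated bits for the (noisily)
    quantized latent plus a lambda-weighted MSE distortion term."""
    # Training-time proxy for hard rounding: additive uniform noise,
    # which keeps the objective differentiable in a real framework.
    noisy = latent + np.random.uniform(-0.5, 0.5, latent.shape)
    # Crude rate estimate: empirical entropy of the rounded symbols.
    hist, _ = np.histogram(np.round(noisy), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    rate = -np.sum(p * np.log2(p)) * latent.size   # bits for the bitstream
    distortion = np.mean((recon - target) ** 2)    # MSE term
    return rate + lam * distortion * target.size
```

Increasing `lam` penalizes distortion more heavily, pushing the (hypothetical) optimizer toward higher-rate, higher-fidelity operating points, which is exactly the weight-controlled trade-off the abstract describes.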
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.61/1.62)

Session: F.04.20 EO in support of the regulation on Deforestation-free products (EUDR, EU 2023/1115) - PART 1.

Faced with mounting global environmental concerns and the urgency of addressing climate change, the EU has introduced the ground-breaking regulation on Deforestation-free products (EUDR, EU 2023/1115) targeting global deforestation. The EUDR ensures that seven key commodities – cattle, cocoa, coffee, palm oil, soy, timber, and rubber – and their derived products like beef, furniture, and chocolate, entering the EU market from January 2026 onwards, are not linked to deforestation after a defined cut-off date (December 2020).
The regulation obliges operators to establish robust due diligence systems that guarantee deforestation-free and legal sourcing throughout their supply chains to achieve this goal. Verifying compliance with these standards is crucial. The EUDR mandates using the EGNOS/Galileo satellite systems and exploiting the Copernicus Earth Observation (EO) program for this purpose. This involves, among others, cross-referencing the geographic locations of origin for these commodities and products with data from satellite deforestation monitoring.
By providing precise and detailed information on deforestation linked to commodity expansion, Copernicus and other EO data/products will help to detect fraud and strengthen the implementation of the policy by diverse stakeholders.
This session will delve into the latest scientific advancements in using EO data to support due diligence efforts under the regulation, including global forest and commodities mapping.
Topics of interest include (but are not limited to):

- Classification methods for commodities mapping using EO data;
- World forest cover and land use mapping with EO data;
- Deforestation and GHG/carbon impacts related to commodity expansion;
- Field data collection strategies for EUDR due diligence;
- Practical examples of EO integration in global case studies;
- Machine learning / AI for deforestation detection and change analysis;
- EUDR compliance strategies: Integrating EO data with other datasets;
- Traceability in the Supply Chain: EO Data for Transparency.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.61/1.62)

Presentation: Mapping Coffee Farms in Colombia: How Does Agroforestry Design affect RS-Based coffee Detection?

Authors: Yaqing Gou, Yudhi Tanasa, Claudia Paris, Xi Zhu, Mila Luleva
Affiliations: Rabobank, ITC, University of Twente
Application of remote sensing to the monitoring of agricultural fields represents an important advance in the improvement of agricultural management and productivity. Recent demand for remote sensing technology to support companies' due diligence processes and meet compliance requirements has posed new requirements on the reliability of remote sensing data and the associated risks due to uncertainty. Environmental and sustainability regulations such as the EUDR have focused heavily on the monitoring of permanent crops, including coffee and cocoa. Increasing effort has been put into remote-sensing-based coffee monitoring, with successful examples utilising emerging technologies such as deep learning and data fusion. Accurate mapping of coffee farms remains challenging due to the complex structural characteristics (e.g. tree height, density) related to shaded agriculture systems. How coffee crops are cultivated alongside shade trees is heavily influenced by the agroforestry design. In this study, we explore whether we can map coffee farms with varying ratios of shade trees in Colombia using medium- to high-resolution optical and SAR data. First, we developed a deep learning model using the spectral bands, vegetation indices and texture information derived from PlanetScope and Sentinel-2 imagery, backscatter from the Sentinel-1 and ALOS PALSAR radar sensors, and tree height information from the recently released tree height product from Meta. The training data are derived from lidar and orthophotos. The model is validated using in-situ data collected in 2021, including information on the tree species and the number of trees per species. We reclassified the in-situ data by the ratio of shade trees into 4 groups: 0-25%, 25%-50%, 50%-75%, and 75%-100%. The model’s accuracy is validated for each group. 
The results indicate that vegetation indices, including NDVI, EVI, LSWI, NDRE, and MSI, together with textural information, are important features for coffee classification. The model’s accuracy dropped from 80% to 56% as the ratio of shade trees increased.
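The vegetation indices used as classification features follow standard formulas; two of them can be sketched as below (the band arguments and the MODIS EVI coefficients are the conventional choices, not specifics taken from this study).

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red + 1e-9)   # small epsilon avoids division by zero

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    """Enhanced Vegetation Index with the standard MODIS coefficients;
    the blue band corrects for aerosol influence."""
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)
```

Both accept scalars or NumPy arrays, so they can be applied band-wise to whole Sentinel-2 or PlanetScope scenes to build per-pixel feature layers.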
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.61/1.62)

Presentation: Starling

Authors: Product Marketing Manager Sustainability Montaine Foch
Affiliations: Airbus
Born in 2016 from a collaborative venture between Airbus and Earthworm Foundation, Starling is a geospatial solution designed to measure the environmental impact across entire supply chains, aiding in the delivery of deforestation-free and net-zero commitments. Starling uses various sources of satellite imagery, such as Sentinel data and Airbus’ own constellation including Pléiades Neo, to monitor vegetation cover down to 30 cm resolution. With an unrivalled level of detail, Starling’s basemap includes a reference layer that differentiates natural forest from forest plantation, planted forest and agroforestry, enabling deforestation to be detected accurately. With Starling, companies rely on accurate and actionable data to engage their suppliers, mitigate risks and verify commitments. Starling delivers easy-to-use intelligence through an intuitive digital platform and tailored reports. Starling also supports companies in complying with the European Union Deforestation-free Regulation (EUDR) by monitoring suppliers’ activities to help achieve no-deforestation commitments worldwide. We provide this service for companies involved in industries such as palm oil, coffee, cocoa, rubber, soy, timber, as well as pulp and paper. Furthermore, our partner Earthworm Foundation can provide accurate analysis of Starling data through their in-house team of experts and local worldwide field staff in key producing areas. Leveraging data exported from the Starling platform, Earthworm Foundation provides companies with tools to evaluate their zero-deforestation commitments. Action and progress reports can effectively communicate their strategy to internal or external stakeholders. Starling’s 20+ years of time series data on land use change provides the ideal primary data source for calculating carbon footprints. These time series are used by third-party carbon specialists to model carbon stocks as of a specific date, along with carbon sinks and sources over a designated period. 
Whether or not companies are already working with a GHG consultant, Starling can assist in establishing the most accurate and scientifically approved approach to calculating carbon footprints, adhering to industry standards such as SBTi and the GHG Protocol. In parallel, leveraging our high and very-high-resolution satellite imagery, Starling automatically generates key analytics on land-cover evolution and helps monitor the progress of your forest-positive projects. Airbus’ satellite constellation, including Pléiades Neo with its best-in-market 30 cm resolution, ensures exceptional accuracy of information. Our imagery and technology provide a global view of your projects, at a resolution that is ideal for tree counting and tree detection. We collaborate with third-party carbon experts to offer tailor-made solutions that meet your specific needs. Together with Earthworm Foundation, Airbus combines over 10 years of experience in monitoring deforestation worldwide. Our team, located across multiple countries, is dedicated to offering unmatched support to help customers meet their no-deforestation and net-zero commitments. Leveraging our expertise and technology, we provide a reliable, unbiased and global solution.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.61/1.62)

Presentation: Natural Forests of the World: A 2020 Baseline for Deforestation Monitoring and EUDR Compliance

Authors: Maxim Neumann, Anton Raichuk, Yuchang Jiang, Melanie Rey, Petra Poklukar, Keith Anderson, Charlotte Stanton, Dan Morris, Drew Purves, Katelyn Tarrio, Nick Clinton, Radost Stanimirova, Michelle Sims, Sarah Carter, Dr. Liz Goldman
Affiliations: Google DeepMind, University of Zurich, Google Research, Google Geo, World Resources Institute (WRI)
Effective conservation strategies and efforts to mitigate climate change require accurate and comprehensive understanding of global forest cover. This study presents a novel methodology for mapping the extent of natural forests in 2020 at 10 m resolution. Natural forests encompass both primary forests (those that have remained undisturbed by human activity) and secondary forests (those that have regenerated naturally following disturbance). The methodology employs a state-of-the-art deep learning model based on a multi-modal, multi-temporal vision transformer. This innovative approach leverages multiple remote sensing data sources and captures short- and long-term temporal dynamics to provide a nuanced representation of natural forest cover, characterizing both the probability of the area being natural forest and the model's intrinsic uncertainty about its predictions. The resulting global natural forest map for the year 2020, developed to be in alignment with the European Union's Deforestation Regulation (EUDR) and following forest definitions from FAO FRA 2020, provides a critical baseline for monitoring deforestation activities and informing conservation initiatives. This baseline map enables the tracking of changes in forest cover over time, facilitating the identification of areas experiencing deforestation or degradation. Such information is essential for targeting conservation efforts, enforcing regulations, and promoting sustainable land-use practices. Our approach harmonizes ~30 label data sources by training a model for pattern matching of spectral, temporal and spatial/texture signatures of the natural forests. After defining the target label maps, we create the training and evaluation datasets including short-term (seasonal variations) and long-term (multi-year) time series information from Sentinel and Landsat satellites and auxiliary layers (climate, topography). 
When evaluating on a hold-out dataset of validation examples that is geographically separate from the training data, we obtain a global F1-score of 85.2% (precision: 83.8%, recall: 86.7%). When evaluating on a completely separate Global Forest Management (GFM) validation dataset from 2015, which was not seen during training, we obtained an F1-score of 79.3% (precision: 80%, recall: 78.5%) after class reprojection. To further support users, a layer of model uncertainty is provided alongside the estimated probabilities of natural forest. This uncertainty layer acknowledges the inherent limitations of any modelling approach and encourages a cautious interpretation of the results. By explicitly quantifying uncertainty, the study promotes transparency and helps decision-makers assess the level of confidence associated with the mapped forest areas. To date, few global forest and forest-type maps are suitable for EUDR purposes. Many existing products rely on combining diverse data sources into a single global layer, which results in varying levels of quality and spatial inconsistencies. We suggest using an AI model that, when combined with forest mapping from other sources, can lead to higher-quality and more consistent results (consistent definition, temporal coverage, etc.). Our approach supports the implementation of the EUDR by providing a baseline from which deforestation and degradation can be identified. It can also support other voluntary commitments, conservation initiatives, and efforts to protect and restore our most valuable forest ecosystems. At the symposium we will present the methodology, the generated product, its evaluation, and insights gained from this "Natural forests of the world" map.
References:
1. European Union, Regulation of the European Parliament and of the Council on the making available on the Union market and the export from the Union of certain commodities and products associated with deforestation and forest degradation, repealing Regulation (EU) No 995/2010. https://data.consilium.europa.eu/doc/document/PE-82-2022-INIT/en
2. FAO: Global Forest Resources Assessment (FRA 2020), Terms and Definitions. https://openknowledge.fao.org/server/api/core/bitstreams/531a9e1b-596d-4b07-b9fd-3103fb4d0e72/content
3. Bourgoin, Clement; Verhegghen, Astrid; Degreve, Lucas; Ameztoy, Iban; Carboni, Silvia; Colditz, Rene; Achard, Frederic (2024): Global map of forest cover 2020 - version 2. European Commission, Joint Research Centre (JRC) [Dataset]. PID: http://data.europa.eu/89h/e554d6fb-6340-45d5-9309-332337e5bc26
4. Hunka, Neha, Laura Duncanson, John Armston, Ralph Dubayah, Sean P. Healey, Maurizio Santoro, Paul May, et al. 2024. "Intergovernmental Panel on Climate Change (IPCC) Tier 1 Forest Biomass Estimates from Earth Observation." Scientific Data 11 (1): 1127.
5. Mazur, Elise, Michelle Sims, Elizabeth Goldman, Martina Schneider, Marco Daldoss Pirri, Craig R. Beatty, Fred Stolle, and Martha Stevenson. n.d. "SBTN Natural Lands Map - Technical Documentation." https://sciencebasedtargetsnetwork.org/wp-content/uploads/2024/09/Technical-Guidance-2024-Step3-Land-v1-Natural-Lands-Map.pdf
6. Lesiv, Myroslava, Dmitry Schepaschenko, Marcel Buchhorn, Linda See, Martina Dürauer, Ivelina Georgieva, Martin Jung, et al. 2022. "Global Forest Management Data for 2015 at a 100 m Resolution." Scientific Data 9 (1): 199.
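As a quick sanity check, the reported F1-scores follow directly from the quoted precision and recall, since F1 is their harmonic mean. A minimal sketch using the figures quoted in the abstract:

```python
# F1 is the harmonic mean of precision and recall.
def f1_score(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

# Hold-out evaluation figures quoted in the abstract (as fractions):
f1_holdout = f1_score(0.838, 0.867)  # reproduces the reported 85.2%
# GFM 2015 evaluation figures; the inputs are themselves rounded,
# so the result lands close to the reported 79.3%:
f1_gfm = f1_score(0.800, 0.785)
print(round(f1_holdout, 3), round(f1_gfm, 3))
```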
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.61/1.62)

Presentation: Are Freely Accessible Global Forest Maps Suitable as Reference Tools for EUDR Compliance in Deforestation Monitoring?

Authors: Juliana Freitas Beyer, Dr. Margret Köthke, Dr. Melvin
Affiliations: Waldwirtschaft - Thünen-Institut
Forest ecosystems provide a wide range of services that are crucial for maintaining life on Earth. Regulatory efforts that may contribute to reducing deforestation and forest degradation, with the ultimate goal of eradicating them globally, are therefore essential for preserving the ecological balance of the planet. A recent measure from the European Union (EU), the regulation on deforestation-free supply chains (EUDR, Regulation (EU) 2023/1115), is one of the latest policy attempts to decrease deforestation and forest degradation. The regulation aims to prevent unsustainably produced commodities (palm oil, soy, cocoa, coffee, rubber, cattle, wood and their derivatives) from entering the EU market if they were produced after December 31, 2020 (the "cut-off date"). To that end, EU-based companies can only import, export, or distribute EUDR-regulated products if they provide a due diligence statement confirming the products are deforestation-free, free from forest degradation, and compliant with national laws. To verify the statements submitted by operators, competent authorities may choose to conduct individual analyses of forest conditions using Earth observation approaches, such as assessing time series of satellite images and global forest maps. However, verifying and proving deforestation-free production using global map products are data-driven decisions that are subject to technical limitations. Notably, the EUDR does not stipulate any specific map as a binding regulatory decision mechanism. This suggests that multiple global, regional or local forest maps may be used to support the verification of deforestation-free production. For this reason, we review publicly available global Forest/Non-Forest (FNF) and Land Use/Land Cover (LULC) maps and their capability to match the EUDR requirements in terms of mapping traits.
Although similar studies exist, they do not follow a systematic approach to the use of FNF and LULC reference maps that aligns with the EUDR framework. Our objectives are to identify, collect, describe and evaluate publicly available global FNF and LULC reference layers on their capability to match the EUDR requirements, based on two groups of indicators: EUDR parameters (temporal proximity, spatial detail, forest cover definition) and technical parameters (reported accuracy metrics). To achieve these objectives, we first compile a comprehensive list of publicly available global FNF and LULC datasets and gather specific information for each dataset (steps 1 and 2). Following that, we assess the suitability of these datasets as potential EUDR reference maps based on the EUDR parameters. This serves as the initial filtering stage, identifying datasets that do not meet most EUDR requirements ("filtered datasets I"). Next, we analyze the filtered datasets I against their reported accuracy values and further refine them based on additional criteria, resulting in a set of "shortlisted" datasets (filtered datasets II). We finalize our assessment by comparing the mapped forest areas from the shortlisted datasets against the forest estimates for 2020 reported by the Food and Agriculture Organization of the United Nations (FAO). We identify 21 global datasets, 11 FNF and 10 LULC, spanning 1992 to 2024 and with spatial resolutions ranging from 1 to 300 meters; of these, 6 are considered not suitable as reference maps for deforestation detection in the EUDR framework, based on temporal proximity to the cut-off date and adequate spatial resolution (EUDR parameters). Regarding the accuracy metrics, for most datasets the ratio of producer's accuracy to user's accuracy (PA/UA) is above one.
A PA/UA ratio above one suggests a tendency to misclassify non-forest areas as forest cover (false positives), which can lead to compliant production areas being flagged as non-compliant. Shortlisted datasets generally overestimate forest areas compared to FAO reports, especially in Central America, the Caribbean, North America, and Europe, with less variability observed in South America. The results underscore the capability of global FNF and LULC datasets to function as deforestation verification tools within the EUDR context, based on different indicators and mapped forest area. The study also highlights the expected limitations of the datasets based on the chosen indicators. It suggests that using multiple datasets at different scales is a better approach, as global datasets may fail to represent national definitions of forests or fragmented and heterogeneous landscapes, leading to false accusations of deforestation or improper detection of existing deforestation. No single dataset is flawless; however, certain maps are better suited to specific EUDR applications than others.
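The PA/UA diagnostic used in the assessment can be computed directly from a binary confusion matrix: producer's accuracy is the recall of the forest class and user's accuracy its precision. A minimal sketch with hypothetical counts (not from any dataset in the review):

```python
# PA/UA ratio for the forest class of a binary forest map.
# tp: forest correctly mapped; fp: non-forest mapped as forest
# (commission error); fn: forest missed (omission error).
def pa_ua_ratio(tp: int, fp: int, fn: int) -> float:
    pa = tp / (tp + fn)  # producer's accuracy (recall, omission side)
    ua = tp / (tp + fp)  # user's accuracy (precision, commission side)
    return pa / ua

# Illustrative counts only: more commission than omission errors,
# so the ratio exceeds one and the map overestimates forest.
print(pa_ua_ratio(tp=900, fp=150, fn=100))
```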
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.61/1.62)

Presentation: Operational EO-based commodity crop mapping to support land-use regulation: the case of soy

Authors: Xiao-Peng Song, Matthew Hansen, Bernard Adusei, Jeffrey Pickering, Andre Lima, Peter Potapov, Yu Xin, Laixiang Sun, Stephen Stehman, Marcos Adami, Carlos Di Bella
Affiliations: University of Maryland, The State University of New York College of Environmental Science and Forestry, INPE, University of Buenos Aires
Commodity crop expansion is the main driver of tropical deforestation, which is a major cause of global climate change and biodiversity loss. Removing deforestation from agricultural supply chains has been gaining momentum in academia, non-governmental organizations, and the private sector. The Amazon Soy Moratorium in Brazil and the Palm Oil Moratorium in Indonesia have proved effective in reducing deforestation. Recently, the European Union (EU) introduced the EU Regulation on Deforestation-free products (EUDR), with a wider scope and coverage. Soy is one of the seven listed EUDR commodities and a key driver of deforestation in South America. Satellite missions such as Landsat and Sentinel-2 provide consistent Earth observations (EO) over the globe. However, the capability to convert EO data into high-quality crop maps over large areas and over time has been lacking. We have developed an end-to-end workflow for national-to-continental-scale commodity crop mapping using satellite data and statistical field surveys. Major components of the workflow include satellite Analysis Ready Data (ARD) generation, machine-learning-based crop classification, statistical sampling, in situ crop surveys, crop area estimation and crop map validation. The method generates internally consistent crop maps with validated accuracy and crop area estimates with known uncertainty. Using soy as an example, we illustrate the production of soy maps at 30 m and 10 m resolutions in an annual operational mode with > 95% overall accuracy. The maps over South America can be viewed and downloaded at: https://glad.earthengine.app/view/south-america-soybean. We use the soy maps to quantify and analyze crop expansion dynamics and associated natural vegetation loss at biome and finer scales.
In South America, soy area has been consistently growing in all major biomes including the Brazilian Amazon, Atlantic Forests, Cerrado, Chaco, Chiquitania and Pantanal. Soy-driven deforestation is concentrated at the active frontiers, nearly half located in the Brazilian Cerrado. Soy area will continue to grow due to persistent global demand. We integrate satellite-based land-use change maps with economic modeling to evaluate the effectiveness of supply chain policies. Our analysis suggests that existing forest conservation policies effectively limited soy-induced deforestation in the Brazilian Amazon.
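The abstract mentions crop area estimates with known uncertainty derived from statistical sampling. One standard way to obtain such estimates is design-based stratified estimation, in the spirit of good-practice protocols such as Olofsson et al. (2014); the sketch below uses hypothetical numbers and is not the study's actual design:

```python
import math

# Design-based crop area estimation from a stratified reference sample.
# Each stratum: (stratum_area_ha, n_sampled_points, n_soy_in_sample).
def stratified_area(strata):
    est, var = 0.0, 0.0
    for area, n, k in strata:
        p = k / n                                    # sample proportion of soy
        est += area * p                              # stratum soy-area contribution
        var += (area ** 2) * p * (1 - p) / (n - 1)   # variance contribution
    return est, math.sqrt(var)                       # estimate and standard error

# Hypothetical two-stratum design: a soy-dense and a soy-sparse stratum.
area, se = stratified_area([(1_000_000, 200, 150), (4_000_000, 300, 30)])
print(f"{area:.0f} ha +/- {1.96 * se:.0f} ha (95% CI)")
```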
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.61/1.62)

Presentation: User Requirements From EU Member State Authorities for Verification of Due Diligence for EUDR

Authors: Dr. Sharon Gomez, Katja Berger, Christophe Sannier, Fabian Enßle, Martin Herold
Affiliations: GAF AG, GFZ Helmholtz Centre for Geosciences
As part of the European Green Deal strategy to reduce emissions, the European Union (EU) Deforestation-free Supply Chain Regulation (EUDR) came into force in June 2023. The new Regulation (EU) 2023/1115 requires companies exporting the commodities cattle, cocoa, coffee, oil palm, rubber, soya and wood to the EU to ensure that they are produced in areas that are deforestation-free from December 2020 onwards and, in the case of wood, free of forest degradation. The 'operators' or 'traders' have the responsibility to submit due diligence declarations to show compliance with the wide-reaching provisions of the Regulation. The Member States (MS) have designated Competent National Authorities (CNAs), who will have to verify these submissions. The ESA World AgroCommodities (WAC) project, initiated in September 2024, has the objective of supporting the MS with Earth observation (EO) based tools that can be used for the verification process. In this context, the Consortium engaged with several CNAs and undertook guided interviews to better understand both the legal and technical requirements for implementation of the Regulation. The feedback was compiled and presented to them in a Living Lab workshop to ensure completeness. The main topics on which the Consortium requested feedback were: overall requirements for EO tools; which commodities and related countries are priorities for developing and testing the use of EO tools; which data and methods are needed for plausibility checks on the geolocation and area of the reported commodity; which data and methods are needed for verifying whether deforestation has occurred; what update frequency is needed (yearly, monthly, near real time); and what would be the optimal spatial resolution for deforestation and agricultural mapping. The outcome of the user requirements shows an overall need for a two-step verification process.
First, an EO-based system should be developed to rapidly identify high-risk areas/polygons. Second, there is a requirement for tools to support more detailed inspection-level work. Additionally, there was a request to monitor land-use changes, especially deforestation, before and after the cut-off date of December 31, 2020. The system should also assess potential changes in forest cover. The tools should all be user-friendly, with the potential to be integrated into the EU information system TRACES. Regarding forest degradation, it is crucial to detect the conversion of primary forests into plantations. Additionally, identifying signs of degradation, such as selective logging and clear-cutting, and differentiating between primary and secondary forest types is highly relevant. The time lag between wood harvest and regrowth is also important to capture. CNA groups require regular updates and timely access to data. Efficient processing of large volumes of EO data is also essential. The preferred spatial resolution is between 4 and 10 meters, with higher resolutions for detailed inspections. The desired temporal resolution is mainly quarterly, in some cases monthly, but also includes yearly updates. Another main outcome of the guided interviews was the compilation of priority countries and related commodities for the selection of test and demonstration sites. These will be presented as part of this paper. The development of the EO-based monitoring system in the project will follow an iterative approach, closely adhering to agile principles and involving CNAs at different development stages. This collaborative process will ensure continuous refinement and alignment with user requirements and the underlying policy framework, ultimately delivering an optimal EO-based solution. Before the technical approach is finalized, a benchmarking of different state-of-the-art EO solutions will be undertaken.
This will require assessing how the methods can meet the main criteria noted by the CNAs. Results of the benchmarking exercise will then lead to the selection of technical approaches to be demonstrated and validated. The collection of these user requirements from key CNAs in the EU provides one of the first consultative efforts to specifically obtain the geospatial needs of the agencies who will ultimately be responsible for ensuring compliance with the EUDR.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.14)

Session: F.02.01 Harnessing the Power of Remote Sensing for Research and Development in Africa

Harnessing the power of remote sensing technology is instrumental in driving research and development initiatives across Africa. Remote sensing, particularly satellite imagery and big data analytics, provides a wealth of information crucial for understanding various aspects of the continent's environment, agriculture, and natural resources. This data-driven approach facilitates evidence-based decision-making in agriculture, land management, and resource conservation. Overall, remote sensing serves as a powerful tool for advancing research and development efforts in Africa, contributing to sustainable growth, environmental stewardship, and improved livelihoods across the continent. In this session, we aim to promote various initiatives fostering collaboration between African colleagues and those from other continents.

Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.14)

Presentation: User-Integrated National Scale Drought Modelling Framework in Kenya

Authors: Maximilian Schwarz, Gohar Ghazaryan, Dr. Tobias Landmann, Waswa Rose Malot, Tom Dienya
Affiliations: Remote Sensing Solutions Gmbh, Leibniz Centre for Agricultural Landscape Research (ZALF), icipe, Regional Centre for Mapping of Resources for Development, Ministry of Agriculture of Kenya
Accurate monitoring of agricultural systems is indispensable for humanity and sustainability, as it is key to addressing several Sustainable Development Goals, including those concerning hunger, biodiversity loss, and climate change. To close the yield gap, it is critical to understand how climate extremes and field management impact yield in a spatially explicit and scalable way. The growing volume of freely available Earth observation (EO) data offers opportunities to monitor intra-seasonal changes in abiotic stressors in croplands accurately and frequently by tracking subtle changes in time series. According to the Intergovernmental Panel on Climate Change (IPCC), drought is set to increase globally in frequency and severity due to climate change. Both drought frequency and severity increased notably in previous decades, while drought risk is amplified by numerous factors such as population growth and fragmented governance of water and resource management. Monitoring drought hazard and impact is highly critical due to the widespread effects of drought on various sectors of the agroecological system. One of the most vulnerable sectors impacted by drought is agriculture: droughts are a significant threat because they can reduce agricultural production and thereby undermine food security. For efficient drought management, including understanding which agricultural sectors are most affected, comprehensive drought characterization and monitoring are essential. Continuous drought monitoring is especially essential in vulnerable areas, where poverty, food insecurity, and income inequality can compound adverse conditions. In this context, Africa is among the regions where climate change has the largest impact on locations and communities globally.
Within the scope of the ADM-Kenya project (Integrated use of multi-source remote sensing data for national-scale agricultural drought monitoring in Kenya), Kenya was chosen as a representative country in which to implement and integrate drought monitoring activities. Food security and the economy in Kenya rely heavily on agricultural output, while the country has been struck by widespread drought in recent years. Kenya comprises ecological biomes ranging from humid to arid climatic conditions, making it a challenging but also exemplary country in the African context for national-scale drought monitoring. Despite recent advances in drought risk and impact modelling, there is still a lack of coherent and explicit information on drought hazard, vulnerability, and risk across larger areas. Producing spatially explicit information on drought hazard, vulnerability and risk thus faces multiple challenges: global models do not allow the characterization of regional drought events due to their low spatial resolution, while local and regional models are often not transferable to other countries or regions. To address this gap, we developed a spatially explicit drought hazard, vulnerability, and risk modelling framework for agricultural land and grass- and shrubland areas based on rainfall and vegetation index anomalies. The original model was initially developed in the USA, Zimbabwe and South Africa. While it was based solely on MODIS (Moderate Resolution Imaging Spectroradiometer) data, the model was further developed during the ADM-Kenya project, incorporating Sentinel-3 data to ensure the future sustainability of the framework and to provide reliable and accurate results moving forward. Through this extension, the modelling framework is now one of the first to take advantage of a 20+ year time series of EO data combining MODIS and Sentinel-3.
The newly developed drought modelling framework is based on TAMSAT rainfall data (SPI3, the 3-monthly Standardized Precipitation Index), MODIS and Sentinel-3 data (NDII, the Normalized Difference Infrared Index; NDVI, the Normalized Difference Vegetation Index; and LST, Land Surface Temperature), and national yield statistics provided by the FAO (Food and Agriculture Organization). Due to the lack of in situ data, the model results were successfully cross-verified against global drought models such as the GDO (Global Drought Observatory), FEWS NET (Famine Early Warning Systems Network) data, and national drought reports. With this submission we present an accurate and reliable drought modelling framework developed for future use in Kenya, whose performance was vastly improved by close collaboration with national incubators. The framework provides monthly drought probability maps on a national scale that can also be used in near-real-time (NRT) applications and further integrated into early warning systems (EWS). As drought impact and vulnerability can be reduced by the implementation of different management practices, it is also important to provide monitoring tools that support decision-making for sustainable management. To this end, a second product, a national-scale map of irrigation systems, was developed in close collaboration with the national incubators. This Sentinel-2 based product maps irrigated and rainfed cropland at 10 m spatial resolution for Kenya. While the product stands alone in providing political decision-makers with information on areas in need of support and intervention, it also feeds directly into the Sentinel-3 based drought modelling framework. The resulting framework not only addresses drought hazard but also incorporates farming practices and supports future drought impact mitigation strategies.
All products were developed in close collaboration with national incubators in Kenya to ensure their future usage, uptake and integration. While advanced processing chains were developed during the project, detailed documentation and guidance were provided for a seamless uptake by users. The presentation will therefore also give a brief overview of how these freely available products and processing chains can be accessed and used by local incubators, as was already demonstrated in a user workshop in Kenya during the project.
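The vegetation-index anomalies underpinning the framework are typically standardized against a same-month climatology, analogous in spirit to the SPI. A minimal illustrative sketch with synthetic values (not the project's code):

```python
from statistics import mean, stdev

# Standardized anomaly of a monthly vegetation index (e.g. NDVI):
# z-score of the current value against the same-month history.
def standardized_anomaly(history, current):
    return (current - mean(history)) / stdev(history)

# Hypothetical March NDVI values over past years, and this year's value:
march_ndvi = [0.52, 0.55, 0.50, 0.57, 0.53, 0.54]
z = standardized_anomaly(march_ndvi, 0.48)
print(round(z, 2))  # strongly negative: below-normal greenness, drought-like stress
```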
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.14)

Presentation: Advancing Earth Observation in Africa: Mid-term achievements of the WG Africa Copernicus Training of trainers program in three languages

Authors: Linda Tomasini, Ali Arslan Nadir, Agnès Bégué, Nico Bonora, Catarina Duarte, Maria Daraio Girolamo, Jean-François Faure, Philippe Gisquet, Carlos Gonzales Inca, Eric Hallot, Michal Krupinski, Cécilia Leduc, Marc Leroy, Benoît Mertens, Benjamin Palmaerts, Marietta Papakonstantinou, Cristina Ponte Lira, Carolina Sa, Dimitra Tsoutsou
Affiliations: CNES, FMI, CIRAD, ISPRA, Air Centre, ASI, IRD, Visioterra, University of Turku, ISSEP, CBK PAN, IDGEO, Space4Dev, NOA, University of Lisbon, PT Space, PRAXI Network
The WG Africa project is a collaborative initiative involving 12 national institutions from 8 European countries. Its aim is to support and enhance the utilization of Copernicus data and services in Africa through a "training of trainers" program, funded by the European Commission and implemented in three languages: French, Portuguese, and English. The primary goal is to assist African academic or private trainers in integrating Copernicus-based modules into their training programs or curricula. This initiative complements other capacity-building efforts in the sector of Earth observation from space in Africa, such as GMES & Africa. The project started in October 2022 and has completed its initial phase. During 2023, 30 future trainers originating from 18 African countries were selected, and an extensive 10-week training program on the use of Copernicus data and services was developed and delivered to them online. Thematic training sessions on topics such as agriculture, forests, disaster management, hydrology, health, and mangrove monitoring were also organized and delivered by experts from different European institutions and research entities. The project has now entered its second phase, in which each African trainer develops and implements his or her own local training sessions with the support of European partners. In 2024, 15 local training sessions addressing different themes, such as hydrology and land use/land cover management, were implemented by the African trainers in 12 countries.
In total, more than 1000 students, practitioners and decision-makers were introduced to and trained in the use of Copernicus data and services, to the benefit of scientific studies, policy making and enforcement, and private initiatives in favour of Copernicus applications within the targeted countries. With a relatively small budget, this action has already proven effective thanks to the cooperation scheme and the sharing of training content among the European partners, and also to the co-development approach with the African partners, which has enabled a leveraging and multiplying effect in reaching new Copernicus users. A digital learning platform hosting training content, lecture recordings, exercises and various training resources has also been developed to support the training activities and is a valuable asset showcasing Copernicus use cases in Africa. It has been opened to the GMES & Africa community. In addition to these training-related activities, WG Africa is also organizing webinars throughout the project to raise awareness and promote the use of Copernicus data and services to a wider audience in Africa. New local training sessions will take place in 2025, and the project will end in October. As continuity of activities and collaboration is essential to build sustainable Copernicus user uptake in Africa, it is the authors' intention to maintain the WG Africa network beyond the project and to apply to be part of the Copernicus Ambassadors network in Africa.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.14)

Presentation: Utilizing the Potential of Hyperspectral and Thermal EO Data for Drought and Crop Water Stress Monitoring in Africa – Results From the ARIES Project

Authors: Silke Migdall, Veronika Otto, Dr. Heike Bach, Jeroen Degerickx, Louis Snyders, Aolin Lia, Kaniska Mallick
Affiliations: Vista-Geo GmbH, VITO, LIST
ARIES is part of ESA’s EO AFRICA (African Framework for Research Innovation, Communities and Applications), an initiative that focuses on building African - European R&D partnerships and the facilitation of the sustainable adoption of Earth Observation (EO) and related space technology in Africa. The focus of the project was on exploring the potential of upcoming satellite missions (in particular hyperspectral and thermal) to address water management and food security issues in Africa, as climate change is already affecting many regions of the continent. Some already drought-prone regions in the Sahel zone have become even drier, but regions where water was formerly plentiful, such as Zambia, have also recently experienced drought conditions. Within ARIES, experimental EO analysis techniques from hyperspectral and thermal data have been developed and validated as the first step towards a new and deeper understanding of the crop water situation under different climatic and management conditions. The new ARIES products are (1) a thermal drought indicator derived from ECOSTRESS data, (2) a high-resolution crop water stress indicator from a combination of Sentinel-3, Sentinel-2 and ECOSTRESS data, and (3) high-resolution leaf area, canopy and leaf water content from hyperspectral data (PRISMA, EnMAP). In terms of scientific advancements in the field of thermal remote sensing, we have developed a new approach to derive information on drought using the concept of thermal inertia and have gained a better understanding of the importance of properly accounting for directionality effects. ARIES worked together with Early Adopters covering west (AGRHYMET Regional Centre and AAH Action Against Hunger in Niger) and southern (Zambian Agricultural Knowledge and Training Centre LTD in Zambia) Africa. Thereby, the developed algorithms and approaches could be validated, tested and evaluated in different geographic regions. 
The newly developed products show clear potential in supporting farmers in optimizing water productivity at individual field scale. To make sure that the algorithms and results can be further utilized, they have been integrated as processors on a cloud platform: the Food Security Explorer, which also hosts the ECOSTRESS data. After in-depth discussions with the Early Adopters and other stakeholders, as well as the compilation of a policy matrix, possible policy impacts of the EO analyses were deduced. The potential, limitations and recommendations for the Copernicus Expansion missions CHIME and LSTM were analysed. In this presentation, the most prominent results of the activities within ARIES will be shown and the insights from the project towards the planning of future missions will be shared. The project ARIES is one of the EOAfrica Explorers and is funded by ESA under ESA Contract No: 4000139191/22/I-DT.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.14)

Presentation: Harnessing Remote Sensing for Mangrove Mapping and Restoration in Support of Protected Area Management in West Africa

Authors: Celio De Sousa, Dr Lola Fatoyinbo, Abigail Barenblitt, Dr Adia Bey, Dr Neha Hunka
Affiliations: Nasa Goddard Space Flight Center, University of Maryland Baltimore County, University of Maryland
Despite their well-documented ecological, economic, and social benefits, mangroves continue to experience alarming rates of degradation and destruction, with global losses of 1-2% per year, rates that surpass those of terrestrial tropical forests. Mangroves in West and Central Africa account for approximately 11% of the world’s mangrove area and serve as a vital global carbon sink. Their protection is critical in the context of climate change, yet conservation efforts face persistent challenges. Coastal conservation projects within Marine Protected Areas (MPAs) in the region struggle with insufficient local funding, relying heavily on international funding sources, a model that is unsustainable in the long term. Consequently, identifying durable, locally driven funding solutions for these protected areas has become a pressing priority. To address these challenges, this study explores the use of remote sensing technology to support mangrove mapping, restoration, and the effective management of MPAs. Our approach integrates advanced satellite-based mapping and monitoring techniques to assess the potential for initiating blue carbon projects while improving regional cooperation for climate change mitigation and adaptation. Using a Landsat-based compositing approach (LandTrendr) combined with machine learning classifiers, we developed annual land cover maps spanning from 2000 to 2022 for approximately 275,000 km² of coastline, covering more than 235 MPAs between Mauritania and the Democratic Republic of Congo. This analysis identified eight key land cover classes, including mangrove forests. The results provide a detailed understanding of annual trends in land cover and mangrove extent, offering critical insights for prioritizing restoration areas and identifying key MPAs where carbon-financed projects could be developed. As an example, we focused on Guinea-Bissau, where mangroves are one of the main land cover classes within protected areas. 
We found that some protected areas promoted the restoration of mangroves over time. These findings underscore the potential of remote sensing to guide sustainable mangrove conservation and restoration initiatives, strengthen protected area management, and contribute to regional climate adaptation and mitigation efforts.
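The annual compositing step underlying such Landsat time-series workflows can be illustrated with a minimal sketch. This is a simplified stand-in, not the actual LandTrendr implementation, and all values are hypothetical: a per-pixel median over within-year observations suppresses clouds and other outliers before classification.

```python
import numpy as np

# Hypothetical stack of 12 within-year observations for a 2x2 pixel window,
# single band (e.g. NIR reflectance). Values are illustrative only.
rng = np.random.default_rng(42)
stack = np.full((12, 2, 2), 0.30) + rng.normal(0.0, 0.01, (12, 2, 2))

# Inject cloud-contaminated observations (anomalously bright pixels).
stack[3, 0, 0] = 0.95
stack[7, 1, 1] = 0.90

# Annual composite: the per-pixel median across the time axis is robust
# to a minority of contaminated observations.
composite = np.median(stack, axis=0)
print(composite.round(2))  # all pixels remain near the clean 0.30 reflectance
```

A composite like this, built per year, is what a machine learning classifier would then turn into the annual land cover maps described above.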
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.14)

Presentation: Harnessing remote sensing for monitoring turbidity dynamics in small reservoirs to inform agriculture and aquaculture development in sub-Saharan Africa

Authors: Stefanie Steinbach, Anna Bartels, Jun.-Prof. Dr. Valerie Graw, Jun. Prof. Dr. Andreas Rienow, Dr. Bartholomew Thiong'o Kuria, Dr. Sander Zwart, Prof. Dr. Andrew Nelson
Affiliations: Institute of Geography, Ruhr University Bochum, International Institute for Geo-Information Science and Earth Observation (ITC), University of Twente, Institute of Geomatics, GIS and Remote Sensing, Dedan Kimathi University of Technology, International Water Management Institute
The widespread construction of small dams across sub-Saharan Africa, driven by their affordability and simplicity, has led to the establishment of thousands of small reservoirs. These reservoirs are crucial for supporting smallholder agriculture, farmer-led irrigation, fishing and aquaculture, playing a significant role in rural livelihoods. Sustained water supply during periods of scarcity contributes to the resilience of rural populations in times of climate change and variability. However, water quality challenges, including turbidity fluctuations, cause small reservoir performance to vary strongly. Turbidity, a measure of water clarity, indicates soil erosion and pollution from human activities like farming. Elevated turbidity levels can severely impact aquaculture by increasing fish mortality rates, making effective turbidity monitoring crucial. However, monitoring turbidity in small reservoirs and understanding its underlying drivers remain significant challenges. Seasonal turbidity monitoring, in particular, is essential for the successful use of small reservoirs in irrigation and aquaculture. Yet, the lack of such data, the low temporal resolution of existing datasets, and the high costs of continuous measurement contribute to the high risk of failure in irrigation and aquaculture projects. Remote sensing-based turbidity monitoring, if successfully implemented, provides a cost-effective means to track turbidity dynamics across large areas and numerous reservoirs, thereby serving as a valuable tool to mitigate risks and inform sustainable investments. Building on this potential, this study investigates the applicability of remote sensing and machine learning-based approaches for estimating turbidity and its influencing factors, developed using data from Kenya, and evaluates their transferability to a study site in northern Ghana. 
Sentinel-2 time series images were processed with the Case 2 Regional Coast Colour (C2RCC) processor in ESA SNAP and calibrated against water samples collected during the rainy-to-dry season transition, in 10 reservoirs in the central highlands of Kenya in January 2023 and in 15 reservoirs in northern Ghana in October 2024. The findings highlight contrasting turbidity patterns both within and across the study sites, offering insights into localized and regional factors influencing turbidity. Elevated turbidity levels, frequently or even permanently exceeding safe thresholds during the observation period from 2017 to 2023, were observed in the northern part of the Kenyan study site and across the Ghana study site. Turbidity dynamics were linked to factors such as land management, meteorology, and topography, with varying degrees of influence. Our results highlight the complex interactions driving water quality in small reservoirs and demonstrate the capacity and applicability of scalable, site-specific turbidity monitoring using openly accessible tools and remote sensing data. Our methodology provides critical insights for evaluating the suitability of small reservoir sites for irrigation agriculture and aquaculture and their potential future development. By enhancing water resource management with continuous turbidity monitoring, this work can support the improved planning and success of small reservoir initiatives, ultimately contributing to agricultural productivity and rural livelihoods in sub-Saharan Africa.
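The calibration step can be sketched as a simple regression of in-situ turbidity against the satellite-derived estimate. The match-up values below are hypothetical illustrations, not the study's data, and the least-squares fit is a minimal stand-in for whatever calibration model was actually used.

```python
import numpy as np

# Hypothetical match-ups: C2RCC-derived turbidity proxy (FNU) vs.
# in-situ turbidity measured in water samples (FNU). Illustrative values.
satellite = np.array([5.2, 12.1, 19.8, 33.0, 47.5, 61.2])
in_situ = np.array([6.0, 13.5, 21.0, 35.8, 50.1, 65.4])

# Ordinary least-squares calibration: in_situ ~ gain * satellite + offset
gain, offset = np.polyfit(satellite, in_situ, deg=1)
calibrated = gain * satellite + offset

# Root-mean-square error of the calibrated estimates
rmse = np.sqrt(np.mean((calibrated - in_situ) ** 2))
print(f"gain={gain:.2f}, offset={offset:.2f}, RMSE={rmse:.2f} FNU")
```

Applied per study site, a fit like this makes satellite retrievals comparable to field measurements, which is what enables the within-site and cross-site turbidity comparisons described above.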
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 1.14)

Presentation: Facilitating African-European R&D Partnership in Earth Observation Through Collaborative Research: EO AFRICA R&D Research Calls

Authors: Dr. Serkan Girgin, Dr. Mahdi Farnaghi, Dr. Diana Chavarro Rincon, Dr. Zoltan Vekerdy
Affiliations: University of Twente, Hungarian University of Agriculture and Life Sciences
The EO AFRICA R&D Facility is the flagship of the EO AFRICA initiative that aims to facilitate the sustainable adoption of EO and related space technology in Africa through an African-European R&D partnership. For this purpose, the Facility supports capacity development for research by organizing tailor-made domain-specific training courses and webinars, and capacity development through research by enabling research projects co-developed and run by African and European research tandems. During its first phase in 2020-2023, the Facility launched two research project calls to support African-European collaborative efforts in developing innovative, open-source EO algorithms and applications providing African solutions to African challenges by using cutting-edge cloud-based data access and computing infrastructure. The calls aimed at addressing emerging research topics in food security and water scarcity, making full use of the digital transformation in Africa and the observation capabilities of the ESA and Third Party EO missions. More than 100 project proposals were submitted by African and European co-investigators affiliated with public and private research institutions in 29 African and 17 European countries, covering a wide range of topics such as crop monitoring, yield forecasting, climate change, flood mapping, livestock mapping, soil monitoring, lake monitoring, and biodiversity. Following an exhaustive peer-review and evaluation process considering 33 criteria grouped under 6 categories, including qualifications of the project team, scientific quality of the proposed work, innovation and impact potential, use of EO data, use of cloud-based ICT infrastructure, and budget, the Facility provided financial support and ICT infrastructure to 30 research projects. Each project developed an innovative algorithm or research workflow delivered as open-source research code, preferably as interactive notebooks, together with open-access research data. 
The results of the research projects are also published as open-access scientific publications. In this talk, first, the details of the research calls will be described, starting from the preparation of the call up to the closure of the research projects, with a special emphasis on the evaluation process. Then, an overview of the submitted proposals will be provided, including research questions, EO data and analysis methods, work and budget distribution, geographical distribution, and gender balance. The results of the funded projects will be summarized, and finally the lessons learned from the research calls will be discussed in detail, including challenges in using cloud computing infrastructure, performing collaborative research as tandems, budget utilization, and Open Science practices.
Add to Google Calendar

Tuesday 24 June 14:00 - 15:30 (Room 0.14)

Session: A.08.13 Multiple stressors on Ocean Health and Marine Biodiversity: Lessons Learned and Path Forward

The ocean plays a critical role in mitigating climate change by absorbing heat and human-induced CO2 emissions, while also providing essential ecosystem services that support human well-being. With rising anthropogenic pressures, the ocean is warming, acidifying and losing its oxygen content. In addition, pollution and overfishing further modify the biological environment. The intensity and frequency of extreme conditions have consequently been rising, posing a significant threat to marine ecosystems.
Cumulative stressors affect a wide range of ecosystem services, operating across multiple scales from cellular-level physiological responses to broader community dynamics. Interactions between stressors are complex and their cumulative effects are not always additive, leading to non-linear, synergistic, or antagonistic outcomes. Thus, predicting the combined impact of stressors on marine ecosystems remains a significant challenge.
Recent advancements in integrating scientific approaches, such as in-situ observations, Earth Observation, numerical modelling, and Artificial Intelligence, have enhanced our understanding of cumulative impacts of these stressors on marine biodiversity and ecosystem services. However, a comprehensive understanding of how marine ecosystems respond to multiple stressors is still lacking. This uncertainty hinders efforts to accurately assess marine environmental status and ocean health.

Our goal is to bring together experts to share current knowledge and address future challenges. Specifically, we aim to:
• Identify gaps in knowledge, observation, technology, and methodology that need to be addressed to improve monitoring and assessment of ocean health and marine biodiversity.
• Pinpoint primary stressors that require detection and monitoring, and explore how EO-based techniques can support their identification.
• Strengthen our understanding of mechanistic links between physical, biogeochemical, and biological processes affecting marine biodiversity, improving predictive capabilities for future ocean health scenarios.

By working in collaboration, we can enhance our ability to monitor, understand, and define mitigation strategies for the impacts of multiple stressors on the health of the ocean and its ecosystems.

Chairs:


  • Federico Falcini - Institute of Marine Sciences, National Research Council of Italy, Rome, Italy
  • Angela Landolfi - Institute of Marine Sciences, National Research Council of Italy, Rome, Italy
  • Victor Martinez Vicente, Earth Observation Science and Applications, Plymouth Marine Laboratory, Plymouth, United Kingdom

Speakers:


  • Bror F. Jönsson - Ocean Processes Analysis Laboratory, University of New Hampshire
  • Yolanda Sagarminaga - AZTI Marine Research, Basque Research and Technology Alliance
  • Branimir Radun - Oikon Ltd., Institute of Applied Ecology
  • Laura Zoffoli - Institute of Marine Sciences, National Research Council of Italy

Add to Google Calendar

Tuesday 24 June 14:15 - 14:35 (EO Arena)

Demo: D.03.32 DEMO - NASA-ESA-JAXA EO Dashboard

This demonstration will showcase the features of the NASA-ESA-JAXA EO Dashboard. It will cover the following elements:
- Dashboard exploration - discovering datasets, using the data exploration tools
- Browsing interactive stories and discovering scientific insights
- Discovering Notebooks in the stories and how to execute them
- Creating new stories using the story-editor tool
- Browsing the EO Dashboard STAC catalogue
- Exploring the documentation


The demo will be performed by the joint ESA, NASA and JAXA development team.
Add to Google Calendar

Tuesday 24 June 14:30 - 15:30 (ESA Agora)

Session: F.02.19 Austrian Space Cooperation Day - Human & Robotic Exploration, Space Transportation

The Austrian space community and international testimonials take a kaleidoscopic look at products and services “made in Austria”, highlighting existing cooperation and inviting future collaboration within international partner networks. With a view to the ESA Ministerial Conference in 2025, the great importance of ESA programmes for maintaining and improving Austria's excellence in space will be explained using technological and commercial success stories. In the FFG/AUSTROSPACE exhibition, Earth observation space hardware and software products manufactured in Austria are presented (next to the Agora area and the ESA booth in the Main Entrance Hall).

Chairs:


  • Christian Fidi - TTTech
  • Georg Grabmayr - Beyond Gravity
Add to Google Calendar

Tuesday 24 June 14:37 - 14:57 (EO Arena)

Demo: D.02.26 DEMO - Putting the A.I. in F.A.I.R.: Unlocking Reproducible Machine Learning through openEO

The integration of Machine Learning (ML) and Deep Learning (DL) in Remote Sensing has revolutionized the way a vast amount of Earth Observation (EO) data is processed and analyzed. These advanced computational techniques have not only enabled faster and more efficient data processing but have also significantly improved the accuracy and scalability of insights derived from remote sensing data. ML-driven approaches are particularly valuable for scene classification, object detection, and segmentation applications, where identifying complex spatial patterns and subtle variations is critical.
To make ML more accessible for EO practitioners, openEO integrates key algorithms such as Random Forest, a widely used classification model known for its robustness and accuracy. This method enhances EO data classification by combining predictions from multiple decision trees, reducing the need for deep ML expertise. Additionally, the growing demand for more sophisticated ML techniques has led to the adoption of foundation models pre-trained on massive datasets and fine-tuned for EO applications. These models enable more generic, scalable, and automated classification pipelines without sacrificing precision.
This demonstration will showcase real-world mapping projects that have successfully implemented ML-powered classification workflows using openEO. Attendees will gain insights into how foundation models are being integrated to push the boundaries of EO analysis, offering new possibilities for large scale and automated geospatial data processing.
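The ensemble principle mentioned above, where a Random Forest combines predictions from multiple decision trees, can be illustrated with a minimal local sketch. This is a conceptual stand-in using hand-crafted decision "stumps" on hypothetical two-band pixels, not openEO's actual Random Forest process, which trains its trees from labelled reference data.

```python
import numpy as np

def stump_predict(x, feature, threshold):
    """A decision 'stump': class 1 if the chosen feature exceeds threshold."""
    return (x[:, feature] > threshold).astype(int)

# Hypothetical two-band pixels (e.g. red and NIR reflectance); class 1 = vegetation.
pixels = np.array([[0.10, 0.45],   # vegetated
                   [0.30, 0.25],   # bare soil
                   [0.08, 0.50]])  # vegetated

# Three hand-crafted stumps standing in for trained trees (illustrative only).
votes = np.stack([
    stump_predict(pixels, feature=1, threshold=0.35),       # high NIR -> vegetation
    stump_predict(pixels, feature=1, threshold=0.40),
    1 - stump_predict(pixels, feature=0, threshold=0.20),   # low red -> vegetation
])

# Majority vote across the ensemble, as a Random Forest does for classification.
prediction = (votes.sum(axis=0) >= 2).astype(int)
print(prediction)  # -> [1 0 1]
```

Averaging many imperfect trees in this way is what reduces the variance of any single tree's decision boundary, which is why the method is robust without requiring deep ML expertise from the user.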

Speakers:


  • Victor Verhaert - VITO
Add to Google Calendar

Tuesday 24 June 15:00 - 15:20 (EO Arena)

Demo: D.03.26 DEMO - The Geo-Quest mobile application: Easy and accurate Earth Observation-enhanced ground data collection

Geo-Quest is a mobile app for the collection of high-quality reference data on the ground. Such reference data are needed to improve AI models and remote-sensing applications. The quests in the Geo-Quest app are campaigns related to different themes, where the common denominator is the collection of information on the ground. The app combines information coming from different sensors in the phone such as the camera, the GPS receiver, the gyroscope, the accelerometer, etc., to support the user’s ground data collection, including augmented reality. Examples of current quests include Crop Capture, Tree-Quest, and Forest-Quest. Crop Capture allows users to record agricultural information such as the crop type and location, including parcel delineation over satellite imagery, and can be supplemented by pictures taken in the field being surveyed. Some of the data from Crop Capture will support the ESA-funded WorldCereal project, which provides very-high-resolution global crop maps. Tree-Quest allows users to collect measurements of individual tree attributes such as the tree diameter, tree height, and tree species, which are then used to derive above-ground biomass, while Forest-Quest is used to measure the basal area of a forest plot. Other features of Geo-Quest include the possibility to store satellite imagery in areas where the internet is not available. Measurements can be made on the ground and then uploaded when the user is online; the data are then made available to the community through the quest’s web platform. Thus, all the information collected in Geo-Quest is openly available for anyone to use.

This demonstration will allow users to download the application and test the available quests. It will include a slide presentation and a Q&A session, followed by hands-on testing of the app on-site. A video showcasing the capabilities of the app will also be running in the background.

Speaker:


  • Juan Carlos - IIASA

Add to Google Calendar

Tuesday 24 June 15:22 - 15:42 (EO Arena)

Demo: D.03.35 DEMO - Introducing EarthCODE

The objective of this brief demonstration is to introduce EarthCODE.

The Open Science and Innovation Vision included in ESA’s EO Science Strategy (2024) addresses 8 key elements: 1) openness of research data; 2) open-source scientific code; 3) open access papers with data and code; 4) standards-based publication and discovery of scientific experiments; 5) scientific workflows reproducible on various infrastructures; 6) access to education on open science; 7) community practice of open science; and 8) EO business models built on open-source. EarthCODE (https://earthcode.esa.int) is a strategic ESA EO initiative to support the implementation of this vision.

EarthCODE (Earth Science Collaborative Open Development Environment) will form part of the next generation of cloud-based geospatial services, aiming towards an integrated, cloud-based, user-centric development environment for European Space Agency (ESA) Earth science activities. EarthCODE looks to maximise long-term visibility, reuse and reproducibility of the research outputs of such projects by leveraging FAIR and open science principles, thus fostering a sustainable scientific process. EarthCODE proposes a flexible and scalable architecture developed with interoperable open-source blocks, with a long-term vision of evolving by incrementally integrating industrially provided services from the portfolio of the Network of Resources.

During this 20-minute demo, we will cover how collaboration and federation are at the heart of EarthCODE. As EarthCODE evolves, we expect to provide solutions allowing federation of data and processing. EarthCODE's ambition is to deliver a model for a Collaborative Open Development Environment for Earth system science, where researchers can leverage the power of the wide range of EO platform services available to conduct their science, while also making use of FAIR open science tools to manage data, code and documentation, create end-to-end reproducible workflows on platforms, and have the opportunity to discover, use, reuse, modify and build upon the research of others in a fair and safe way.

Speakers:


  • Samardzhiev Deyan - Lampata
  • Dobrowolska Ewelina Agnieszka - Serco
  • Anne Fouilloux - Simula Labs
Add to Google Calendar

Tuesday 24 June 15:30 - 16:15 (ESA Agora)

Session: C.01.29 Crafting the European Earth Observation Ecosystem 2040+: Needs, Offers, Gaps leading to ideas for a future EO Ecosystem architecture

What should our European Earth Observation Ecosystem look like in 2040+?
Which future users’ needs and societal challenges will drive the system-of-systems?
Which components of the ecosystem will be the game-changer?
Which key characteristics are essential?
The European EO Ecosystem 2040+ (“The European Blueprint for Earth Observation”) is a cross-cutting vision for the future of EO in Europe. It will help to join the common forces of the various EO actors (of a scientific, commercial and operational nature) and highlight future needs for scientific research and development, innovative new EO mission ideas and technologies, and mission data exploitation with applications that address new Earth system science and deliver societal benefits.
This agora is to identify and discuss actions - in support of European citizens and policies - to implement and sustain, operate, and evolve the performance and capacity of Earth Observation in Europe as the most advanced living systems-of-systems in the world.
The vision of a European EO Ecosystem is thereby founded on a critical assessment for optimised, sustainable and affordable growth. This is achieved using a scenarios-based approach to consider potential evolution in the 2040+ timeframe (e.g. business-as-usual, enhanced continuity and optimised reduction), while at the same time identifying key drivers and benchmark tools for a sustainable and unique European Ecosystem 2040+.
We will identify the key characteristics of the European EO Ecosystem as an adaptable approach, including elements such as long-term data preservation, complementarity, interconnection, standards compliance, verification, performance, modularity, scalability, reusability, best practices and affordability, to name a few.

Panel discussion with:


Connecting the dots between science needs and the EO Ecosystem


  • Craig Donlon - ESA

Green solutions, actions and policies


  • Inge Jonckheere - ESA

Future science needs


  • Markus Rapp - ACEO member and speaker of the DLR Earth Observation research institutes

A commercial perspective


  • Representative from industry

Add to Google Calendar

Tuesday 24 June 15:30 - 16:15 (Nexus Agora)

Session: F.02.11 Enhancing Earth Observation Uptake in the Philippines and ASEAN Region

Southeast Asia, including the Philippines, Indonesia, and Thailand, is among the most disaster-prone areas globally, severely affected by tropical typhoons, flooding, volcanic activities, and other climate change impacts. A leading reinsurance company recently identified the Philippines as highly exposed to significant economic losses due to disasters (as % of GDP). In the face of these shared challenges, timely decision-making, environmental monitoring, and effective policy implementation are important for building resilience across the region.
Jointly with the Directorate General for International Partnerships (DG-INTPA) and the Philippine government, the European Space Agency (ESA) has set up the National Copernicus Capacity Support Action Programme for the Philippines, known as CopPhil. The national CopPhil centre, hosted by the Philippine Space Agency (PhilSA), was inaugurated in October 2024. It provides access to the complete Sentinel data of the European Copernicus Programme and is co-designing three EO services together with mandated institutions of the Philippine government:
• Ground Motion Monitoring: Utilising InSAR to monitor landslides, earthquakes, ground movement, and volcanoes, enhancing disaster preparedness and mitigation strategies.
• Land Monitoring, Forests, and Crop Mapping: Monitoring forest extent, types, health, and deforestation, as well as mapping high-value crops and land use changes to support sustainable land management and agricultural productivity.
• Benthic Habitat Monitoring: Mapping coastal ecosystems and detecting coral bleaching events to protect marine biodiversity and support fisheries management.

Building on CopPhil's success and recognising shared regional challenges, the EU-ASEAN Sustainable Connectivity Package (SCOPE) Digital initiative aims to adapt, transfer, and scale these solutions. SCOPE Digital focuses on Indonesia and Thailand as pilot countries, partnering with the National Research and Innovation Agency (BRIN) and the Geo-Informatics and Space Technology Development Agency (GISTDA) respectively. This regional expansion leverages the CopPhil experiences and tools to enhance EO data processing and digital connectivity, promoting sustainable solutions to environmental and economic challenges across ASEAN.

Moderator:


  • Casper Fibæk - ESA, Earth Observation Application Specialist

Speakers:


  • Gay Jane Perez - Deputy Director General - Philippines Space Agency (PhilSA)
  • Kandasri Limpakom - Deputy Executive Director, GISTDA
  • Rokhis Khomarudin - Head of the Geoinformatics Research Center, BRIN
  • Thibault Valentin - Programme Responsible, DG-INTPA
  • Eric Quincieu - Principal Water Resources Specialist, ADB
  • Ariel Blanco - Director for Space Information, Philippine Space Agency, and Professor, University of the Philippines Diliman
Add to Google Calendar

Tuesday 24 June 15:30 - 16:15 (Frontiers Agora)

Session: F.04.26 Towards Operational Greenhouse Gas Monitoring for Policy

The Committee on Earth Observation Satellites, CEOS, and the Coordination Group on Meteorological Satellites, CGMS, have demonstrated that high-quality, systematic satellite observations of atmospheric carbon dioxide (CO2) and methane (CH4) are essential for building a truly integrated global greenhouse gas (GHG) monitoring system. These observations are fundamental for ensuring data accuracy, tracking collective climate progress, and supporting the Enhanced Transparency Framework under the Paris Agreement.

Their commitment to sustaining long-term monitoring of greenhouse gases is clearly reflected in the recently updated Greenhouse Gas (GHG) Roadmap. This updated Roadmap aims to further support the Paris Agreement’s Global Stocktakes by integrating key lessons learned from the first Global Stocktake and leveraging recent advancements in satellite infrastructure and data processing capabilities.

The Roadmap emphasizes enhanced engagement and co-development with stakeholders and stronger partnership with key organizations like the World Meteorological Organization’s Global Greenhouse Gas Watch (WMO G3W) and the United Nations Environment Programme’s International Methane Emissions Observatory (UNEP IMEO). It also provides an overview of the space-based greenhouse gas observing architecture, capable of delivering GHG emissions information at global, regional, and facility scales through both public and non-governmental missions.

Additionally, it outlines the efforts needed to transition the current framework from research to operations in support of sustained and operational GHG Monitoring and Verification Support systems that serve stakeholders across science, inventory, policy, and regulatory communities.

In this Agora session, we will engage with international and European stakeholders and discuss how to move towards operational greenhouse gas monitoring that provides policy-relevant and actionable information.

Speakers:


  • Yasjka Meijer - ESA
  • Gianpaolo Balsamo - WMO-G3W
  • Itziar Irakulis Loitxate - UNEP-IMEO
  • Mark Dowell - JRC
  • Tomohiro Oda - USRA

Add to Google Calendar

Tuesday 24 June 15:30 - 16:45 (Plenary)

Session: Outlook for ESA's Earth Observation programmes - CM25

This session will focus on the outlook for ESA’s Earth Observation Programmes as regards what is planned to be proposed to Member States for funding at the next Ministerial Council in November. The general context of the Ministerial Council, including the overall package of programmes for the Agency, will be described by the ESA Director General. More detailed information on the EO programmes and initiatives which will be open for Member State subscription will be given by the Director of ESA’s Earth Observation Programmes. These programmes will seek to ensure support for the development of future scientific, institutional, and commercial missions as well as support for the exploitation of the satellite data collected by past and current missions. The session will end with some views expressed on ESA’s plans by representatives of the scientific and commercial communities. This session complements others held during the Symposium which focus on individual ESA programmes, missions and initiatives as well as the longer term strategy of ESA, particularly as regards Earth Science.

Speakers:


  • Josef Aschbacher - Director General, ESA
  • Simonetta Cheli - Director of Earth Observation Programmes, ESA
  • Andrew Shepherd - Head of the Department of Geography and Environment at Northumbria
  • Charles Galland - Policy Manager, ASD-Eurospace
Add to Google Calendar

Tuesday 24 June 15:45 - 16:05 (EO Arena)

Demo: C.06.17 DEMO - Pi-MEP: A Comprehensive Platform for Satellite Sea Surface Salinity Validation and Analysis

The Pilot-Mission Exploitation Platform (Pi-MEP) for salinity (https://www.salinity-pimep.org/) provides a powerful web-based environment for validating and analyzing satellite-derived sea surface salinity (SSS) data. Originally developed in 2017 to support ESA's Soil Moisture and Ocean Salinity (SMOS) mission, Pi-MEP has evolved into a comprehensive reference platform serving multiple satellite missions including SMOS, Aquarius, and SMAP.

Pi-MEP addresses three core functions essential for oceanographic applications:
1- Centralizing diverse datasets required for satellite SSS validation
2- Generating systematic comparison metrics to monitor SSS product quality
3- Providing intuitive visualization tools for exploring both SSS data and validation results

The platform integrates extensive in situ measurements from Argo floats, drifters, thermosalinographs, and saildrones, alongside complementary datasets for precipitation, sea surface temperature, and ocean currents. Users can access pre-generated validation reports covering 30 predefined oceanic regions through the platform's intuitive web interface.

Through an ESA-NASA partnership established in 2019, Pi-MEP has undergone significant enhancements, including implementation of triple-collocation analysis, advanced match-up criteria, and integration of data from field campaigns like SPURS, EUREC4A, and SASSIE.

Our demonstration will showcase Pi-MEP's latest capabilities and user interface, highlighting new tools for characterizing representation errors across satellite salinity products. Attendees will see how oceanographers can efficiently access, validate, and analyze SSS data for applications ranging from river plume monitoring to mesoscale boundary current dynamics and salinity evolution in challenging regions.
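The triple-collocation analysis mentioned above estimates the random-error variance of each of three collocated, independent estimates of the same quantity from their pairwise covariances. A minimal numpy sketch of the classic covariance formulation follows; the variable names and synthetic data are illustrative assumptions, not Pi-MEP's implementation:

```python
import numpy as np

def triple_collocation_errors(x, y, z):
    """Random-error variances of three collocated estimates (e.g. SSS).

    Classic covariance-based triple collocation: it assumes the three
    products observe the same signal with mutually independent errors.
    """
    c = np.cov(np.vstack([x, y, z]))
    var_x = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
    var_y = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
    var_z = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
    return var_x, var_y, var_z

# Synthetic check: one truth signal, three products with known noise levels.
rng = np.random.default_rng(42)
truth = rng.normal(35.0, 1.0, 200_000)            # hypothetical "true" SSS (psu)
sat   = truth + rng.normal(0, 0.30, truth.size)   # error variance 0.09
argo  = truth + rng.normal(0, 0.10, truth.size)   # error variance 0.01
model = truth + rng.normal(0, 0.20, truth.size)   # error variance 0.04
print([round(v, 3) for v in triple_collocation_errors(sat, argo, model)])
```

On the synthetic check, the recovered variances approach the prescribed noise variances (0.09, 0.01, 0.04) as the sample size grows.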

Speaker:


  • Sebastien Guimbard - OceanScope
Add to Google Calendar

Tuesday 24 June 16:07 - 16:27 (EO Arena)

Demo: D.04.28 DEMO - Exploring Copernicus Sentinel Data in the New EOPF-Zarr Format

Overview:
This demonstration will showcase the Earth Observation Processing Framework (EOPF) Sample Service and the newly adopted cloud-native EOPF-Zarr format for Copernicus Sentinel data. As ESA transitions from the SAFE format to the more scalable and interoperable Zarr format, this session will highlight how users can efficiently access, analyze, and process Sentinel data using modern cloud-based tools.

Objective:
Attendees will gain insight into:
- The key features of the Zarr format and its advantages for cloud-based workflows.
- How the transition to EOPF-Zarr enhances scalability and interoperability.
- Accessing and exploring Sentinel data via the STAC API and S3 API.
- Using Jupyter Notebooks for interactive data exploration and analysis.
- Running scalable Earth observation workflows on cloud platforms.
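Much of Zarr's cloud advantage comes from its layout: each array is split into independently compressed chunks addressed by grid index, so a reader fetches only the chunks intersecting the requested window instead of a whole product. A small illustrative sketch of that chunk-selection logic; the shapes and chunk sizes here are assumptions for the example, not the actual EOPF chunking:

```python
def chunks_for_window(shape, chunk_shape, window):
    """Chunk indices a reader must fetch to cover a 2-D pixel window.

    shape: full array shape (rows, cols); chunk_shape: chunk size;
    window: (row_start, row_stop, col_start, col_stop), half-open.
    Mirrors how a Zarr reader decides which chunk objects to request.
    """
    r0, r1, c0, c1 = window
    cr, cc = chunk_shape
    last_r = min((r1 - 1) // cr, (shape[0] - 1) // cr)
    last_c = min((c1 - 1) // cc, (shape[1] - 1) // cc)
    return [(i, j)
            for i in range(r0 // cr, last_r + 1)
            for j in range(c0 // cc, last_c + 1)]

# A 10980x10980 band stored in 1024x1024 chunks: a 100x200 pixel window
# touches only 4 of the ~121 chunks, so only 4 objects are downloaded.
print(chunks_for_window((10980, 10980), (1024, 1024), (2000, 2100, 5000, 5200)))
# -> [(1, 4), (1, 5), (2, 4), (2, 5)]
```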

Interactive Discussion & Feedback:
Following the demonstration, there will be a dedicated time for discussion and feedback. Attendees can share their experiences, ask questions, and provide valuable input on the usability and future development of the EOPF-Zarr format. This is a great opportunity to learn about next steps in the transition process, future developments, and how to integrate EOPF-Zarr into your own workflows.

Join us to explore how EOPF-Zarr is changing access to Copernicus Sentinel data and enabling scalable Earth observation workflows, and contribute your thoughts on shaping the next phase of this transformative technology!
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall E1)

Session: C.03.07 The Copernicus Sentinel Expansion missions development: status and challenges - PART 2

The status of development of ESA missions will be outlined.
Across four 90-minute sessions (together equivalent to a full day), participants will have a unique opportunity to gain valuable insights into the technology developments and validation approaches used during the project phases of ongoing ESA programmes.
The projects are in different phases (from early Phase A/B1 to launch and operations), and together with industrial and science partners, the status of activities related to the mission developments will be presented.

Presentations and speakers:


CRISTAL general status presentation


  • Kristof Gantois

CRISTAL instrument and mission E2E performance


  • Frank Borde
  • Paolo Cipollini

ROSE-L Mission and Project status


  • Gianluigi Di Cosimo
  • Malcolm Davidson

ROSE-L SAR Instrument


  • Nico Gebert

CIMR Mission and Project status


  • Craig Donlon

CIMR Spacecraft & Instrument


  • Mariel Triggianese
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall K2)

Session: B.04.05 Remote sensing for disaster preparedness and response to geo-hazards, hydro-meteorological hazards and man-made disasters - PART 2

Every year, millions of people worldwide are impacted by disasters. Floods, heat waves, droughts, wildfires, tropical cyclones and tornadoes cause increasingly severe damages. Civil wars and armed conflicts in various parts of the world, moreover, lead to a growing number of refugees and large changes in population dynamics. Rescue forces and aid organizations depend on up-to-date, area-wide and accurate information about hazard extent, exposed assets and damages in order to respond fast and effectively. In recent years, it has also been possible to prepare for specific events or to monitor vulnerable regions of the world on an ongoing basis thanks to the rapidly growing number of satellites launched and their freely available data. Providing information before, during or after a disaster in a rapid, scalable and reliable way, however, remains a major challenge for the remote sensing community.
Obtaining an area-wide mapping of disaster situations is time-consuming and requires a large number of experienced interpreters, as it often relies on manual interpretation. Nowadays, the amount of remote sensing data and suitable sensors is steadily increasing, making it impossible in practice to assess all available data visually. Therefore, increased automation of (potential) impact assessment methods using multi-modal data opens up new possibilities for effective and fast disaster response and preparedness workflows. In this session, we want to provide a platform for research groups to present their latest research activities aimed at addressing the problem of automatic, rapid, large-scale, and accurate information retrieval from remotely sensed data to support disaster preparedness and response to geo-hazards, hydro-meteorological hazards and man-made disasters/conflicts.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall K2)

Presentation: Assessing in situ national drought monitoring services in Central Europe against satellite-based drought indicators and a new drought impact database

Authors: Nirajan Luintel, Piet Emanuel Bueechi, Pavan Muguda Sanjeevamurthy, Wolfgang Preimesberger, Wouter Dorigo
Affiliations: TU Wien
Droughts have severe impacts on the environment and economy, particularly in regions with high water demand and low annual precipitation. Central Europe is one such region, where droughts have reportedly led to losses in crop yield and biodiversity, disruptions in water transport, and shortages of drinking water, among other impacts. To mitigate these impacts, national weather and environmental agencies in the region have developed national drought monitoring tools. Most drought monitoring products (such as the Standardized Precipitation Index and the Standardized Precipitation Evapotranspiration Index) are based on weather observation stations. However, these stations are not homogeneously distributed. Alternatively, satellite remote sensing allows droughts to be monitored contiguously over large areas. Satellite-borne sensors provide data on precipitation, vegetation condition, evapotranspiration, and soil moisture, all of which are useful for drought monitoring. Among these, soil moisture-based drought indicators are particularly valuable, as soil moisture can be estimated with all-weather satellite sensors and is a good indicator of plant water availability. However, the performance of these satellite-based drought indicators should be evaluated before integrating them into existing drought monitoring systems. This study provides a quantitative assessment of national drought monitoring products and satellite-based standardized soil moisture indices derived from the new disaggregated ASCAT surface soil moisture product and the ESA-CCI v09.1 gap-filled soil moisture product, by comparing them with a novel impact database developed for the region within the Clim4Cast project [1]. The database synthesizes impacts of drought, heatwaves and forest fires on various sectors (agriculture, hydrology, household water supply, economy and technology, wildlife, and soil, among others) reported in national newspapers published between 2000 and 2023. 
We assess the drought indicators on two fronts: their ability to capture the severity of a drought and their ability to detect a drought. First, for each reported drought event, we correlate the drought severity with the number of reported impacts in the database. Drought severity is defined as the drought indicator values during the drought event (when the values remain below a given threshold, set at -1 in this study) accumulated over time. The correlation value shows how well the drought indicator captures the severity of a drought. Second, the timing of drought impact reporting in the impact database is used to evaluate each indicator's ability to detect observed impacts. This evaluation is performed using the area under the receiver operating characteristic curve (ROC-AUC), where the ROC curve plots the true positive rate against the false positive rate at various classification thresholds of the drought definition. The AUC value reveals how well the reported drought events are detected by the drought indicator. Our results show differences among drought indicators in their ability to detect drought signals (AUC values) and their ability to capture the severity of observed impacts (correlation values). Some drought indicators are better at detecting the occurrence of a drought, while others are better at capturing its severity. In some regions, the drought indicators from national monitoring systems outperform those from satellite products, while in other regions the reverse is true. Furthermore, regardless of the drought indicator chosen, the geographical characteristics of a region, such as complex terrain, pose challenges to effective drought monitoring. [1] This work is supported by Interreg Central Europe and the European Union in the framework of the project Clim4Cast (grant number CE0100059).
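The detection evaluation described above can be sketched compactly: ROC-AUC equals the probability that a randomly chosen impact month is ranked drier than a randomly chosen non-impact month (the Mann-Whitney formulation). An illustrative numpy sketch with made-up SPI values, not the authors' code:

```python
import numpy as np

def roc_auc(indicator, impact):
    """ROC-AUC of a drought indicator via the Mann-Whitney U statistic.

    indicator: drought index values (more negative = drier), so the
    score is -indicator: the drier the month, the higher it ranks.
    impact: 1 if an impact was reported that month, else 0.
    """
    score = -np.asarray(indicator, dtype=float)
    impact = np.asarray(impact)
    pos, neg = score[impact == 1], score[impact == 0]
    # fraction of (impact-month, normal-month) pairs ranked correctly
    wins = ((pos[:, None] > neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return wins / (len(pos) * len(neg))

# Hypothetical monthly SPI and reported-impact flags:
spi    = [-1.8, -0.2, 0.5, -1.4, 0.9, -2.1, 0.1, -0.7]
impact = [1, 1, 0, 1, 0, 0, 0, 0]
print(round(roc_auc(spi, impact), 3))  # 11 of 15 pairs ranked correctly -> 0.733
```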
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall K2)

Presentation: Landslide Hunter: a fully automated EO platform for rapid mapping of landslides in semi-cloudy conditions

Authors: Dr. Serkan Girgin, Dr. Ali Özbakır, Dr. Hakan Tanyaş
Affiliations: University of Twente
Landslides are a common natural hazard, mostly triggered by seismic, climatic, or anthropogenic factors. The impacts of landslides on nature, the built environment, and society call for effective hazard management to improve our preparedness and resilience. Accurate landslide risk analysis methods are necessary to identify the elements at risk, and effective early warning systems are needed to prevent loss of life and economic damage. Landslide catalogs provide valuable information on past events that can be exploited for better hazard assessment and early warning. However, creating a landslide catalog is a time-consuming process, especially after major disasters. Several semi-automated landslide mapping methods using cloud-free optical satellite images have been developed recently that benefit from advancements in image processing and AI technologies. However, such methods are mostly tested only in specific study areas, and it is uncertain whether they can meet analysis needs globally. Compiling cloud-free images by combining many semi-cloudy images also requires significant time. Additionally, most landslides occur in mountainous regions, which are typically characterized by heavy rainfall patterns. As a result, finding cloud-free images that cover these areas in their entirety is quite difficult. The Landslide Hunter is a prototype online platform designed to rapidly detect landslides using an innovative method, which analyzes consecutive partially cloudy optical Earth observation (EO) images to identify visible landslide extents and then automatically integrates these partial extents to determine the complete extent of the landslides. The platform continuously monitors online resources for events capable of triggering landslides (e.g., major earthquakes), pinpoints regions where landslides are likely to have occurred following such events, and initiates the collection of EO data for these identified areas from public EO data portals. 
Whenever a new image becomes available, it is downloaded and processed automatically to detect landslide areas. Proximity to cloudy regions is used to determine if a landslide is partially visible or not, and partial extents are marked for further tracking. By combining information from successive analyses, the full extents of landslides are determined. This allows timely first detection of landslides and their effective monitoring under cloudy conditions. The platform allows the integration of various models for landslide detection, ranging from simple index-based approaches (e.g., NDVI) to advanced machine learning and deep learning techniques utilizing image segmentation. The results are published in an open-access landslide catalog, available through a user-friendly web portal for individuals and a REST API for machine access. This catalog is continuously updated and offers faster updates compared to any existing conventional catalog. The platform enables stakeholders, such as researchers, public authorities, and international organizations, to receive notifications when new landslides are detected in their areas of interest. In addition to supporting and expediting rapid damage assessment efforts, the data provided can contribute to landslide prediction initiatives, ultimately enhancing the safety of communities and the built environment. This presentation will offer an in-depth exploration of the design principles and operational framework of the Landslide Hunter platform. It will cover the platform's core features, functional capabilities, and user interface, along with a comprehensive overview of the data access methods designed to enhance interoperability and seamless integration with other systems. Furthermore, a live demonstration of the operational platform will highlight its practical applications and effectiveness. 
The demonstration will showcase how the platform enables the automatic identification and tracking of landslides without relying on cloud-free optical satellite imagery and how it facilitates near real-time monitoring of landslide evolution, contributing to the global mapping and cataloging of such events.
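As an illustration of the simplest index-based detection the platform can integrate: a landslide freshly stripped of vegetation shows a sharp NDVI drop between pre-event and post-event images, while cloudy pixels must be deferred to a later acquisition. A hedged numpy sketch of that partial-extent logic follows; the band values and the 0.3 drop threshold are invented for the example:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, guarded against /0."""
    return (nir - red) / (nir + red + 1e-9)

def landslide_candidates(pre, post, cloud_mask, drop=0.3):
    """Flag pixels whose NDVI dropped sharply between acquisitions.

    pre/post: dicts with 'nir' and 'red' reflectance arrays.
    cloud_mask: True where the post image is cloudy, so no decision
    can be made yet. Returns (detected, undecided) boolean masks, so
    partial extents from successive semi-cloudy images can be merged.
    """
    delta = ndvi(pre['nir'], pre['red']) - ndvi(post['nir'], post['red'])
    detected = (delta > drop) & ~cloud_mask
    return detected, cloud_mask.copy()

# Three pixels: landslide scar, unchanged vegetation, cloud-covered.
pre   = {'nir': np.array([0.5, 0.5, 0.5]), 'red': np.array([0.1, 0.1, 0.1])}
post  = {'nir': np.array([0.2, 0.5, 0.5]), 'red': np.array([0.3, 0.1, 0.1])}
cloud = np.array([False, False, True])
detected, undecided = landslide_candidates(pre, post, cloud)
```

A later cloud-free acquisition would resolve the `undecided` pixels and be merged with `detected` to build the full extent.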
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall K2)

Presentation: Spatio-temporal Extrapolation of Time-series Data with Deep Learning

Authors: Lea Schollerer, Prof. Dr. Christian Geiß, Patrick Aravena Pelizari, Dr. Yue Zhu, Prof. Dr. Hannes Taubenböck
Affiliations: German Aerospace Center, Earth Observation Center, Singapore-ETH Centre
The number of disasters induced by natural hazards has increased over recent decades. Such events can cause huge losses, especially in human settlements with high population densities. This situation can be expected to intensify as the world's population grows in numerous hazard-prone regions across the globe and climate change increases the number of both single- and multi-hazard situations. As a result, more people are likely to be exposed to natural hazards in the future than ever before. To develop mitigation strategies for possible future damage events, detailed information on the future spatial distribution of the population and the properties of the built environment, i.e., future exposure, is required. Here, Earth observation (EO) datasets and new Artificial Intelligence (AI) techniques offer innovative possibilities: current EO datasets, in particular long time-series data with high temporal and thematic resolution, combined with new AI techniques such as Long Short-Term Memory cells, make it possible to extrapolate exposure information spatiotemporally. We leverage EO time-series data that describe changes in global population and land use since around 2000 at high spatial, temporal, and thematic resolution. The different datasets are preprocessed to the same temporal (~years 2000 – 2020) and spatial extent. In combination with static features, the time series then serve as the basis for a novel AI model that identifies characteristic change trajectories in the target variables over time and extrapolates the target variables spatiotemporally into the future (Geiß et al., 2024). By combining multiple target variables, the developed model can exploit multi-task learning, which improves prediction by encoding the interdependencies between the target variables. 
As a case study, we focus on the megacity of Istanbul, a highly dynamic urban center susceptible to earthquakes and landslides. Looking ahead, the resulting exposure dataset can be used for early and sustainable urban planning, risk assessment, and risk reduction efforts, as well as for evaluating the systemic risk and vulnerability of human settlements. For instance, this can be done by linking it to models of natural hazards to show how many people will be affected in the future. Reference: Geiß, C., Maier, J., So, E., Schoepfer, E., Harig, S., Gómez Zapata, J.C., Zhu, Y., 2024. Anticipating a risky future: long short-term memory (LSTM) models for spatiotemporal extrapolation of population data in areas prone to earthquakes and tsunamis in Lima, Peru. Natural Hazards and Earth System Sciences 24, 1051–1064.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall K2)

Presentation: Urban Flood Analysis through SAR Data and Super Resolution DEM Integration

Authors: Ira Karrel San Jose, Sesa Wiguna, Sandro Groth, Marc Wieland, Bruno Adriano, Erick Mas, Shunichi Koshimura
Affiliations: Department of Civil and Environmental Engineering, Tohoku University, International Research Institute of Disaster Science (IRIDeS), Tohoku University, German Remote Sensing Data Center, German Aerospace Center (DLR)
As a direct consequence of extreme weather events such as heavy precipitation and tropical cyclones, flooding is considered one of the most devastating disasters, affecting numerous countries globally. According to the EM-DAT International Disaster Database created by the Centre for Research on the Epidemiology of Disasters, flood events have dominated in terms of frequency of disaster occurrence since as early as the 1970s. In recent years, the growing frequency and severity of urban flood cases have affected millions of people worldwide, causing substantial environmental and economic damage. Therefore, reliable and timely estimation of flood extent after a heavy rainfall event is crucial for disaster management, early warning systems, and post-disaster recovery. Urban environments, characterized by dense infrastructure, pose significant challenges for accurate flood mapping using optical satellite imagery, particularly in areas obscured by clouds or canopies. Moreover, high-resolution digital elevation models (DEMs), essential for precise inundation extent and depth approximation in urban flood mapping, are typically unavailable due to high acquisition costs. To address such limitations, this research proposes an integrated framework that capitalizes on globally available remote sensing datasets and deep learning techniques to improve urban flood mapping. The first phase of the research involves the construction of a convolutional neural network (CNN) that integrates a low-resolution DEM, optical imagery, and synthetic aperture radar (SAR) data to generate a higher-resolution DEM. Given the ability of SAR signals to penetrate clouds and dense canopies, the addition of SAR data such as coherence and intensity is expected to augment the feature extraction process, alongside the spectral signatures and terrain information derived from optical images and the low-resolution DEM, respectively. 
This approach aims to reconstruct high-resolution terrain details from a low-resolution DEM, creating an enhanced DEM suitable for urban flood mapping and broader hydrological and geological applications. Using the enhanced DEM, the second phase implements a flood segmentation network to detect visible flooded areas captured by optical imagery. For regions obscured by urban infrastructure and dense vegetation, SAR information is integrated into the network to fully delineate the flood extent in the affected sites. The framework is applied to Joso Town, Ibaraki Prefecture, Japan, a region severely devastated by Typhoon Etau on September 10, 2015. The heavy rains brought by the typhoon resulted in flood depths of up to five meters, displacing 22,000 residents and inundating approximately 1,000 buildings. Validation results indicate that the proposed model provided a better approximation of the flood extent using the enhanced DEM compared to results relying solely on the low-resolution DEM. The proposed methodology offers a scalable and cost-effective solution for improving urban flood risk management through multi-source remote sensing data and deep learning. The approach enhances flood mapping accuracy in data-scarce regions, addressing gaps in the implementation of remote sensing-based flood modelling in urban areas. Future research will test the model's transferability in different geographic contexts, particularly in areas lacking access to high-resolution DEMs, to ensure its robustness and global applicability.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall K2)

Presentation: High resolution flood maps through commercial UAV imagery and deep learning

Authors: Lisa Landuyt, Bart Beusen, Tanja Van Achteren
Affiliations: VITO
Throughout the past years, an increase in both the frequency and intensity of flood events has become clear. Targeted crisis response is thus critical in order to limit human and material losses. Satellite imagery can provide crisis managers with a bird's-eye view, and the added value of the Sentinel-1 constellation for flood mapping is beyond doubt. However, in the context of crisis response, the fixed acquisition scheme of Sentinel-1 can be a bottleneck for providing timely insights and even capturing the flood peak. Moreover, its resolution is rather coarse for scattered landscapes and urban regions. Lately, several companies have emerged that provide high-resolution X-band SAR imagery with flexible tasking, two properties that compensate for these Sentinel-1 drawbacks. However, this imagery typically comes in a single polarization, and the flexible tasking also implies that a reference image for change detection is not generally available. While many algorithms have been developed for flood delineation on Sentinel-1 imagery, studies considering high-resolution SAR imagery are limited. SAR-based flood mapping approaches are traditionally thresholding-based, complemented by refinements based on auxiliary data using e.g. region growing, decision trees and fuzzy logic. Recently, several studies have demonstrated the superior performance of deep learning architectures. Moreover, self-supervised learning techniques and foundation models, which aim to overcome the scarcity of labeled data, are emerging. This study focuses on the use of high-resolution SAR imagery for flood mapping. A set of 50+ images, provided by Capella Space, is used to train and assess several deep learning-based workflows. Labels are obtained using a combination of semi-automated labeling and existing maps (e.g. from the Copernicus Emergency Management Service). 
We compare deep learning architectures, including U-Net with different backbones and the Swin Transformer, and assess the added value of auxiliary inputs such as incidence angle, reference optical imagery, land cover and building footprints. In addition, we investigate the added value of pre-training with a masked auto-encoder objective on both accuracy and transferability. This work was conducted in the context of the FLOWS project, funded by the Belgian Science Policy Office (BELSPO). The authors would like to thank Capella Space for supporting this research.
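For context, the traditional thresholding baseline mentioned above fits in a few lines: open water is smooth and returns little energy to the sensor, so flooded pixels form a low-backscatter mode that a histogram threshold such as Otsu's can separate. A minimal numpy sketch on synthetic dB values, not the deep-learning workflows the study evaluates:

```python
import numpy as np

def otsu_threshold(db, bins=256):
    """Otsu's threshold on a SAR backscatter histogram (values in dB).

    Picks the cut that maximizes the between-class variance of the
    two resulting classes (low-dB water vs higher-dB land).
    """
    hist, edges = np.histogram(db, bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                 # cumulative weight of the low-dB class
    w1 = 1.0 - w0                     # weight of the high-dB class
    cum_mean = np.cumsum(p * centers)
    mu0 = cum_mean / np.where(w0 > 0, w0, 1)
    mu1 = (cum_mean[-1] - cum_mean) / np.where(w1 > 0, w1, 1)
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between)]

# Synthetic bimodal scene (assumed values): water ~ -18 dB, land ~ -7 dB.
rng = np.random.default_rng(0)
scene = np.concatenate([rng.normal(-18, 1.0, 5000), rng.normal(-7, 1.5, 5000)])
t = otsu_threshold(scene)
flood_mask = scene < t   # low-backscatter pixels flagged as open water
```

The refinements the abstract mentions (region growing, decision trees, fuzzy logic) would then operate on `flood_mask` with auxiliary data.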
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall K2)

Presentation: Enhancing Situational Awareness in Emergency Response: Combining Remote Sensing and Teleoperated Systems

Authors: Magdalena Halbgewachs, Lucas Angermann, Dr. Konstanze Lechner
Affiliations: German Aerospace Center (DLR), German Remote Sensing Data Center
The application of remote sensing and geospatial technologies is becoming increasingly crucial in addressing multifaceted challenges across environmental monitoring, humanitarian aid, and disaster response. With growing demands for rapid, data-driven decision-making in remote and crisis-prone areas, advanced Earth observation capabilities enable more efficient planning, coordination, and resource deployment. The combination of satellite imagery, high-resolution aerial data, and advanced analytics within situational awareness platforms provides novel opportunities for comprehending and responding to dynamic, often unpredictable environments. The development of an advanced web application for a Global Mission Operation Centre (GMOC) is central to both the MaiSHU and RESITEK projects, demonstrating the progressive enhancement of situational awareness tools. Initially, in the MaiSHU project, the web application was used to support teleoperators in navigating amphibious SHERP vehicles in complex and unstructured environments where traditional humanitarian efforts face operational limitations. In June 2024, during a field campaign in Northern Bavaria, Germany, two realistic scenarios highlighted this application: a food delivery mission to a flood-isolated village in South Sudan, supported by the United Nations World Food Programme (WFP), and a flood evacuation exercise in a dangerous environment with the Bavarian Red Cross (BRK), simulating recent flood events in southern Germany. The interactive web application at the GMOC combines multi-layered geospatial data to provide continuous situational awareness, enable high level route planning, and support real-time operations. It integrates and visualizes multi-layered geospatial and remote sensing data, thereby forming a comprehensive situational picture that is essential for mission preparation and execution. 
This includes simulated flood masks for the exercise region, which facilitated detailed visualizations of prospective flood zones and aided in the identification and planning of accessible routes for the teleoperated vehicle. Time-series of optical satellite imagery provided valuable insights into the region's evolving landscape, enabling the monitoring of environmental changes that could impact mission safety and route stability. The additional incorporation of high-resolution aerial imagery, taken by DLR aircraft, facilitated a more comprehensive understanding of the terrain and infrastructure. The specific characteristics of the local terrain allowed for the implementation of more precise route adjustments. Further, the integration of up-to-date drone imagery, captured before and during the event itself, proved to be a crucial element in the provision of situational updates. The addition of these datasets in the GMOC web application facilitated the creation of three-dimensional terrain models, thereby enhancing the visual and spatial understanding of the environment necessary for detailed route planning. The described data layers enabled mission planners to optimize SHERP vehicle routes based on variables such as terrain slope, surface type, and radio signal coverage. A designated route segment was designed to traverse a river, validating the SHERP’s amphibious capabilities and testing the continuity of operations under varying surface conditions. The pre-planned routes were transmitted to a Local Mission Operation Centre (LMOC), where remote drivers received precise navigation information, guiding the SHERP vehicle along the safest and most efficient paths based on the analysed remote sensing and geospatial data. The web application’s capabilities included real-time GPS monitoring of the SHERP, providing continuous updates on its location, orientation, and previously travelled paths. 
This tracking data allowed operators to refine navigation strategies and make informed, real-time adjustments. Supplementary real-time geotagged photo uploads from the field augmented the situational overview, while an integrated communication layer ensured continuous connectivity by highlighting areas with signal coverage. Building on the advancements of MaiSHU, the RESITEK project further develops the web application by integrating additional functionalities and additional vehicle types, including ground and aerial units, both manned and unmanned. The platform in RESITEK is undergoing enhancements to serve as an integrative tool for diverse data visualization and situational analysis, thereby supporting continuous monitoring and user-centric planning. This progression is intended to demonstrate comprehensive interoperability in a collaborative exercise involving various stakeholders and realistic emergency scenarios. The project incorporates AI-based image analysis for real-time monitoring and damage detection, highlighting crisis-relevant information in complex 2D and 3D displays to optimize decision-making during disaster response. The joint developments in MaiSHU and RESITEK underline the essential role of geospatial and remote sensing data in crisis management and emergency response. Together, these projects demonstrate the transformative role of Earth observation technologies—including satellite-based crisis information, real-time data updates, and multi-modal situational displays—in enhancing teleoperated and multi-vehicle mission planning, ultimately facilitating more effective and reliable operations in remote and inaccessible areas.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 1.31/1.32)

Session: A.09.04 Glaciers - the other pole - PART 2

Glaciers are distributed around the world in mountainous areas from the Tropics to the mid-latitudes and up to the polar regions, and number approximately 250,000. Glaciers are currently among the largest contributors to sea-level rise and have direct impacts on run-off and water availability for a large proportion of the global population.

This session is aimed at reporting on the latest research using EO and in situ observations for understanding and quantifying change in glacier presence, dynamics and behaviour, including responses to changes in climate, both long term (since the Little Ice Age) and in the recent satellite period. EO observations of glaciers come from a large variety of sources (SAR, altimetry, gravimetry, optical) and are used to derive estimates of ice velocity, surface mass balance, area, extent and dynamics of both accumulation and ablation, characteristics such as surging, glacier failure, and downwasting, as well as associated observations of snowpack development and duration, lake formation, glacial lake outburst floods (GLOFs) and slope stability.

Presentations will be sought covering all aspects of glacier observations, in particular efforts to derive consistent global databases (e.g. GlaMBIE; ice velocity and area in the Randolph Glacier Inventory), as well as variation in run-off and water availability, and interfaces between these observations and glacier modelling to forecast possible future glacier changes and their impact on hydrology and sea-level rise.


Tuesday 24 June 16:15 - 17:45 (Room 1.31/1.32)

Presentation: A new inventory of the glaciers of Pakistan in 2022 from Sentinel-2

Authors: PhD Davide Fugazza, Anees Ahmad, Blanka Barbagallo, Maria Teresa Melis, Luca Naitza, Marco Casu, Maurizio Gallo, Riaz Ul Hassan, Mohammed Aurang Zaib, Sadia Munir, Arif Hussain, Guglielmina Diolaiuti
Affiliations: Università Degli Studi Di Milano, University of Cagliari, EvK2CNR, EvK2CNR Pakistan
Pakistan is one of the countries suffering the greatest impacts from climate change, but it also holds one of the largest glacier reservoirs outside the polar regions, across the mountain ranges of the Hindukush, Karakoram and Himalayas. Yet the only glacier inventory previously covering the entire country – the GAMDAM inventory – is centred around 2000 and covers a large time span (1993-2009), making area comparisons over time difficult. As part of an international cooperation project, “glaciers and students”, implemented by EvK2CNR and realized by the University of Milan together with the University of Cagliari, Karakoram International University and the University of Baltistan Skardu, we compiled a new inventory covering all the glaciers in Pakistan, using a Sentinel-2 mosaic from late summer 2022 as a basis and combining a segmentation approach based on Sentinel-2 optical data and indices with Sentinel-1 interferometric coherence to improve the mapping of debris-covered glaciers. In the inventory, we catalogued more than 13,000 glaciers, covering an area larger than 13,000 km². Almost all of these glaciers drain into the Indus, while a small fraction (3%) drains to the Tarim basin of Central Asia. A large number of glaciers (44%) are smaller than 0.1 km², which puts them at increasing risk from rising temperatures, while only 32 glaciers are larger than 50 km². The inventory further reveals large differences in glacier distribution across basins and elevations, mainly driven by topography and the different climatic influences of the area, namely the South Asian Monsoon and the Westerlies, and the complex interplay between these two factors. Preliminary comparison with the GAMDAM inventory, albeit hampered by the large time span of the older inventory, shows relatively stable glacier areas in the Karakoram, with locally large variations mostly caused by surging glaciers. In contrast, in the Himalayan region glacier losses prevail.
As part of the project, automatic weather stations were also installed in the Hunza Valley, around Passu, Ghulkin, Shishpar and Pissan Glaciers, and on and around Baltoro Glacier on the route to K2. While the new inventory provides a baseline for future comparisons, the combination with meteorological data will help assess ice volume and meltwater, thus improving the management of water resources in the country and in high mountain Asia.

Tuesday 24 June 16:15 - 17:45 (Room 1.31/1.32)

Presentation: DL4GAM: a multi-modal Deep Learning-based framework for Glacier Area Monitoring, trained and validated on the European Alps

Authors: Codrut-Andrei Diaconu, Harry Zekollari, Jonathan L. Bamber
Affiliations: Earth Observation Center, German Aerospace Center (DLR), School of Engineering and Design, Technical University of Munich, Department of Water and Climate, Vrije Universiteit Brussel, Laboratory of Hydraulics, Hydrology and Glaciology (VAW), ETH Zürich, Bristol Glaciology Centre, University of Bristol
The ongoing retreat of glaciers in the European Alps, about 1.3% per year from 2003 to 2015 according to recent inventories, underscores the urgent need for accurate and efficient monitoring techniques. Traditional methods, often relying on manual correction of semi-automated outputs from satellite imagery like Sentinel-2, are time-consuming and susceptible to human biases. In recent years, significant progress has been made in developing fully automated glacier mapping techniques using Deep Learning. In this work we propose DL4GAM: a multi-modal Deep Learning-based framework for Glacier Area Monitoring, available open-source. It includes uncertainty quantification through ensemble learning and a procedure to automatically identify the imagery with the best mapping conditions independently for each glacier. We then use DL4GAM to investigate the evolution of the glaciers in the Alps from 2015 to 2023. When evaluating the model on unseen data, we find good agreement between the estimated areas and the inventory ones. We also apply DL4GAM to a small set of glaciers from the Swiss glacier inventory (SGI2016) and show that our results align well with their round-robin experiment, demonstrating high accuracy in the estimated areas and reliable uncertainty estimates. We then analyse the limitations of traditional approaches and highlight the benefit of using elevation change maps as complementary inputs to further improve the mapping of debris-covered areas. Next, we apply the models to data from 2023 and, based on these predictions, estimate the area change at both the individual glacier and regional level. However, fully automated methods still face numerous challenges, such as cast shadows, clouds, and difficulties in distinguishing debris-covered segments from surrounding rocks or seasonal snow from glacier ice. We therefore implemented an outlier filtering scheme in order to remove the glaciers for which the models perform poorly.
Finally, we provide annual area change rates over 2015-2023 for ca. 1000 glaciers, covering around 84% of the region. Based on these, we estimate a regional area change rate of -1.97 ± 0.67% per year, with significant inter-glacier variability, which illustrates the high sensitivity of the glaciers in this region to climate change.
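The ensemble-spread uncertainty and the per-glacier annual change rates described in this abstract can be sketched numerically. This is a minimal illustration only, not the DL4GAM implementation: the mask shapes, the pixel size, the 1-sigma spread convention and the function names are all assumptions.

```python
import numpy as np

def ensemble_area(member_masks, pixel_area_km2):
    """Glacier area from an ensemble of binary masks (n_members, H, W):
    returns the mean area and an ensemble-spread (1-sigma) uncertainty."""
    areas = member_masks.reshape(len(member_masks), -1).sum(axis=1) * pixel_area_km2
    return areas.mean(), areas.std(ddof=1)

def annual_change_rate(area_start, area_end, n_years):
    """Mean annual area change rate in % per year, relative to the start area."""
    return (area_end - area_start) / area_start / n_years * 100.0

# toy ensemble: 3 members vote on a 4x4 scene at 100 m pixels (0.01 km² each)
masks_2015 = np.ones((3, 4, 4), dtype=bool)
masks_2023 = masks_2015.copy()
masks_2023[:, :, 3] = False  # the glacier lost its easternmost column

a15, _ = ensemble_area(masks_2015, 0.01)
a23, _ = ensemble_area(masks_2023, 0.01)
rate = annual_change_rate(a15, a23, 8)  # 2015 -> 2023, % per year
```

A regional rate with uncertainty would then aggregate such per-glacier estimates, e.g. area-weighted, which is one plausible route to a figure like -1.97 ± 0.67% per year.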

Tuesday 24 June 16:15 - 17:45 (Room 1.31/1.32)

Presentation: Glacier surge activity over Svalbard in the period 1991-2015 interpreted using heritage satellite radar missions and comparison to the period 2015-present (Sentinel era)

Authors: Tazio Strozzi, Oliver Cartus, Dr Maurizio Santoro, Thomas Schellenberger, Erik Schytt Mannerfelt, Andreas Kääb
Affiliations: Gamma Remote Sensing, Department of Geosciences, Oslo University
Glacier surging refers to strongly enhanced ice flow speeds over time-periods of months to years. Knowing where and when glaciers show surge-type flow instabilities is important for a number of scientific and applied reasons. The mechanisms of glacier surging and the conditions leading to it are still incompletely understood, and questions arise as to whether and how climate change could impact surge initiation, frequency and magnitude, and therefore the response of glaciers to climate change. Glacier surges are identified and mapped using a number of (often combined) indicators such as looped moraines, specific landforms in the glacier forefield, exceptional and major glacier advance, exceptional crevassing, sheared-off glacier tributaries or particular patterns of elevation and surface velocity change. Leclercq and others (http://doi.org/10.5194/tc-15-4901-2021) introduced a method to detect surge-type glacier flow instabilities through the change in backscatter that they cause in repeat satellite SAR images. The method was developed based on Sentinel-1 C-band backscatter data between consecutive years. First, aggregated images of maximum backscatter values for each pixel location over the 3-month winter period (January to March for the northern hemisphere), when glaciers typically show little other backscatter change due to cold and dry conditions, were created. Then, the normalized difference between the two aggregated maximum images was calculated to search for changes in backscatter and to eventually identify surge activity. In order to minimize variations in topographic effects, the analysis is preferably performed with images taken from the same nominal orbit. Due to Sentinel-1's systematic acquisition strategy, a consistently large number of observations are available over Svalbard every winter for the same nominal orbits.
Using this approach, Kääb and others (http://doi.org/10.1017/jog.2023.35) mapped 25 surge-type events over Svalbard in the period 2017-22, a number of surge events that appears to be higher than in previously published inventories or studies (https://doi.org/10.1016/j.geomorph.2016.03.025). The question therefore arises as to whether the increasing number of detected surge events is related to changing environmental or climatic conditions over Svalbard or simply to improved observation capacity in the Sentinel era since 2015. In order to answer this research question and extend the record back in time before the Sentinel-1 based inventory, we considered heritage satellite radar missions in the period 1991-2015. In particular, at GAMMA we have already processed to 150 m resolution all the ENVISAT ASAR Image Mode (IM), Wide Swath Mode (WSM) and Global Monitoring Mode (GMM) data available on ESA's G-POD as part of ESA CCI Land Cover (http://doi.org/10.1016/j.rse.2015.10.031). We produced ENVISAT ASAR winter backscatter average and change images over Svalbard between 2004 and 2010. Subsequently, with the support of JAXA, we also processed at GAMMA the global JERS-1 data archive to provide winter backscatter average and change images between 1993 and 1998. Recently, we requested from ESA’s Heritage Space Programme access to 17709 ERS-1/2 products and a 10 TB data volume over Svalbard in the period 1991-2011. A fully automated processing chain was implemented to process the ERS SLC data to a radiometrically terrain-corrected level. Because few repeated winter observations from the same orbital track in consecutive years are available for the ERS-1/2 mission, and in order to obtain wall-to-wall coverage over all of Svalbard, we needed to consider differences over time scales longer than one year. Finally, we also processed a series of Radarsat-2 scenes acquired over Svalbard in Wide mode and Wide Fine mode between 2012 and 2016.
Using the ERS-1/2 SAR, JERS-1 SAR and ENVISAT ASAR data we mapped 20 surge-type events over Svalbard in the period 1991-2011. Using the Radarsat-2 SAR data we mapped a further 5 surge-type events over the period 2012-2015. Eliminating duplicates with the inventory published by Kääb and others (http://doi.org/10.1017/jog.2023.35), updating start dates and completing the Sentinel-1 analysis with new images from 2023 onwards, we recorded over Svalbard 25 surge-type events in the pre-Sentinel-1 period 1991-2015 (25 years) and 28 surge-type events in the Sentinel-1 period 2016-2024 (9 years), which corresponds to about a threefold increase in the rate of surge events in the latter period. In our contribution, we will briefly recall the available satellite data and processing steps, and present and discuss the surge catalogue. Acknowledgements: This research has been supported by ESA through Glaciers CCI (grant no. 4000109873/14/I-NB). We thank ESA’s Heritage Space Programme for provision of the ERS-1/2 data archive.
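The backscatter-change indicator from Leclercq and others, as summarised in this abstract, can be sketched in a few lines of numpy. This is an illustrative sketch only: the array shapes, the use of linear power units, the candidate threshold of 0.3 and the function names are assumptions, not the published implementation.

```python
import numpy as np

def winter_max_composite(stack):
    """Aggregate a stack of winter backscatter scenes (time, H, W) into a
    per-pixel maximum composite, ignoring gaps (NaNs)."""
    return np.nanmax(stack, axis=0)

def normalized_backscatter_change(winter_a, winter_b, eps=1e-9):
    """Normalized difference between two consecutive winter max composites
    (linear power units assumed); large magnitudes flag candidate surges."""
    return (winter_b - winter_a) / (winter_b + winter_a + eps)

# toy example: 3 winter scenes per year over a 2x2 area
year1 = np.array([[[0.10, 0.10], [0.10, 0.10]],
                  [[0.12, 0.11], [0.09, 0.10]],
                  [[0.11, 0.10], [0.10, 0.12]]])
year2 = year1.copy()
year2[:, 0, 0] = 0.40  # strongly increased backscatter, e.g. fresh crevassing

nd = normalized_backscatter_change(winter_max_composite(year1),
                                   winter_max_composite(year2))
surge_candidates = np.abs(nd) > 0.3  # only the upper-left pixel is flagged
```

Using the same nominal orbit for both years, as the abstract notes, keeps the topographic contribution to backscatter nearly identical, so the normalized difference isolates genuine surface change.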

Tuesday 24 June 16:15 - 17:45 (Room 1.31/1.32)

Presentation: Glacier Snowline Mapping from Sentinel-2 images by Machine Learning

Authors: Prashant Pandit, Dr Thomas Schellenberger, Dr Mattia Callegari, Lorenzo Bruzzone
Affiliations: University of Trento, Institute for Earth Observation, Eurac Research, Department of Geosciences, University of Oslo
Snow accumulation and ablation play a pivotal role in the mass balance of mountain glaciers. Accurate mapping of snow extent on glaciers enhances our understanding of climate change impacts (Larocca et al.) and serves as valuable input for glacier surface mass balance models (Rabatel et al.). For example, the snowline altitude at the end of the summer season serves as a proxy for the Equilibrium Line Altitude and is strongly correlated with the annual mass balance (Rabatel et al.). The largest study to date mapped 3,489 snowlines on 269 of the world's ~275,000 glaciers from Landsat data between 1984 and 2022 and found an increase in snowline elevation of approximately 150 meters. This significant rise indicates a reduction in glacier accumulation zones, suggesting negative mass balance trends driven by rising temperatures and shifting precipitation patterns. Regional variability in snowline changes underscores the complex interplay of climatic and local factors, emphasizing the importance of snowline monitoring for glacier health and climate change studies (Larocca et al.). However, manual delineation is time consuming, and band- and index-threshold-based approaches often fail in areas of steep and complex mountainous terrain and in homogeneous snow conditions. The spectral similarity to firn and ice and varying illumination conditions in steep terrain pose significant challenges for accurate large-scale and long-term snow monitoring. Advanced machine learning techniques offer promising solutions to address these limitations by automating snowline detection and improving accuracy under diverse conditions (Prieur et al.). Designed to overcome these limitations by improving accuracy and scalability, this study presents first steps towards region-scale snow cover extent mapping using machine learning.
We manually digitized 312 snowlines on 41 glaciers in Scandinavia (13), Svalbard (9), and the European Alps (19) from Sentinel-2 data in the period 2015 to 2023, encompassing a wide range of seasonal snow conditions. Using this benchmark, we trained several machine learning models, including pixel-based classifiers such as Support Vector Machine, Random Forest, and XGBoost (Chen et al.), and U-Net (Ronneberger et al.), a fully convolutional neural network, and compared them against threshold-based approaches as baselines. The results from Scandinavia demonstrate the superiority of machine learning methods. While threshold-based approaches, such as the Normalized Difference Snow Index (NDSI > 0.4) and Near-Infrared (NIR > 0.11), achieved an Intersection over Union (IoU) score of 0.7147, U-Net significantly outperformed them with an IoU of 0.9456. Random Forest was the next best-performing method with an IoU of 0.8957, followed by XGBoost (0.8899) and SVM (0.8887). Adding elevation models and slope data to the classifiers resulted in only marginal performance improvements. This significant improvement highlights the potential of U-Net to accurately capture fine-scale snowline features, especially in heterogeneous and complex mountainous environments with spectrally similar classes such as firn and ice, and paves the way for accurate, low-cost, automated and large-scale snow mapping on glaciers.
Keywords: Cryosphere, Snow, Snowline, Sentinel-2, Machine Learning
References:
Larocca, L. J., Lea, J. M., Erb, M. P., McKay, N. P., Phillips, M., Lamantia, K. A., & Kaufman, D. S. (2024). Arctic glacier snowline altitudes rise 150 m over the last 4 decades. The Cryosphere, 18(8), 3591-3611.
Rabatel, A., Dedieu, J.-P., & Vincent, C. (2005). Using remote-sensing data to determine equilibrium-line altitude and mass-balance time series: validation on three French glaciers, 1994–2002. Journal of Glaciology, 51, 539–546.
Rabatel, A., Bermejo, A., Loarte, E., Soruco, A., Gomez, J., Leonardini, G., ... & Sicart, J. E. (2012). Can the snowline be used as an indicator of the equilibrium line and mass balance for glaciers in the outer tropics? Journal of Glaciology, 58(212), 1027-1036.
Prieur, C., Rabatel, A., Thomas, J. B., Farup, I., & Chanussot, J. (2022). Machine learning approaches to automatically detect glacier snow lines on multi-spectral satellite images. Remote Sensing, 14(16), 3868.
Chen, T., & Guestrin, C. (2016). XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 785-794).
Ronneberger, O., Fischer, P., & Brox, T. (2015). U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015 (pp. 234-241). Springer International Publishing.
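The NDSI/NIR threshold baseline and the IoU metric used in this abstract can be sketched as follows; the Sentinel-2 band choices (green B3, NIR B8, SWIR B11), the toy reflectance values and the function names are illustrative assumptions.

```python
import numpy as np

def ndsi(green, swir, eps=1e-9):
    """Normalized Difference Snow Index from green (S2 B3) and SWIR (S2 B11) reflectance."""
    return (green - swir) / (green + swir + eps)

def snow_mask(green, swir, nir, ndsi_thr=0.4, nir_thr=0.11):
    """Baseline threshold classifier from the abstract: snow where NDSI > 0.4 and NIR > 0.11."""
    return (ndsi(green, swir) > ndsi_thr) & (nir > nir_thr)

def iou(pred, truth):
    """Intersection over Union between two boolean masks."""
    union = np.logical_or(pred, truth).sum()
    return np.logical_and(pred, truth).sum() / union if union else 1.0

# toy pixels: one bright snow pixel, one dark rock pixel
green = np.array([0.60, 0.20])
swir = np.array([0.10, 0.20])
nir = np.array([0.50, 0.05])
pred = snow_mask(green, swir, nir)          # -> [True, False]
score = iou(pred, np.array([True, False]))  # -> 1.0
```

The trained classifiers (SVM, Random Forest, XGBoost, U-Net) replace `snow_mask` with learned decision functions; `iou` is the common evaluation metric across all of them.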

Tuesday 24 June 16:15 - 17:45 (Room 1.31/1.32)

Presentation: Estimation of SAR Signal Penetration Depth over Snow/Ice Land Cover Areas using Volume Decorrelation computed from Geocoded TanDEM-X Products

Authors: Nerea Ibarrola Subiza, Lukas Krieger, Marie Lachaise, Dana Floricioiu, Thomas Fritz
Affiliations: German Aerospace Center (DLR)
Digital elevation models (DEMs) derived from SAR interferometry (InSAR) can be significantly affected by volume decorrelation in different land cover areas. Volume decorrelation occurs when radar signals interact with multiple scattering surfaces at different heights and orientations. This effect causes the radar signals to scatter in multiple directions, leading to a loss of coherence. The scattering phase center is consequently located beneath the surface, resulting in biased elevation estimates for the actual land surface. In bistatic radar systems like TanDEM-X, volume decorrelation can be derived from the interferometric total coherence by considering the various decorrelation sources affecting the overall coherence [1]. The availability of this interferometric parameter as a layer in the TanDEM-X products is highly valuable for a range of applications, such as the estimation of penetration depth into snow- or ice-covered surfaces or into forest canopy. This study presents results obtained by directly using geocoded products within the TanDEM-X DEM Change Map processing chain [2] to compute volume decorrelation for each acquisition. Applying the equations from [3], volume decorrelation, together with the height of ambiguity, is used to derive the penetration depth on ice and snow. This additional information helps to correct the bias between the actual terrain surface and the measured phase center, leading to more realistic elevation estimates. A recent study on Aletsch glacier [4] observed the elevation bias due to signal penetration in an X-band derived DEM by comparing it to a coincident DEM acquisition from Pléiades optical imagery. There, the elevation bias – averaged per elevation bin – can reach up to 4–8 m in the accumulation area, with a mean elevation difference over the glacier of -5.59 m, which can in turn reduce to about -4.29 m if additional local fine co-registration corrections are applied in this complex topography area.
We use these results to validate the circumstances under which a signal penetration correction layer can be used to generate bistatic X-band DEMs that reflect the actual ice/snow surface.
Keywords: TanDEM-X DEM, Bistatic Interferometric Coherence, Volume Decorrelation, Penetration Depth, Glacier Mass Change.
[1] Rizzoli, Paola, Luca Dell’Amore, Jose-Luis Bueso-Bello, Nicola Gollin, Daniel Carcereri, and Michele Martone. “On the Derivation of Volume Decorrelation From TanDEM-X Bistatic Coherence.” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 15 (2022): 3504–18. https://doi.org/10.1109/JSTARS.2022.3170076.
[2] Schweisshelm, Barbara, and Marie Lachaise. “Calibration of the Tandem-X Craw DEMs for the Tandem-X DEM Change Maps Generation.” In IGARSS 2022 - 2022 IEEE International Geoscience and Remote Sensing Symposium, 291–94. Kuala Lumpur, Malaysia: IEEE, 2022. https://doi.org/10.1109/IGARSS46834.2022.9883204.
[3] Dall, Jorgen. “InSAR Elevation Bias Caused by Penetration Into Uniform Volumes.” IEEE Transactions on Geoscience and Remote Sensing 45, no. 7 (July 2007): 2319–24. https://doi.org/10.1109/TGRS.2007.896613.
[4] Bannwart, Jacqueline, Livia Piermattei, Inés Dussaillant, Lukas Krieger, Dana Floricioiu, Etienne Berthier, Claudia Roeoesli, Horst Machguth, and Michael Zemp. “Elevation Bias Due to Penetration of Spaceborne Radar Signal on Grosser Aletschgletscher, Switzerland.” Journal of Glaciology, April 30, 2024, 1–15. https://doi.org/10.1017/jog.2024.37.
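The inversion from volume coherence and height of ambiguity to penetration depth can be sketched under the semi-infinite uniform-volume model of [3], in which the volume coherence magnitude is assumed to follow |γ_vol| = 1/sqrt(1 + (2π·d/h_amb)²) for a penetration depth d and height of ambiguity h_amb. Both this closed form and the function name are stated here as assumptions for illustration, not as the exact equations of the processing chain.

```python
import math

def penetration_depth(gamma_vol_mag, h_amb):
    """Invert |gamma_vol| = 1/sqrt(1 + (kz*d)**2) for the penetration
    depth d, with vertical wavenumber kz = 2*pi/h_amb (uniform-volume
    model assumed)."""
    kz = 2.0 * math.pi / h_amb
    return math.sqrt(1.0 / gamma_vol_mag**2 - 1.0) / kz

# full coherence -> no penetration bias; lower coherence -> deeper phase centre
d0 = penetration_depth(1.0, 50.0)                           # -> 0.0 m
d1 = penetration_depth(1.0 / math.sqrt(2.0), 2 * math.pi)   # kz*d = 1 -> d = 1.0 m
```

The derived depth is then used as a correction layer, shifting the measured phase-center elevation back toward the actual ice/snow surface.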

Tuesday 24 June 16:15 - 17:45 (Room 1.34)

Session: C.06.03 Validation of GNSS-RO and GNSS-R observations from small sats

GNSS-Radio Occultation (RO) for atmospheric sounding has become the subject of the first Pilot Project integrating institutional (e.g. from MetOp) and commercial RO measurements into operational Numerical Weather Prediction (NWP), led by NOAA and EUMETSAT. The path to this achievement was preceded by a number of studies on Calibration, Data Quality and Validation through impact assessments, including complementary observations from other sensors. Innovation continues in GNSS-RO, for example with Polarimetric RO, and more studies are on-going and can be presented in this session.

A number of commercial GNSS-Reflectometry (GNSS-R) missions have been launched in the last 10 years, mostly driven by wind-speed applications, and more are planned for 2025, such as the ESA Scout HydroGNSS with significant innovations and with primary objectives related to land applications. As for GNSS-RO, a number of Data Quality and Validation studies are on-going or being planned, and if successful, GNSS-R could also make it into operational systems.

This session is intended for the presentation of studies of this kind, related to the assessment of GNSS measurements, typically from miniaturised GNSS EO receivers in commercial initiatives.

Tuesday 24 June 16:15 - 17:45 (Room 1.34)

Presentation: Recent Validation Activities for GNSS-R and -RO Products from Spire

Authors: Matthieu Talpe, Philip Jales, Vu Nguyen, Jessica Cartwright, Giorgio Savastano, Sanita Vetra-Carvalho, Claudio Navacchi, Ben Yeoh
Affiliations: Spire Global
Spire Global is an Earth Observation company operating multi-purpose nanosats in a variety of orbits. One of the core activities of Spire’s constellation is passive remote sensing of L-band signals using its in-house designed and manufactured multi-GNSS STRATOS receiver. After several successful data pilots spanning 2016 to 2019, Spire Global has been providing RO measurements to weather agencies NOAA and EUMETSAT for operational data buys since 2020 and 2021, respectively. Over the last few years, Spire has further collaborated with agencies on the development and validation of ionospheric, near-nadir GNSS-R (ocean winds, soil moisture), and Polarimetric RO products. The aim for these products is to contribute to global assimilation models, whether for space weather models (such as GloTEC) or numerical weather prediction (NWP) models. In 2022, NOAA Space Weather Prediction Center conducted a weather data pilot on absolute slant Total Electron Content (TEC) and electron density profiles and S4 scintillation indices from Spire and PlanetiQ. It was shown that TEC products exhibit good accuracy and greatly improve spatial coverage. The study also generated recommendations to improve the detection of scintillation events. In 2023, Spire operated new polarimetric (dual-polarized H and V) antennae onboard three nanosats and collected novel precipitation-sensitive RO datasets. The next (and currently ongoing) step is to refine forward operators and enable the assimilation of these PRO datasets into NWP models. This work is carried out at ECMWF. The NOAA Ocean Surface Winds Data Pilot conducted between 2023 and 2024 evaluated the suitability of operational Near Nadir GNSS-R for ocean winds and Mean Sea Slope (MSS) products. At least 500 tracks of GNSS-R Level 1 calibrated radar cross section products were delivered daily between 25 January and 24 July 2024 with strict latency (<3 hr) requirements. 
Over 17 institutional and governmental groups participated in this pilot to use the Level 1B and Level 2 GNSS-R data over ocean and land surfaces. A recent NASA Commercial SmallSat Data Acquisition (CSDA) program evaluation demonstrated agreement between the Level 2 Soil Moisture estimates from Spire GNSS-R reflections and SMAP, and of the L2 Ocean product with respect to ECMWF, ERA5 and CYGNSS. Lastly, the ESA EarthNet Data Assessment Pilot (EDAP) has provided assessments of GNSS-R datasets and is in the process of evaluating PRO and grazing-angle reflection products. The Spire constellation continues to be replenished, with six new RO and R satellites launched on SpaceX Transporter 11 in August 2024, including the first two combined R+RO platforms, and several more in upcoming rideshare launches. Spire encourages and supports the research community to further explore the Spire GNSS datasets. Free access is provided by the NASA CSDA program for US-funded researchers. The ESA Third Party Missions programme also provides access to a select set of datasets for researchers worldwide.

Tuesday 24 June 16:15 - 17:45 (Room 1.34)

Presentation: Meta-Mission of GNSS-R Satellites: Investigating the Potential of 40+ LEO Satellites with Reflectometry Payloads

Authors: Estel Cardellach, Tianlu Bai, Yan Cheng, Philip Jales, Cheng Jing, Weiqiang Li, Wenqiang Lu, Manuel Martín-Neira, Dallas Masters, Chris Ruf, Martin Unwin
Affiliations: Institute of Space Sciences (ICE-CSIC, IEEC), Tianmu Aerospace (Chongqing) Satellite Technology Co., Ltd., Yunyao Aerospace, Spire Global, Inc., CAST-Xi'an, National Satellite Meteorological Centre (NSMC, CMA), European Space Agency (ESA), Muon Space, Inc., University of Michigan, Surrey Satellite Technology Ltd.
A collaborative effort is made between scientists and commercial and non-commercial GNSS reflectometry (GNSS-R) data providers to show the potential of the full set of current GNSS-R systems. This ‘mission of missions’ consists of over 40 satellites in low Earth orbit (LEO), under different architectures, platform sizes and payload designs, the data of which are shared through a neutral, multi-lateral arrangement. The experiment, called METACONRef, aims at analysing them as a single large meta-constellation of reflectometers to answer the following main questions:
- What are the benefits of a 40+ GNSS-R satellite meta-constellation? What is the resulting spatio-temporal coverage? What scientific cases could be resolved with such a density of observations that cannot be properly monitored with the current Earth observation system?
- What are the challenges of a GNSS-R meta-constellation? How limiting are factors such as differences in format, differences in instrumental parameters, inter-calibration of the power measurements, or inhomogeneous quality control and uncertainty characterization? Are further homogenization actions required towards common use of such a diverse system of systems?
- What is the comparative cost-benefit of such a system? Does the improvement in performance justify the cost of incremental satellites?
The analysis will be made on actual GNSS-R data collected from spaceborne platforms that belong to missions developed by both national space agencies and commercial companies: NASA CYGNSS (currently seven satellites), Spire Global near-nadir satellites (four satellites), Muon Space (two satellites), FengYun-3 (three satellites), BuFeng-1A/B (two satellites), Tianmu (23 satellites), Yunyao (two satellites), UK DoT-1 (one satellite), and other GNSS-R payloads in LEO providing open access data (e.g., TRITON) or to be joined later in the experiment (e.g., ESA HydroGNSS and EOS-08, TBC).
Comparative studies between the different missions will be avoided, focusing the effort on the combined performance over particular case studies to explore the spatio-temporal resolution and sensitivity of the Level-1 (observables) products. Different inter-calibration strategies are investigated, and the re-calibrated observables are exploited over small-scale, quickly-evolving events to test the limits of the current resolution. This can cover different applications such as flooding, storms and wildfires. Other challenging applications can also benefit from large oversampling to enhance their performance, such as ocean altimetry (e.g., under conditions that limit the performance or operability of active dedicated sensors). Focusing on Level-1 observables reflects a two-fold strategy: on the one hand, we foresee difficulties in homogenizing the different retrieval algorithms behind the Level-2 products independently produced by each of the missions. On the other hand, other GNSS remote sensing techniques (e.g., radio occultation) have proven it feasible and even desirable to assimilate Level-1 products into operational modelling systems, rather than assimilating Level-2 retrievals. The analysis of the impact on operational services (e.g., numerical weather prediction, NWP) is not considered in this first experiment due to (1) the relatively low maturity of GNSS-R Level-1 assimilation into operational services and (2) the long time series required to properly test the impact (at least six months: three during northern hemisphere summer/southern hemisphere winter and three more during northern hemisphere winter/southern hemisphere summer). We believe this is a timely experiment, the output of which has the potential to assist space and science funding agencies in tracing the roadmap towards an optimal use of this opportunistic and cost-effective technique.
This comes at a moment when new missions are about to be launched (e.g., the ESA twin satellites HydroGNSS, Brazil's Amazonia-1B) or are being considered for development (e.g., the Spanish component of the Atlantic Constellation), while commercial operators have started delivering data through pilot contracts with major operational and scientific agencies. Recommendations will be issued based on the outcome of the experiment.

Tuesday 24 June 16:15 - 17:45 (Room 1.34)

Presentation: GNSS-R land data assimilation at ECMWF

Authors: Patricia de Rosnay, Estel Cardellach, David Fairbairn, Sébastien Garrigues, Eleni Kalogeraki, Nazzareno Pierdicca, Peter Weston
Affiliations: ECMWF, ICE-CSIC, IEEC, INGV
This paper presents activities starting at ECMWF (European Centre for Medium-Range Weather Forecasts) to investigate the impact of GNSS-R data assimilation for Numerical Weather Prediction (NWP) and future climate reanalyses. The ECMWF NWP system relies on a coupled land-atmosphere-ocean model. In terms of data assimilation, a dedicated land data assimilation system (LDAS) is used to analyse soil moisture and temperature variables using a simplified Extended Kalman Filter (SEKF) approach. Here, we are developing the ECMWF LDAS capability to assimilate Level-1 GNSS-R observations. A machine-learning-based observation operator is being developed to simulate Level-1 GNSS-R data, using model features that include soil moisture and vegetation leaf area index (LAI). The training dataset uses an ECMWF land surface reanalysis conducted in preparation for ERA6-Land and the CYGNSS GNSS-R data. The machine learning approach is based on a gradient-boosted (XGBoost) decision tree and is presented here. A detailed information content analysis is conducted to identify the most important features that contribute to the signal. Plans to ingest GNSS-R data into the ECMWF SEKF are presented. The work includes a detailed analysis of the Jacobians, development of the data quality control in the data assimilation system, and an update of the SEKF data assimilation to include GNSS-R data in the observation vector. The work is being conducted in the context of the preparation of HydroGNSS, using existing data (e.g. CYGNSS here, extended to Spire in the future). GNSS-R land data will be assimilated in the ECMWF system to evaluate the impact both on land surface variables and on atmospheric NWP. The plan is also to combine GNSS-R and L-band passive microwave data from SMOS (Soil Moisture and Ocean Salinity) observations and to assess the impact of each observation type, either individually or combined. The potential benefit for both NWP and future climate reanalyses will be discussed.
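The abstract's operator uses XGBoost; as a self-contained illustration of the underlying technique, here is a from-scratch gradient-boosting sketch using regression stumps on squared loss. The soil-moisture/LAI features, the synthetic reflectivity target and all function names are assumptions for illustration, not the ECMWF code.

```python
import numpy as np

def fit_stump(X, residual):
    """Best single-feature threshold split (a depth-1 regression tree)."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.quantile(X[:, j], np.linspace(0.1, 0.9, 9)):
            left = X[:, j] <= thr
            lmean, rmean = residual[left].mean(), residual[~left].mean()
            err = ((residual - np.where(left, lmean, rmean)) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, j, thr, lmean, rmean)
    return best

def boost(X, y, n_rounds=100, lr=0.1):
    """Gradient boosting for squared loss: each round fits a stump to the
    current residuals and takes a damped (learning-rate) step."""
    pred = np.full(len(y), y.mean())
    for _ in range(n_rounds):
        _, j, thr, lmean, rmean = fit_stump(X, y - pred)
        pred = pred + lr * np.where(X[:, j] <= thr, lmean, rmean)
    return pred

# synthetic 'observation operator' data: reflectivity rises with soil
# moisture and falls with vegetation (LAI); the relationship is made up
rng = np.random.default_rng(0)
sm = rng.uniform(0.05, 0.45, 200)   # soil moisture [m3/m3]
lai = rng.uniform(0.0, 5.0, 200)    # leaf area index
refl = 20.0 * sm - 1.5 * lai + rng.normal(0.0, 0.2, 200)
X = np.column_stack([sm, lai])

fit = boost(X, refl)
mse_model = np.mean((refl - fit) ** 2)
mse_mean = np.var(refl)  # error of the constant (mean) predictor
```

In the assimilation context described above, the trained booster maps model state (soil moisture, LAI) to simulated Level-1 observables, and its feature-wise sensitivities play the role of the Jacobians the SEKF needs.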
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 1.34)

Presentation: The Radio Occultation Modeling Experiment (ROMEX)

Authors: Christian Marquardt, Hui Shao, Dr. Benjamin Ruston, Richard Anthes
Affiliations: EUMETSAT, UCAR
The international radio occultation (RO) community is conducting a collaborative effort to explore the impact of a large number of RO observations on numerical weather prediction (NWP). This effort, named the Radio Occultation Modeling Experiment (ROMEX), was endorsed in 2022 by the International Radio Occultation Working Group (IROWG), a scientific working group under the auspices of the Coordination Group for Meteorological Satellites (CGMS), in close coordination with the user community, including WMO, IOC-UNESCO, and other user entities. ROMEX seeks to answer some of the more pressing technical and programmatic questions facing the community and to help inform the near- and long-term strategies for RO missions and acquisitions by NOAA, EUMETSAT, and other CGMS partners. Most important among these questions is quantifying the benefit of increasing the quantity of RO observations. ROMEX is envisioned to consist of at least two three-month periods during which all available RO data are collected, processed, archived, and made available to the global community free of charge for research and testing. Although the primary purpose is to test the impact of varying numbers of RO observations on NWP, the three months of RO observations will be a rich data set for research on many atmospheric phenomena. The first ROMEX period (ROMEX-1) covers September through November 2022, which contains a number of tropical cyclones that can be studied. The international community and representatives of the IROWG are currently finalising the execution of ROMEX-1. RO data providers have sent their data to EUMETSAT for repackaging and, in some cases, reprocessing in a uniform way. The processed data (phase, bending angle, and refractivity) were made available to registered ROMEX participants by the ROM SAF. The data were also processed independently by both the UCAR COSMIC Data Analysis and Archive Center (CDAAC) and the NOAA STAR division.
The data are available to all participants at no charge, with the conditions that the providers be acknowledged and the data not be used for any commercial or operational purposes. This presentation will provide an overview of GNSS-RO data currently available from both public and commercial sources and introduce the rationale for ROMEX. We will then summarise the results obtained so far.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 1.34)

Presentation: Developing a forward operator for GNSS polarimetric radio occultation observations

Authors: Katrin Lonitz, Dr Sean Healy, Estel Cardellach, Ramon Padullés
Affiliations: ECMWF, ECMWF, Institute of Space Studies of Catalonia (IEEC)
GNSS radio occultation (GNSS-RO) measurements are now an established component of the global observing system, providing vertical profiles of temperature and water vapour content of the atmosphere. Increasing volumes of GNSS polarimetric radio occultation (GNSS-PRO) observations are becoming available with the introduction of GNSS receivers on low-Earth orbiters that can measure the GNSS signals in both the vertical and horizontal polarisation directions. These measurements extend the information content of conventional GNSS-RO and enable the retrieval of oriented hydrometeor particle information along the ray path (Cardellach et al., 2019). They are of potential interest for operational numerical weather prediction (NWP). As a first step towards using these observations in both data assimilation and model diagnostics, a forward operator for the GNSS-PRO observable, the polarimetric differential phase shift, has been developed, as shown by Hotta et al. (2024). In this forward operator, which is designed for operational NWP applications, ‘effective’ values of hydrometeor density and axis ratio are used to calculate the specific differential polarimetric phase shift (Kdp). Here, we show how large the impact is when refining these values for the different hydrometeors. We also explore a new formulation of Kdp based on particle scattering, using the hydrometeor habits provided in ARTS (Eriksson et al., 2018). The advantage of this approach is that the assumptions are consistent with those in other modules of the NWP models, such as the radiative transfer (Geer et al., 2021). Finally, we show the differences between the formulations for some case studies. The implications for assimilating GNSS-PRO observations will be discussed. Cardellach, E., et al. (2019). Sensing heavy precipitation with GNSS polarimetric radio occultations. Geophysical Research Letters, 46(2), 1024-1031. Eriksson, P., et al. 
(2018), A general database of hydrometeor single scattering properties at microwave and sub-millimetre wavelengths, Earth System Science Data, 10(3), 1301-1326. Hotta, D., K. Lonitz, and S. Healy (2024), Forward operator for polarimetric radio occultation measurements, Atmos. Meas. Tech., 17, 1075–1089. Geer, A. J., et al. (2021), Bulk hydrometeor optical properties for microwave and sub-millimetre radiative transfer in RTTOV-SCATT v13.0, GMD, 14, 7497–7526.
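The forward-modelled observable can be pictured as a path integral of the specific differential phase Kdp along the ray. The sketch below is purely illustrative (the Gaussian hydrometeor profile, the units, and the linear Kdp coefficient are all invented; the operator of Hotta et al. (2024) involves far more physics):

```python
import numpy as np

def differential_phase(s_km, kdp_deg_per_km):
    """One-way polarimetric phase shift: integrate Kdp along the ray path (trapezoid rule)."""
    s = np.asarray(s_km, dtype=float)
    k = np.asarray(kdp_deg_per_km, dtype=float)
    return float(np.sum(0.5 * (k[1:] + k[:-1]) * np.diff(s)))

s = np.linspace(0.0, 200.0, 401)            # path coordinate [km]
wc = np.exp(-((s - 100.0) / 30.0) ** 2)     # toy hydrometeor water-content profile
kdp = 0.05 * wc                             # invented coefficient [deg/km per unit content]
print(round(differential_phase(s, kdp), 3))  # accumulated phase shift [deg]
```

Refining the effective hydrometeor density and axis ratio, as discussed above, amounts to changing how the Kdp profile is computed from the model state before this integration step.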
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 1.34)

Presentation: Comprehensive analysis of spaceborne GNSS reflectometry for precision altimetry

Authors: Dr. Sajad Tabibi, Dr Raquel N. Buendia
Affiliations: Faculty of Science, Technology and Medicine, University of Luxembourg
Global Navigation Satellite System-Reflectometry (GNSS-R) has emerged as a versatile and cost-effective technique that complements traditional remote sensing methods. By using reflected GNSS signals, GNSS-R enables all-weather operation and supports the monitoring of diverse surface types. Its applications span soil moisture estimation, ocean altimetry, and ice dynamics monitoring. This study explores the potential of Grazing-Angle GNSS-R (GG-R) carrier-phase measurements for precision altimetry, focusing on retrieving sea-level anomalies (SLA) and monitoring polar regions. Data from Spire Global Inc.’s Radio Occultation (RO) constellation are compared with conventional radar altimetry missions, including Sentinel-3A/3B, Saral, and CryoSat-2. Over a period of more than two years, SLA data analyzed with 1-day intervals and 10 km spatial resolution using dual-frequency GPS measurements yielded an average RMSE of approximately 47 cm. The analysis highlights complementary strengths between the two methods, with GG-R providing valid measurements in scenarios where radar altimetry is unavailable, and vice versa. A focused evaluation of GG-R-specific collocated events confirmed the consistency of Spire’s retrievals, achieving an RMSE of 25 cm. These findings demonstrate GG-R’s ability to enhance spatio-temporal resolution and address coverage limitations in conventional systems, establishing it as a valuable tool for advancing altimetry in challenging environments.
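The quoted RMSE figures follow from a standard root-mean-square difference over collocated SLA pairs; a minimal sketch with made-up values:

```python
import numpy as np

def collocated_rmse(sla_a, sla_b):
    """Root-mean-square error between two collocated sea-level-anomaly series [m]."""
    d = np.asarray(sla_a, dtype=float) - np.asarray(sla_b, dtype=float)
    return float(np.sqrt(np.mean(d ** 2)))

# hypothetical collocated sea-level anomalies [m]
gg_r  = [0.12, -0.05, 0.30, 0.08]
radar = [0.02,  0.25, 0.10, 0.28]
print(collocated_rmse(gg_r, radar))
```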
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall K1)

Session: D.01.04 Using Earth Observation to develop Digital Twin Components for the Earth System - PART 2

Climate change represents one of the most urgent challenges facing society. The impacts of climate change on the Earth system and society, including rising sea levels, increasing ocean acidification, and more frequent and intense extreme events such as floods, heat waves and droughts, are expected not only to have a significant impact across different economic sectors and natural ecosystems, but also to endanger human lives and property, especially for the most vulnerable populations.

The latest advances in Earth Observation science and R&D activities are opening the door to a new generation of EO data products, novel applications and scientific breakthroughs, which can offer an advanced and holistic view of the Earth system, its processes, and its interactions with human activities and ecosystems. In particular, those EO developments together with new advances in sectorial modelling, computing capabilities, Artificial Intelligence (AI) and digital technologies offer excellent building blocks to realise EO-based Digital Twin Components (EO DTCs) of the Earth system. These digital twins shall offer high-precision digital replicas of Earth system components, boosting our capacity to understand the past and monitor the present state of the planet, assess changes, and simulate the potential evolution under different (what-if) scenarios at scales compatible with decision making.

This session will feature the latest developments from ESA’s EO-based DTCs, highlighting:
- Development of advanced EO products
- Integration of EO products from a range of sensors
- Innovative use of AI and ML
- Advanced data assimilation
- Development of tools to address needs of users and stakeholders
- Design of system architecture
- Creation of data analysis and visualization tools
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall K1)

Presentation: An EO-informed Digital Twin Component for Glaciers

Authors: Fabien Maussion, Julia Bizon, Inés Dussaillant, Alexander Fischer, Noel Gourmelen, Livia Jakob, Rischard Lane, Thomas Nagler, Samuel Nussbaumer, Carlos Pereira, Patrick Schmitt, Gabriele Schwaizer, James Thomas, Michael Zemp
Affiliations: University of Bristol, Earthwave, World Glacier Monitoring Service (WGMS), University of Innsbruck, ENVEO
Mountain glaciers are critical elements of the Earth’s hydrological and climate systems. The retreat and mass loss of glaciers globally not only contribute significantly to sea-level rise but also have profound implications for water resources, hydropower, agriculture, and natural hazards. The rapid changes in glaciers driven by climate change challenge our ability to monitor and address the associated risks effectively. To address these challenges, we present the Digital Twin Component for Glaciers (DTC Glaciers), a pioneering initiative under ESA’s Digital Twin Earth (DTE) program. Leveraging the latest in Earth Observation (EO) data, advanced modelling techniques, and AI, DTC Glaciers will assimilate heterogeneous information from in-situ observations and EO to produce a centralised product that transcends the capabilities of individual datasets. Users will be able to interrogate the DTC to derive valuable insights into glacier changes in area, volume, mass and runoff, and their implications for communities and ecosystems. In this presentation, we showcase how our DTC prototype can be used to address two main challenges faced by scientists and stakeholders in the Alps and in Iceland. The first challenge is the estimation of glacier runoff, which is calculated by integrating EO products and meteorological information into models. Here we show how a data assimilation platform informed by EO reduces uncertainties compared to currently available approaches to estimate daily runoff, a critical variable for water management in glaciated mountain basins, and essential information for hydropower operation and downstream irrigation. The second challenge relates to the fundamental capacity of DTCs to adapt to user actions and near real-time changing conditions in the physical world. 
In this presentation, we will show how our DTC prototype leverages cloud computing to allow interaction with the twin, permitting users to inform the tool with independent data - in a first step, in-situ observations of glacier mass-balance. DTC Glaciers aims not only at advancing glacier monitoring but also demonstrates the transformative potential of digital twins in addressing global climate challenges. While its initial focus is regional and at the demonstrator level, the scalable design of DTC Glaciers positions it as a blueprint for future global-scale implementations, ensuring its relevance for both scientific research and operational decision-making within the broader context of ESA’s Digital Twin Components and the Destination Earth (DestinE) initiative of the European Commission.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall K1)

Presentation: Digital Twin Earth: Coastal Processes and Extremes

Authors: Dr Daniel Morton, Jan Jackson, Martin Jones, Dr Steve Emsley, Anne-Laure Beck, Antoine Mangin, Jean-Michelle Gauzan, Jean-Michelle Rivet, Dinesh Kumar-Babu, Natascha Mohammedi, Dr Dominique Durand, Dr Andres Payo Garcia, Patrick Matgen, Jefferson Wong, Professor Ivan Haigh, Dr Hachem Kassem, Dr Claire Dufau, Laurine Maunier, Isabella Zough, Dr Oscar Serrano Gras
Affiliations: Argans Ltd, ACRI-ST, Covartech, AdwäisEO, LIST, CLS, British Geological Survey, The University of Southampton, Biosfera, ONACC
The European Space Agency's (ESA) Digital Twin Earth (DTE) project is a cutting-edge initiative to create a highly accurate digital replica of Earth, designed to simulate physical, biological and social systems and support the analysis of the planet's dynamics in near-real time. It will integrate vast amounts of data from satellite observations, ground measurements, advanced computation, AI/ML and process models to provide insights into Earth's systems, such as climate, oceans, forests, and human activities. Under the DTE project ARGANS (UK), with sister companies adwäisEO (Luxembourg) and ACRI-ST (France), partnered with Biosfera (Spain), the British Geological Survey (UK), CLS (France), COVARTEC (Norway), LIST (Luxembourg), ONACC (Cameroon) and the University of Southampton (UK), has been given the opportunity to develop a digital twin to represent Coastal Processes and Extremes. This involves designing and implementing a digital twin architecture within the ESA DestinE platform to showcase four coastal use cases: EO-supported models of (i) coastal erosion, (ii) coastal flooding, plus (iii) mangrove and (iv) sargassum dynamics, to understand their effects upon ecosystem health, biodiversity and consequent economic impacts. Outputs from the digital twin will enhance disaster preparedness and response by improving predictions of storm surges and flooding, by providing information to support evacuation scenarios, and by identifying vulnerable infrastructure and communities to enable pre-emptive measures. It will support climate change adaptation by tracking changes in coastline dynamics, identifying areas for adaptive infrastructure such as seawalls and green buffers, and in doing so help policy makers weigh trade-offs between development and environmental conservation under ‘what if?’ scenarios. 
It will advance environmental conservation by tracking marine and coastal habitats enabling targeted conservation efforts to support mangrove restoration and sargassum control bringing potential benefits of healthier marine ecosystems, increased biodiversity, sediment stabilisation and carbon storage. Accessible dynamic visualisations will make scenarios easy to understand and promote education and public awareness, allowing communities to participate in planning and advocate for their interests. This project began in early 2025 so here we report on early progress, technical challenges and solutions so far, so that other digital twin practitioners can learn from our experience. We shall contrast our digital twin with other coastal modelling and digital twin activities to emphasise the scientific and societal benefits brought by the digital twin concept and increase awareness of the wider activities in this scientific area. Finally, we shall share thoughts on how the coastal processes and extremes digital twin can integrate with partner projects to realise a fully integrated operational system of systems (SoS) to replicate the Earth’s dynamics.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall K1)

Presentation: Geohazard DTC: the GET-it project

Authors: Salvatore Stramondo, Hugues Brenot, Stefano Corradini, Arnau Folch, Gaetana Ganci, Fabrizio Pacini, Elisa Trasatti, Daniela Fucilla
Affiliations: Istituto Nazionale di Geofisica e Vulcanologia, Terradue, Consejo Superior de Investigaciones Científicas, Royal Belgian Institute for Space Aeronomy
The ESA GET-it (Geohazards Early Digital Twin Component) project proposes a holistic approach to a DTC (Digital Twin Component) system, building upon the exploitation of multi-sensor EO data and AI techniques. It is designed to leverage Copernicus data and advanced algorithms to generate information services for geohazards that address real needs of institutional and commercial stakeholders. GET-it benefits from the long-standing expertise of leading researchers from the well-established geohazard community represented by INGV (Istituto Nazionale di Geofisica e Vulcanologia), CSIC (Consejo Superior de Investigaciones Científicas) and BIRA (Royal Belgian Institute for Space Aeronomy), with over two decades of experience in integrating satellite, airborne, and ground-based observations with complex simulations to better understand geohazard processes and develop solutions for the preservation of lives and the protection of valuable assets. Terradue, the technological partner of GET-it, leads the development of innovative services for data-intensive applications. GET-it offers a portfolio of cutting-edge processors and products that fully exploit EO data across the whole spectrum of volcano- and seismic-related geohazards, in order to support information services. The Geohazard DTC is designed as a customizable environment supporting stakeholder communities in designing accurate and actionable adaptation strategies and mitigation measures. GET-it addresses the needs of, and targets, public institutions, decision makers and private customers (among others, aviation stakeholders, engine manufacturers, the insurance sector, road/infrastructure authorities, and energy providers) dealing with geohazards. The increased complexity of modern society and the key role of infrastructures (energy, transportation, and services) in the welfare of citizens demand proper management of the impact of different hazards. 
In particular, Critical Infrastructures (CI) are technological systems which ensure the production and delivery of primary services to citizens. The roadmap for satellite EO data in geohazards management and the expected developments of EO in the forthcoming decades were first traced at the International Forum on Satellite Earth Observation and Geohazards (the Santorini Conference) in 2012. The occurrence of a geohazard implies sudden, unpredictable, and cascading effects. The impact of geohazards on activities and assets depends on exposure and vulnerability. The Sendai Framework recommends a number of actions at the State level, based on the concept that government policies should evolve from merely managing disasters to managing risks, i.e., establishing effective prevention measures. Therefore, a fundamental, detailed comprehension of all risk elements relating to disasters is essential. This applies to geohazards, and it is the guiding principle behind the proposed Geohazard DTC deployed in GET-it. In the wider scope of the ESA DTC Programme element, GET-it specifically pertains to the creation of a "Digital Twin Earth" that simulates the Earth system based on EO data. This aims not only to provide a virtual representation of Earth but also to predict future environmental conditions and occurrences, focusing on natural disasters such as volcanic eruptions and earthquakes. The Geohazard DTC is a component of this larger framework and serves the following specific functions:
- What-If Analysis for Disaster Preparedness: a key feature of the DTC will be its ability to perform what-if analyses, allowing users to assess potential interventions and their impacts on disaster outcomes, thereby enhancing preparedness and mitigation strategies.
- Integration with Global Efforts: the DTC will be designed to integrate seamlessly with international monitoring and response efforts, providing a tool that complements and enhances global capabilities to manage geohazard risks.
- Support for ESA Policies and Directives: the development of the Geohazard DTC supports ESA’s policies on disaster risk reduction, climate change, and sustainable development, making it a strategic component of the Earthwatch Programme.
GET-it has engaged several stakeholder communities, including policy and decision makers, emergency managers and scientists. GET-it will be demonstrated in three use cases: the 2018 eruption of Mount Etna (Italy), the 2016 Central Italy earthquake sequence, and the 2021 eruption at La Palma (Canary Islands).
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall K1)

Presentation: Hydrology analyses in mountain basins for a Decision Support System in a Digital Twin of Alps

Authors: Matteo Dall'Amico, Maxim Lamare, Federico Di Paolo, Stefano Tasin, Nicolò Franceschetti, Luca Brocca, Silvia Barbetta, Sara Modanesi, Bianca Bonaccorsi, Jean-Philippe Malet, Clément Michoud, Thierry Oppikoffer, Philippe Bally
Affiliations: Waterjade Srl, Sinergise Solutions GmbH, Research Institute for Geo-Hydrological Protection, CNR-Irpi, Institut Terre et Environnement de Strasbourg - University of Strasbourg, Ecole et Observatoire des Sciences de la Terre - University of Strasbourg, Terranum srl, European Space Agency - Esrin
The Alps are the most densely populated mountain range in Europe. As a result, hydrological hazards constitute a major threat to human activity, and water resources play a central role in socio-economic development (agriculture, tourism, hydropower production...). Furthermore, the Alps are particularly sensitive to the impacts of climate change. Over the last century, temperatures have risen twice as fast as the northern-hemisphere average, whereas precipitation has increased non-linearly. Because of the increasing pressure on human settlements and infrastructure, implementing climate change adaptation strategies from the local to the regional scale is a strong priority for policy-makers. To support and improve the decision-making process, numerical decision support systems provide valuable information derived from observations or models to better manage increasing threats and vulnerabilities. For this reason, through the Digital Twin Earth programme (https://dte.esa.int/), ESA is encouraging the development of technological projects aimed at implementing operational Digital Twin ecosystems. The main objective of the ESA-funded Digital Twin of Alps project (https://digitaltwinalps.com/) is to provide a roadmap for the implementation of future Digital Twin Earth (DTE) instances, with a focus on the Alpine context. A demonstrator has been developed to act as a decision support system representing the major environment-related risks and impacts faced by populations living in the Alps, as well as water resource management indicators. Regarding hydrology, different parameters evaluated through Earth Observation, in-situ data and physical modeling are reported for reanalysis, monitoring, forecasting and decision-making purposes. 
Reanalysis and monitoring of snow parameters (snow depth, snow cover area, Snow Water Equivalent - SWE) and hydrological variables (soil moisture, river discharge) enable quasi-real-time tracking of the evolution of the water content in a basin. The nowcast (+3 days) of snowmelt is used as an input for landslide monitoring and risk assessment. Hydrology-related anomalies with respect to the historical mean (i.e., SWE, soil moisture, evapotranspiration and precipitation) enable a quick understanding of the current water budget in the user-selected area, and can be used as a starting point to predict future evolution (e.g., the SWE anomaly at the end of the winter can give information about the possibility of drought during the summer). Finally, two Decision Support Systems (DSSs) help the user evaluate possible future scenarios, to efficiently tackle the problem of water-related extreme events. The flood DSS evaluates the flooded area around a river as a function of precipitation return period, freezing level and soil moisture content. The drought DSS evaluates the river discharge in a section (expressed as percentiles calculated with respect to the average historical value over the 2002-2022 period) depending on temperature, precipitation, snow storage content and the presence of hydraulic works. The future development of a DTE will provide a comprehensive tool for monitoring and predicting extreme events related to natural hazards, enabling timely and effective mitigation of the associated risk.
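The percentile ranking underlying the drought DSS can be sketched as follows (discharge values are hypothetical; the operational DSS additionally conditions on temperature, precipitation, snow storage and hydraulic works):

```python
import numpy as np

def discharge_percentile(current, historical):
    """Percentile of the current river discharge within a historical climatology."""
    hist = np.sort(np.asarray(historical, dtype=float))
    rank = np.searchsorted(hist, current, side="right")
    return 100.0 * rank / hist.size

# hypothetical historical discharges for one river section [m^3/s]
climatology = list(range(1, 101))
print(discharge_percentile(25.0, climatology))   # low-flow conditions
```

A low percentile (e.g. below the 10th) would flag the section for the drought scenario evaluation described above.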
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall K1)

Presentation: Development of an Agriculture Digital Twin Infrastructure Model

Authors: Rajat Bindlish, Dr. Pang-wei Liu, Dr. Jessica Erlingis, Meijian Yang, Dr. Shahryar Ahmad, James Geiger, Luke Monhollon, Sujay Kumar, Alex Ruane, Zhengwei Yang, Gary Feng, Yanbo Huang
Affiliations: NASA Goddard Space Flight Center, Science Systems and Applications, Inc., Earth System Science Interdisciplinary Center, University of Maryland, NASA Goddard Institute for Space Studies, Center for Climate Systems Research, Climate School, Columbia University, Science Applications International Corporation, Kellogg Brown & Root (KBR), US Department of Agriculture, National Agricultural Statistics Service, US Department of Agriculture, Agriculture Research Service
Crop growth, yield, and production information is critical for commodity markets, food security, economic stability, and government policy formulation. Current agricultural models require weather forcings such as precipitation, temperature, and solar radiation, along with historical data as key field parameters, to develop estimates for field operation schedules, from seeding to harvesting, with fertilizer and herbicide treatments in between. Although current crop growth models provide rigorous modules to simulate crop development, they lack detailed water balance and hydrologic processes. Hydrology models, on the other hand, lack in-depth simulation of crop development stages and farm management. Coupling hydrology and crop growth models with interdependent constraints will leverage their complementary strengths to improve estimates of hydro-agricultural variables. The Land Information System (LIS) was coupled with the Decision Support System for Agrotechnology Transfer (DSSAT) model to estimate crop growth stages, biomass, and crop yield for different conditions. The coupled model framework can directly utilize LIS's built-in modules to assimilate remotely sensed data such as soil moisture and LAI to update and improve the model simulations. In this presentation, we will demonstrate the capability of the developed digital twin framework and explore the impact of weather (precipitation, soil moisture, temperature) and climate on crop yield. The framework will provide a powerful tool to support best management practices for farming systems and productivity outlooks for agricultural decision makers.
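As a toy picture of why such coupling matters, the sketch below links a single-bucket water balance to a water-limited biomass increment. This is not the LIS-DSSAT coupling itself (which resolves far richer hydrology and crop physiology); all parameters are invented for illustration:

```python
def simulate_season(precip_mm, et_mm=4.0, capacity_mm=150.0):
    """Toy coupling: a bucket water balance scales a water-limited daily biomass increment.

    All parameters are invented; LIS-DSSAT resolves full hydrology and crop physiology.
    """
    soil = capacity_mm / 2.0                            # start the season at half capacity
    biomass = 0.0
    for p in precip_mm:
        soil = min(capacity_mm, soil + p)               # rainfall fills the bucket
        stress = min(1.0, soil / (0.5 * capacity_mm))   # water-stress factor in [0, 1]
        soil = max(0.0, soil - et_mm * stress)          # evapotranspiration drains it
        biomass += 10.0 * stress                        # unstressed growth: 10 kg/ha/day
    return soil, biomass

soil_end, biomass_end = simulate_season([5.0] * 100)    # 100 days of 5 mm/day rain
print(soil_end, biomass_end)
```

In the real framework, the assimilation of remotely sensed soil moisture and LAI would continually correct the state variables that this loop only crudely propagates.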
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall K1)

Presentation: Towards a Digital Twin of Tropical Wetland Methane Emissions

Authors: Rob Parker, Cristina Ruiz Villena, Khunsa Fatima, Chandana Pantula, Nic Gedney, Paul Palmer
Affiliations: National Centre for Earth Observation, School of Physics and Astronomy, University Of Leicester, Met Office Hadley Centre, National Centre for Earth Observation, School of Geosciences, University of Edinburgh
Recent unexplained and significant increases in atmospheric methane (CH₄) highlight an increasingly urgent need to understand how tropical wetlands are responding to climate change and how potential methane-climate feedbacks are driving such increases. As we try to achieve Net Zero targets and meet commitments to the Methane Pledge, it is vital that we understand the background of underlying natural emissions upon which anthropogenic emissions are added. Climate feedbacks which accelerate natural emissions could undermine any benefit from reducing anthropogenic emissions and significantly change advice given to policymakers. To address this challenge, we propose to combine state-of-the-art modelling capabilities with the wealth of observational data and make intelligent use of machine-learning analysis methods. We will accomplish this by developing a novel, dedicated and focused Tropical Wetland Digital Twin. Environmental Digital Twins are an emerging paradigm, incorporating Earth System modelling, Earth Observation (EO) and Artificial Intelligence (AI), to provide new environmental insights and provide stakeholders with the ability to ask data-driven and evidence-led questions. Our Digital Twin will bring together our best capabilities for observing and predicting wetland emissions and make these results useful to researchers, policymakers or anyone who needs to ask questions about how the Earth System responds to changes. It will enable new types of analysis (emulators providing understanding and explainability); generation of new data (wetland extent maps); new modelling capabilities (wetland methane-climate feedbacks in climate projections); and improved decision support (widely democratised access to tools and data). This work details the first steps towards such a Digital Twin, focusing on our development of machine-learning based emulators for the JULES land surface model. 
The development of such emulators allows: fast and efficient simulations of large ensembles to explore the complex parameter space; exploration of driving factors through Explainable AI; model-data fusion incorporating EO data with model responses; and the deployment of these emulators as Digital Twin components within wider climate services.
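A minimal flavour of such an emulator: fit a fast regressor to input-output pairs from an invented wetland-emission "simulator" with a Q10 temperature response, then use the cheap surrogate for a what-if temperature sweep. The RandomForest here is only a stand-in for the project's actual emulator architecture, and all data are synthetic:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def wetland_ch4(temp_c, wet_frac, q10=3.0):
    """Invented 'simulator': emissions scale with wetland fraction and a Q10 temperature response."""
    return wet_frac * q10 ** ((temp_c - 20.0) / 10.0)

# training pairs sampled from the simulator
rng = np.random.default_rng(1)
T = rng.uniform(10.0, 35.0, 3000)   # air temperature [deg C]
W = rng.uniform(0.0, 1.0, 3000)     # wetland fraction
X = np.column_stack([T, W])
emulator = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, wetland_ch4(T, W))

# cheap what-if sweep: warming temperatures at a fixed wetland fraction
sweep = np.column_stack([np.linspace(15.0, 30.0, 4), np.full(4, 0.5)])
preds = emulator.predict(sweep)
print(preds.round(2))
```

Once trained, the surrogate answers such sweeps in milliseconds, which is what makes large-ensemble exploration and interactive Digital Twin queries feasible.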
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall E2)

Session: C.02.06 Swarm - ESA's extremely versatile magnetic field and geospace explorer

This session invites contributions dealing specifically with the Swarm mission: mission products and services, calibration, validation and instrument-related discussions. It is also the session in which the future and evolution of the mission, and the future beyond Swarm will be discussed. Particularly welcome are contributions highlighting observational synergies with other ESA and non-ESA missions (past, current and upcoming), in addition to ground-based observations and modelling.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall E2)

Presentation: Swarm Investigation of Ultra-Low-Frequency (ULF) Pulsation and Plasma Irregularity Signatures Potentially Associated With Natural Hazards

Authors: Georgios Balasis, Angelo De Santis, Constantinos Papadimitriou, Zoe Boutsi, Gianfranco Cianchini, Omiros Giannakis, Stelios M. Potirakis, Mioara Mandea
Affiliations: Institute for Astronomy, Astrophysics, Space Applications and Remote Sensing, National Observatory of Athens, Istituto Nazionale di Geofisica e Vulcanologia, Department of Physics, National and Kapodistrian University of Athens, Department of Electrical and Electronics Engineering, University of West Attica, Centre National d’Etudes Spatiales
Launched on 22 November 2013, Swarm is the fourth in a series of pioneering Earth Explorer missions and also the European Space Agency’s (ESA’s) first constellation to advance our understanding of the Earth’s magnetic field and the near-Earth electromagnetic environment. Swarm provides an ideal platform in the topside ionosphere for observing ultra-low-frequency (ULF) waves, as well as equatorial spread-F (ESF) events or plasma bubbles, and, thus, offers an excellent opportunity for space weather studies. For this purpose, a specialized time–frequency analysis (TFA) toolbox has been developed for deriving continuous pulsations (Pc), namely Pc1 (0.2–5 Hz) and Pc3 (22–100 mHz), as well as ionospheric plasma irregularity distribution maps. In this presentation, we focus on the ULF pulsation and ESF activity observed by the Swarm satellites during a time interval centered around the occurrence of the 24 August 2016 Central Italy M6 earthquake. Because the Swarm satellites passed close to the earthquake epicenter a few hours before the event, data from the mission offer a variety of interesting observations around the time of the earthquake, which could be associated with this geophysical event. Most notably, we observed an electron density perturbation 6 h prior to the earthquake, detected while the satellites were flying above Italy. The results obtained here pave the way for exploring other types of events using satellite data, as ionospheric processes and the space-based detection of natural hazards continue to be a multidisciplinary research area. The short- and long-term prospects are promising, even though our current understanding of the coupling between the lithosphere, atmosphere, and ionosphere remains limited. 
This applies not only to the generation of co-seismic and co-volcanic ionospheric disturbances, which are of particular interest, but also to other solid Earth phenomena, such as slow-slip earthquakes and landslides. To enhance our understanding of this complex coupling, it is essential to investigate the formation mechanisms of these ionospheric disturbances. Moreover, a deeper study of how this coupling varies with solar activity levels, atmospheric conditions, and other factors is necessary. In terms of observations, combining electromagnetic measurements with other data, such as high-resolution GNSS or gravity data, is crucial. This combination could provide new insights into the generation and evolution of ionospheric disturbances caused by natural hazard events and how they develop with altitude. For more details please see: Balasis, G.; De Santis, A.; Papadimitriou, C.; Boutsi, A.Z.; Cianchini, G.; Giannakis, O.; Potirakis, S.M.; Mandea, M. Swarm Investigation of Ultra-Low-Frequency (ULF) Pulsation and Plasma Irregularity Signatures Potentially Associated with Geophysical Activity. Remote Sensing 2024, 16, 3506. https://doi.org/10.3390/rs16183506.

Tuesday 24 June 16:15 - 17:45 (Hall E2)

Presentation: The Swarm Satellite Trio and Related Spacecraft for Exploring Earth’s Magnetic Field and Its Environment

Authors: Dr Anja Strømme, Dr. Nils Olsen
Affiliations: ESA, DTU
Launched in November 2013, the Swarm satellite trio has provided continuous, accurate measurements of the magnetic field for more than one solar cycle. These measurements are accompanied by plasma and electric field data, precise navigation, and accelerometer observations. Over the years, the constellation has undergone various orbital configurations. These include co-rotating orbits between the side-by-side flying satellites Swarm Alpha + Charlie and Swarm Bravo in 2014, “orthogonal orbital planes” (6 hours difference in Local Time) in 2017, counter-rotating orbits (12 hours difference) in 2021, and the current configuration (June 2025) with a 6-hour difference. These different configurations enable investigations into various aspects of Earth’s magnetic field and geospace, from small-scale to large-scale, covering both solar minimum and maximum conditions. In addition to providing simultaneous measurements of the geomagnetic field from different locations in space, the highly accurate absolute Swarm magnetic data allow for the calibration of data from navigational magnetometers onboard satellites like CryoSat-2, GRACE, GOCE, and GRACE-FO. This further enhances the space-time sampling of magnetic data provided by LEO satellites, though (due to the reduced absolute accuracy of these additional data) mainly for investigations of ionospheric and magnetospheric sources. Since May 2023, the polar-orbiting Swarm satellites have been augmented with the low-inclination MSS-1 (Macau Science Satellite 1), significantly extending the coverage in space and time. Additionally, the NanoMagSat constellation, consisting of one near-polar and two low-inclination satellites, is in the pipeline as an ESA Scout mission for launch within the next few years.
In this presentation, we will report on the status and future plans for the Swarm mission, including opportunities for cross-mission data calibration and validation, joint analysis of multi-spacecraft data, and plans for the upcoming years when low-altitude data will also allow for improved characterization of the lithospheric magnetic field.

Tuesday 24 June 16:15 - 17:45 (Hall E2)

Presentation: Supporting open science with VirES and SwarmPAL

Authors: Ashley Smith
Affiliations: University Of Edinburgh
As the Swarm mission enters its second decade of operations, we face increasing complexity and ambition. This comes from different directions: serving a greater number of higher level data products; more software tools and on-demand processing; growing importance of system and complexity science; and more spacecraft - both in the form of utilising platform magnetometers on other LEO missions and in synergy with related new missions such as the Macau Science Satellites and the NanoMagSat ESA Scout mission. The coordinated development of open source software is critical to tackling these challenges in a sustainable and collaborative manner. ESA has supported the development of the VirES system to aid in disseminating Swarm products (see related presentation "VirES: Data and model access for the Swarm mission and beyond"). The service has been instrumental in making Swarm more accessible by providing unified interfaces to the complex product portfolio. As well as data access, VirES can provide auxiliary information such as magnetic coordinates computed on demand, as well as forward evaluation of geomagnetic models. Such calculations can be performed on the server, with the implementation details hidden from the user. For more in-depth processing, such as deriving higher-level products, more flexibility is needed and is often more appropriate to perform on the client side. To this end, we are developing a Python package, SwarmPAL: the Swarm Product Algorithm Laboratory, as a home for higher level analysis code. SwarmPAL enables algorithms to be applied to data either from VirES or from any HAPI server, and includes simple visualisations for quick views of input and output data. We are approaching data access and analysis across multiple layers - web-based GUIs, APIs, Python libraries, Jupyter notebooks - backed by infrastructure including a free-to-use JupyterHub.
These are developed openly and foster more collaboration between scientists and software engineers, which is essential in enabling more open science.
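As a small illustration of the HAPI route mentioned above, the sketch below parses a HAPI-style CSV data response (each row an ISO-8601 timestamp followed by parameter values) using only the Python standard library. The sample values and the helper name `parse_hapi_csv` are invented for illustration; this is not SwarmPAL or VirES code.

```python
# Minimal sketch (not SwarmPAL/VirES code): parsing a HAPI-style CSV data
# response, in which each row is an ISO-8601 timestamp followed by values.
import csv
import io
from datetime import datetime

# Invented sample response: time plus a scalar field strength in nT.
sample = """2016-08-24T00:00:00.000Z,47112.3
2016-08-24T00:00:01.000Z,47112.9
2016-08-24T00:00:02.000Z,47113.4
"""

def parse_hapi_csv(text):
    """Return (times, values) lists from a HAPI CSV data response."""
    times, values = [], []
    for row in csv.reader(io.StringIO(text)):
        # fromisoformat on Python < 3.11 does not accept a trailing 'Z'.
        times.append(datetime.fromisoformat(row[0].replace("Z", "+00:00")))
        values.append(float(row[1]))
    return times, values

times, f_nt = parse_hapi_csv(sample)
print(len(times), f_nt[0])
```

In practice a client would fetch such a response from a HAPI `data` endpoint and hand the parsed arrays to an analysis routine.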

Tuesday 24 June 16:15 - 17:45 (Hall E2)

Presentation: Lessons Learnt From Building a DGRF 2020 Candidate Model (and Parent 2013-2024 Model) Entirely Based on Swarm ASM Experimental Vector Mode Data

Authors: Gauthier Hulot, Louis Chauvet, Robin Deborde, Jean-Michel Léger, Thomas Jager
Affiliations: Université Paris Cité, Institut De Physique Du Globe De Paris, CNRS, CEA-Leti, Université Grenoble Alpes, MINATEC
ESA Swarm satellites carry a magnetometry payload consisting of an absolute scalar magnetometer (ASM), a relative fluxgate vector magnetometer (VFM), and a set of star trackers (STR). The primary role of the ASM is to provide precise 1 Hz absolute field intensity measurements, while the VFM and STR provide the additional data needed to accurately reconstruct the vector field in a well-defined reference frame. This magnetometry payload has provided a remarkable set of nominal vector data, which has been used extensively for multiple investigations. Each ASM instrument, however, can also produce its own self-calibrated 1 Hz experimental vector data, or, when requested, 250 Hz burst-mode scalar data. Self-calibrated 1 Hz experimental vector data have routinely been produced ever since launch and are still produced whenever the ASM instruments are not in burst mode. The availability of such an alternative source of calibrated magnetic vector data on board the Swarm satellites provides a unique opportunity to validate the nominal data of the mission, either by directly comparing VFM based nominal vector data with ASM experimental vector mode data or by building “twin” field models that can next also be compared. Here we report on the lessons learnt from such intercomparisons, which we carried out in the process of building a DGRF 2020 candidate model in response to the IGRF 2025 call for candidate models. These comparisons revealed slight disagreements between both data sets even after correcting for the already well-known Sun-related thermoelectric effect that affects both instruments. This slight disagreement cannot be attributed to a similar effect and is best explained in terms of a subtle calibration issue affecting both instruments in opposite ways. We designed an empirical approach to independently “post-calibrate” each data set, and showed that once “post-calibrated”, both data sets are in significantly better agreement.
This strategy was implemented to build a parent field model entirely based on “post-calibrated” ASM experimental vector mode data, which we used to propose our DGRF 2020 candidate model. This candidate model turns out to be in striking agreement with the recently released official DGRF 2020 model. This agreement is all the more remarkable given that the many other models that went into the making of the official DGRF model made use of different data sets, either nominal Swarm vector field data or data from other satellites and ground observatories. This suggests that the calibration strategies currently used on board missions such as Swarm could still be improved to produce data sets of even better quality.

Tuesday 24 June 16:15 - 17:45 (Hall E2)

Presentation: Ocean-induced magnetic field: Swarm data processing and field modelling experiments

Authors: Chris Finlay, C Kloss, R.M. Blangsbøll, N. Olsen, J. Velímský, O. Kureš, V. Ucekajová
Affiliations: DTU Space, Charles University Prague
The motions of the ocean through Earth’s core-generated magnetic field produce electrical currents that depend on the details of the ocean flow as well as on the ocean’s temperature and salinity, via its electrical conductivity. Low-Earth orbit magnetic survey satellites such as the Swarm trio record magnetic fields resulting from the integrated effects of such motionally induced currents and their closure in the electrically conducting solid Earth. These ocean-induced magnetic fields (OIMF) thus carry remote information on ocean flow dynamics, temperatures and salinity. OIMF signals due to a number of ocean tidal components have now been convincingly extracted, but detection of the OIMF signal due to the more general ocean circulation has remained elusive. In this contribution we present ongoing efforts in the Swarm for Ocean Dynamics project to detect the OIMF signal using observations made by the Swarm satellites. This involves (i) a scheme to correct as far as possible for other geomagnetic signals (from the core, crust, ionosphere and magnetosphere), (ii) time-dependent field modelling with a focus on spherical harmonic degrees 15 to 30 and periods of 60 days up to 5 years, with model regularization designed for studies of the OIMF, and (iii) post-processing filtering to highlight the OIMF signal. We will present results of experiments with both synthetic satellite data and real Swarm observations. A particular focus will be regions such as the Indian Ocean where strong OIMF signals are expected.

Tuesday 24 June 16:15 - 17:45 (Hall E2)

Presentation: Large-scale ionosphere and magnetospheric currents during the May 2024 storm obtained from assimilation of magnetic ground and multi-satellite data

Authors: Dr. Alexander Grayver, Jingtao Min, Nils Olsen, Federico Munch, Ashley Smith
Affiliations: University Of Cologne, ETH Zurich, DTU Space, University of Edinburgh
We present a high-cadence (20 min) model of mid-latitude ionospheric and magnetospheric currents obtained using a novel geomagnetic modelling method based on the variational assimilation principle. We use both ground and multi-satellite magnetic data from the Swarm, CryoSat-2, GRACE-FO and Macau Science Satellite (MSS) missions to enable consistent separation of ionosphere and magnetosphere sources with an unprecedented space-time resolution. The data are fit to a set of spatial basis functions that represent solutions of the governing Maxwell’s equations for electric currents in the ionosphere and magnetosphere, parameterized with spherical harmonic modes. Using a prior 3-D subsurface conductivity model allows for a self-consistent co-modelling of the secondary, internal magnetic field components induced in the 3-D solid Earth and oceans. The resulting framework enables the retrieval of magnetic and electric fields across the model domain, facilitating analysis of ionosphere-magnetosphere interactions during all phases of the May 2024 storm, and supports the modelling of ground electric fields associated with space weather hazards.

Tuesday 24 June 16:15 - 17:45 (Hall M1/M2)

Session: A.05.05 Tipping points and abrupt change in the Earth system

There are elements of the Earth system, including ecosystems, that can undergo rapid transition and reorganisation in response to small changes in forcings. This process is commonly known as crossing a tipping point. Such transitions may be abrupt and irreversible, and some could feed back to climate change, representing an uncertainty in projections of global warming. Their potentially severe outcomes at local scales - such as unprecedented weather, ecosystem loss, extreme temperatures and increased frequency of droughts and fires - may be particularly challenging for humans and other species to adapt to, worsening the risk that climate change poses. Combining satellite-based Earth Observation (EO) datasets with numerical model simulations is a promising avenue of research to investigate tipping elements, and a growing number of studies have applied tipping point theory to satellite time series to explore the changing resilience of tipping systems in the biosphere as an early warning indicator of approaching a tipping point. This session invites abstracts on tipping points and resilience studies based on or incorporating EO, as well as recommendations from modelling groups that can be taken up by the remote sensing community, for example on early warning signals, products needed for model assimilation or novel tipping systems to investigate further using EO.

Tuesday 24 June 16:15 - 17:45 (Hall M1/M2)

Presentation: Shifting Dynamics: Decoupling of Carbon and Water Cycles in the Amazon Rainforest

Authors: Sarah Worden, Dr. Sassan Saatchi, Dr. Nima Madani, Dr. Yan Yang
Affiliations: NASA Jet Propulsion Laboratory/ California Institute of Technology, UCLA / JIFRESSE, Ctrees
To survive, plants must employ a range of strategies under different conditions to balance carbon uptake for photosynthesis with water loss via transpiration. Across forest stands, these processes—represented by gross primary productivity (GPP) and evapotranspiration (ET), respectively—are key ecosystem fluxes that determine vegetation water use efficiency. These fluxes are generally assumed to be tightly coupled across all vegetation types as plants exchange water for carbon via stomatal (small pores at the leaf surface) conductance, and models often explicitly incorporate this coupling. Changes in these fluxes significantly affect the terrestrial biosphere’s capacity to respond and to influence future regional and global climate change. Here, we evaluate 40 years of the relationship between GPP and ET across the Amazon Basin, representing the carbon and water fluxes associated with photosynthesis and transpiration, respectively. We show that 64% of the Amazon Basin exhibits weak (R<0.3) or negative (R<0) correlations between GPP and ET. We verify these results using direct satellite measurements of photosynthesis from solar-induced fluorescence (SIF) measurements and ET calculated using water-balance measurements, as well as flux tower GEP and ET from the Large-Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) flux tower sites. To further refine our analysis, we examine GPP and ET correlations across regions categorized by average maximum cumulative water deficit (WD) levels. The areas with the weakest WD (i.e., the most water) display the weakest (or even negative) correlations, while the areas with the strongest WD display the strongest correlations. When analyzing the seasonal behavior of GPP and ET across these WD categories, we additionally find that GPP and ET are strongly coupled seasonally in the strong WD region.
Additionally, differences in the timing of seasonal increases (and decreases) in GPP and ET drive anti-correlations seen within the weaker WD regions. This is primarily due to larger seasonal variability in ET as GPP does not show large seasonal variability within the weaker WD regions. Finally, we demonstrate that GPP and ET coupling has weakened over time, with the most pronounced changes in the southwestern Amazon. This region has experienced long-term increases in VPD, severe droughts and increasing vulnerability over recent decades. Such changes in coupling strength may signal a decline in the ecosystem resilience under pressures from climate and land use changes. Understanding the relative contributions of photosynthesis, transpiration, and evaporation to this decoupling, along with distinguishing climate and anthropogenic drivers, is essential to better assess shifts in the dynamics of the Amazon carbon-climate system.

Tuesday 24 June 16:15 - 17:45 (Hall M1/M2)

Presentation: Suitability of Remotely Sensed Vegetation Indicators for CSD-based Resilience Analyses of Tropical Forests

Authors: Lana Blaschke, Sebastian Bathiany, Marina Hirota, Niklas Boers
Affiliations: Technical University Of Munich, Potsdam Institute for Climate Impact Research, Universidade Federal de Santa Catarina
Tropical forests are vital for climate change mitigation as carbon sinks. Yet, research suggests that climate change, deforestation and other human influences threaten these systems, potentially pushing them across a tipping point where the tropical vegetation might collapse into a low-tree-cover state. Signs of this trend are reductions in resilience, defined as the system's capability to recover from perturbations. If resilience decreases, dynamical systems theory implies that critical slowing down (CSD) induces changes in statistical properties such as variance and autocorrelation. This makes it possible to examine resilience changes indirectly, in the absence of observations of strong perturbations. However, deriving estimates of resilience changes based on CSD imposes several assumptions on the system under observation. For tropical vegetation, it is not obvious that these assumptions are fulfilled. Moreover, the conditions of tropical rainforests make observing the vegetation difficult: cloud cover, aerosols, and the dense vegetation hinder the reliable retrieval of vegetation indicators, especially from data gathered in the optical spectrum. This implies that the data might not be suitable for resilience analyses based on CSD, and it is not guaranteed that theoretical estimators of resilience align with actual recovery rates. We investigate the different assumptions of CSD and test them on a diverse set of remotely sensed vegetation indicators. Thereby, we establish a framework for selecting ideal combinations of theoretical estimators and vegetation indicators. Based on this selection, we assess resilience changes of the tropical forests in recent years.
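The CSD signature described above, rising variance and lag-1 autocorrelation as resilience declines, can be sketched on a synthetic AR(1) series whose persistence increases over time. All names and parameter choices here are illustrative, not the authors' pipeline.

```python
# Illustrative sketch (not the authors' pipeline): the two CSD indicators,
# windowed variance and lag-1 autocorrelation, computed on a synthetic AR(1)
# series whose coefficient phi ramps upward, mimicking a loss of resilience.
import random
from statistics import mean, pvariance

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation of a sequence."""
    m = mean(x)
    num = sum((a - m) * (b - m) for a, b in zip(x[:-1], x[1:]))
    den = sum((v - m) ** 2 for v in x)
    return num / den

random.seed(0)
n = 4000
series = [0.0]
for i in range(1, n):
    phi = 0.2 + 0.75 * i / n  # AR(1) coefficient drifts from 0.2 towards 0.95
    series.append(phi * series[-1] + random.gauss(0, 1))

window = 500
early, late = series[:window], series[-window:]
# As the tipping point is approached, both indicators rise.
print(lag1_autocorr(early), lag1_autocorr(late))
print(pvariance(early), pvariance(late))
```

On real vegetation indicators the same two statistics would be computed in sliding windows over a detrended time series, which is where the data-quality assumptions discussed in the abstract come into play.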

Tuesday 24 June 16:15 - 17:45 (Hall M1/M2)

Presentation: Tipping points in tidal wetland vulnerability: A multi-sensor, multi-scale forecasting approach

Authors: Rusty Feagin, Raymond Najjar, Wenzhe Jiao, Maria Herrmann, Joshua Lerner
Affiliations: Texas A&M University, Pennsylvania State University
Tidal wetlands are highly productive ecosystems and play disproportionately large roles in coastal biology and biogeochemical cycling relative to their small areas. However, they are also vulnerable to episodic disturbances and can rapidly transition from a terrestrial to aquatic state. While site-specific studies can identify the reasons for vegetative productivity loss, no existing methods can predict when a tidal wetland will reach a tipping point, or how that tipping point will then propagate across broader spatial scales. We are using a remote sensing, multi-scale approach to (1) detect tipping points as tidal wetlands rapidly transition from a terrestrial to aquatic state, (2) detect the micro-tipping points that accumulate to precipitate a broader-scaled transition, and (3) forecast future tipping points in tidal wetland vulnerability before they happen. We are addressing these objectives across all tidal wetlands in the conterminous United States (CONUS), over the period 2000–2025. Our approach detects and identifies tipping points in several off-the-shelf remote sensing products using a novel synthesis of Early Warning Signals (EWS) analysis and traditional ecosystem resilience metrics. We are using a gross primary productivity (GPP) dataset from the publicly available Oak Ridge National Laboratory Distributed Active Archive Center for Biogeochemical Dynamics (DAAC) and the Harmonized Landsat and Sentinel-2 (HLS) dataset. We are using the GPP dataset to identify historical tipping points across the CONUS at 250 m and 16-day resolution, and then exploring them in greater spatial and temporal detail using the HLS dataset (10–30 m and 2–3 day resolution). Using the knowledge gleaned from this work, we are then predicting future tipping points. With these datasets, we are also testing several hypotheses about tipping points.
We have hypothesized that we can predict the timing of an approaching tipping point when (H1) the frequency of perturbations to tidal wetland productivity increases, (H2) the effect size (magnitude) of the productivity response increases, (H3) the return time for productivity recovery to an equilibrium condition increases, and (H4) the return time for productivity recovery, relative to the effect size, increases. We have additionally hypothesized that (H5) a sudden drop in vegetative cover and productivity at finer scales warns of a potential micro-tipping point origin, and (H6) an increasing spatial variance warns of cascading micro-tipping points that accumulate into coarser-scaled transitions. Our preliminary results suggest that several of these hypotheses are valid, but that others can be rejected. Moreover, while scientists often think of tipping points as causing ecosystem loss, we have found that change can also occur in a positive direction. In the case of our work, we have detected both tipping points that lead towards decreasing productivity and wetland loss, as well as tipping points that lead towards increasing productivity and wetland gain. The tipping points that we have found correlate reasonably well with disturbance frequency, but also with changes in several long-term meteorological trends. Our maps and analyses show spatial and temporal variability in tipping points across the CONUS, but also how micro-tipping points cascade to initiate broader scale change in tidal wetland cover. This NASA-supported work helps extend our current understanding of using remote sensing-based tipping point analyses in mixed aquatic and terrestrial ecosystems. This work also helps scientists to better link wetland carbon losses with broader-scale implications for ocean biogeochemistry.
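Hypotheses H3 and H4 hinge on a return-time measure. A toy version (hypothetical, not the study's actual metric) applied to an exponentially relaxing productivity anomaly shows how slower recovery maps to a larger return time:

```python
# Toy illustration (hypothetical, not the study's metric) of H3: the return
# time for productivity to recover to equilibrium after a perturbation.
def return_time(series, equilibrium, tol, t_perturb):
    """Number of steps after t_perturb until the series first comes
    within tol of equilibrium, or None if it never recovers."""
    for t in range(t_perturb, len(series)):
        if abs(series[t] - equilibrium) <= tol:
            return t - t_perturb
    return None

def simulate(r, steps=50):
    """Unit perturbation at t=0 relaxing as x_{t+1} = r * x_t."""
    x, out = 1.0, []
    for _ in range(steps):
        out.append(x)
        x *= r
    return out

fast = return_time(simulate(0.5), equilibrium=0.0, tol=0.05, t_perturb=0)
slow = return_time(simulate(0.9), equilibrium=0.0, tol=0.05, t_perturb=0)
print(fast, slow)  # a resilient (fast-relaxing) system recovers sooner
```

An increasing trend in such return times across successive disturbances would be the H3 warning signal.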

Tuesday 24 June 16:15 - 17:45 (Hall M1/M2)

Presentation: Tipping Points in Southern Ocean Overturning

Authors: Rafael Catany, PhD Alessandro Silvano, Professor Hugues Goose, Professor Alberto Naveira Garabato, PhD Sarah Connors
Affiliations: Albavalor, University of Southampton, Université catholique de Louvain, ESA ECSAT
The Southern Ocean Overturning Circulation (SOOC) is a critical component of Earth's climate system, regulating oceanic heat and carbon uptake over decadal to millennial timescales and influencing global sea level rise. It consists of two main overturning cells: the upper and lower cells. In the upper branch, Subantarctic Mode Water (SAMW) and Antarctic Intermediate Water (AAIW) are subducted into intermediate depths (500–2000 m), contributing to the uptake and storage of over 70% of the anthropogenic heat and 40% of the carbon uptake from the atmosphere. This process regulates atmospheric CO2 over decadal to multidecadal timescales. Antarctic Bottom Water (AABW) forms near Antarctica and replenishes the abyssal layers of the ocean, allowing for long-term carbon storage. This mechanism stabilises Earth's climate and protects Antarctic glaciers and ice sheets from warm ocean waters, helping to reduce mass loss in most regions. A slowdown or collapse of the SOOC represents a tipping point with far-reaching consequences, including accelerated global warming, sea level rise from increased Antarctic Ice Sheet melting, and ecosystem loss due to reduced oxygen and nutrient supplies to the abyssal ocean. Despite its critical importance, understanding the SOOC and its tipping points remains limited due to the challenges of observing subsurface properties beneath sea ice and sparse in situ data in the remote Southern Ocean. Traditional models often have biases in accurately simulating the dynamics of Antarctic sea ice and AABW formation. This limitation makes it harder to identify early warning signals of potential tipping points. In this presentation, the Tipping Points in Southern Ocean Overturning (TiPSOO) project will be presented. TiPSOO addresses these challenges by employing advanced Earth Observation (EO) and modelling approaches to study the dynamics and vulnerabilities of the SOOC.
TiPSOO leverages satellite data from the ESA Climate Change Initiative (CCI)—including temperature, salinity, sea ice, altimetry, and GRACE-derived mass changes—to detect critical variations in sea surface height and density. By integrating these data with idealised modelling experiments, TiPSOO seeks to identify early warning signals and collapse fingerprints associated with AABW formation and SOOC disruptions. The main objectives of TiPSOO are to evaluate how sea ice dynamics and freshwater fluxes affect the formation of AABW and enhance our scientific understanding of tipping points in the Southern Ocean. This project will also conduct feasibility studies using Earth Observation (EO) data to monitor changes in the SOOC. Additionally, TiPSOO will demonstrate innovative EO methods for detecting tipping points in the SOOC and analysing changes in AABW formation over the past two decades. The findings from the TiPSOO project will significantly enhance our scientific understanding of tipping points in the Southern Ocean. This research will provide critical data and information for climate policymakers and strengthen confidence in IPCC assessments. By identifying and quantifying the risks associated with the slowdown or collapse of the SOOC, TiPSOO aims to improve resilience in climate systems and support informed decision-making. The project's multidisciplinary approach, which combines satellite data, modelling expertise, and collaborative partnerships, ensures that we gain robust and actionable insights into one of the most pressing challenges in climate science.

Tuesday 24 June 16:15 - 17:45 (Hall M1/M2)

Presentation: An Early Warning System for Tipping Points in the Greenland Ice Sheet and the North Atlantic Subpolar Gyre: Exploring the Edge of the Possible with AEROSTATS

Authors: Christine Gommenginger, David McCann, Adrien Martin, José Marquez Martinez, Samantha Lavender, Christian Buckingham, Alice Marzocchi, Louis Clément, Simon Josey
Affiliations: National Oceanography Centre, NOVELTIS, Radarmetrics, Pixalytics
The climate system is approaching dangerous tipping points, with the predicted collapse within decades of critical components like the Greenland Ice Sheet and the North Atlantic Subpolar Gyre posing severe risks to European weather and global climate stability. Despite the pressing need for early warning systems and robust predictive models, significant gaps persist between current observational capabilities and the data required to enhance climate forecasting. This disconnect hampers efforts to build confidence in climate predictions and implement effective mitigation and adaptation strategies. Earth-orbiting satellites and in situ observations provide valuable insights into broad-scale changes in the ocean, cryosphere and atmosphere. However, these systems often struggle to capture extreme events and small-scale processes in complex, dynamic regions such as sea ice margins. These regions play a crucial role in governing water, heat and momentum exchanges at the ocean-cryosphere-atmosphere interfaces that connect the Greenland Ice Sheet and the North Atlantic Subpolar Gyre. This paper introduces AEROSTATS (Aerial Experimental Remote sensing of Ocean Salinity, heaT, Advection, and Thermohaline Shifts), a UK-led, innovation-driven international initiative to demonstrate long-term, low-cost, low-carbon monitoring in the dynamic Greenland ocean-ice margins. Funded as a high-risk, forward-thinking project, AEROSTATS leverages autonomous platforms, airborne systems, spaceborne sensors, and high-resolution models and reanalyses to address critical observational challenges in this region. Central to the initiative is a groundbreaking field campaign in 2028, featuring an extensive deployment of autonomous sensors to provide year-round observations of total surface current vectors, winds, salinity, ocean colour, and sea surface temperature—key variables governing exchanges in these critical regions. 
By integrating multi-platform observations with high-resolution models, reanalysis data, and advanced digital tools like machine learning, AEROSTATS represents a major step forward towards improving our understanding and predictive capability for tipping points. Starting in 2025, the five-year project is actively seeking collaborations that can amplify its impact, for example through coincident deployments of airborne demonstrators, High-Altitude Pseudo-Satellites, and autonomous aerial, surface or subsurface vehicles. AEROSTATS represents a transformative step in developing Earth Observation data to advance climate system understanding and build the robust monitoring systems needed to confidently forecast and mitigate the impacts of catastrophic climate tipping points.

Tuesday 24 June 16:15 - 17:45 (Hall M1/M2)

Presentation: Earth Observations Reveal Mixing Anomalies and Regime Shifts in Dimictic Lakes

Authors: Elisa Calamita, Michael Brechbühler, Iestyn Woolway, Dr. Clément Albergel, Laura Carrea, Daniel Odermatt
Affiliations: Eawag, Swiss Federal Institute of Aquatic Science and Technology, University of Tübingen, Bangor University, European Space Agency Climate Office, University of Reading
Climate change significantly impacts lake ecosystems, driving responses that range from gradual adaptations to abrupt shifts in ecological states. These transitions, often triggered when lakes cross critical tipping points, can lead to profound modifications in established dynamics, cascading through ecosystem processes and affecting services to human well-being. Such shifts in lake behaviour can disrupt biodiversity, nutrient cycling, and water quality, with far-reaching implications for ecological stability and societal reliance on these systems. Despite their critical importance, a comprehensive understanding of climate-induced lake shifts remains limited, largely due to a lack of systematic global data. To address this gap, we conducted a literature review focusing on climate-related lake shifts and explored the contributions of satellite Earth Observation (EO) in this research domain. Our analysis revealed that only 9% of studies on lake shifts utilize EO data, though its application has grown since 2012. EO data is most commonly used to assess shifts in surface extent, ice coverage, or phytoplankton phenology. Beyond direct observations, EO data can also provide indirect insights into processes such as the vertical mixing of lake water, which can be inferred from surface thermal patterns. To demonstrate this, we used EO data alone to detect and study mixing regime shifts. Mixing regimes regulate nutrient distribution, energy flow, and oxygen levels in lakes. Specifically, we investigated dimictic lakes, which typically stratify during summer and exhibit inverse stratification in winter. Under warming conditions, these lakes are increasingly at risk of shifting to a monomictic regime, where winter stratification fails, and fall mixing continues until spring. Such regime shifts can disrupt nutrient cycling and oxygen dynamics, with severe ecological consequences.
To track mixing anomalies, we utilized satellite-derived lake surface water temperatures and a thermal front tracking method to identify patterns indicative of failed winter stratification. By analyzing global EO data from 2000 to 2022, we present the first comprehensive assessment of mixing anomalies in dimictic lakes. Our results demonstrate that spatial gradients in EO data are effective for detecting these anomalies on a global scale. Moreover, we found that lakes that exhibit higher frequencies of mixing anomalies are more susceptible to regime shifts under ongoing climate warming. Our findings highlight the potential of EO as a tool for early detection and monitoring of lake ecosystem shifts. Although EO data lacks intrinsic predictive capabilities, its ability to identify lakes prone to mixing regime shifts underscores its utility as an early warning system. We propose a susceptibility index based on the statistics of the winter stratification length over the past two decades, showing a positive correlation between the number of mixing anomalies and the likelihood of future regime shifts. By identifying lakes experiencing mixing anomalies, EO data can play a pivotal role in monitoring ecosystem stability and anticipating the impacts of climate change on global lake systems. This approach offers a pathway to enhance adaptive management and conservation efforts for freshwater ecosystems.
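The proposed susceptibility index reduces to a simple statistic over the satellite-derived record of winter stratification lengths. The sketch below is an illustrative assumption, not the authors' actual definition: the function names and the 30-day anomaly threshold are hypothetical.

```python
def count_mixing_anomalies(strat_lengths_days, min_days=30):
    """Count winters whose inverse stratification lasted less than
    `min_days` (the threshold here is an illustrative assumption)."""
    return sum(1 for d in strat_lengths_days if d < min_days)

def susceptibility_index(strat_lengths_days, min_days=30):
    """Fraction of winters in the record that show a mixing anomaly;
    higher values indicate lakes more prone to a regime shift."""
    n = len(strat_lengths_days)
    return count_mixing_anomalies(strat_lengths_days, min_days) / n if n else 0.0

# Six winters of stratification length (days); two fall below the threshold.
lengths = [80, 75, 10, 65, 0, 70]
print(round(susceptibility_index(lengths), 2))  # 0.33
```

The index in the abstract correlates this anomaly frequency with the likelihood of a future shift to a monomictic regime.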

Tuesday 24 June 16:15 - 17:45 (Room 0.94/0.95)

Session: A.05.01 Using earth observation to assess climate change in cities

The Intergovernmental Panel on Climate Change (IPCC) Sixth Assessment report concluded that "Evidence from urban and rural settlements is unequivocal; climate impacts are felt disproportionately in urban communities, with the most economically and socially marginalised being most affected (high confidence)." (IPCC, WG2, Chapter 6)

In its Seventh Assessment Cycle, the IPCC will produce a Special Report on Climate Change and Cities to further develop the role of climate and its interactions with the urban environment. The report will cover topics that include:
- Biophysical climate changes;
- Impacts and risks, including losses and damages and compounding and cascading aspects;
- Sectoral development, adaptation, mitigation and responses to losses and damages;
- Energy and emissions;
- Governance, policy, institutions, planning and finance; and
- Civil society aspects.

This session calls for abstracts demonstrating how Earth Observation is being used to understand how climate change is impacting cities, and how EO can support adaptation to and mitigation of further climate change at the city scale. Abstracts should explicitly link the use of EO data to an assessment of its usefulness for fine-scale urban and city information.

Tuesday 24 June 16:15 - 17:45 (Room 0.94/0.95)

Presentation: Analysis of Local Climate Zones and the Urban Heat Island through Geomatic Techniques: the Italy - Vietnam LCZ-UHI-GEO project

Authors: Prof Maria Antonia Brovelli, Mr. Matej Žgela, Alberto Vavassori, Deodato Tapete, Dr. Patrizia Sacco, Dr. Thy Pham Thi Mai, Dr. Nguyen Lam Dao
Affiliations: Politecnico di Milano, Agenzia Spaziale Italiana, Vietnam National Space Center
The localised temperature increase in urban areas compared to the surrounding rural or natural environments is known as the Urban Heat Island (UHI) phenomenon. The spatial and temporal distribution of temperatures varies within cities depending on many factors, including the morphology of built-up areas, the construction materials, and the presence and distribution of vegetation. The Local Climate Zone (LCZ) concept is a well-established system that classifies urban and suburban areas based on their physical and thermal characteristics. LCZ maps are commonly generated by processing multispectral satellite images, integrated with morphological information on the built environment and vegetation. Recently, the results from the project “Local Climate Zones in Open Data Cube” (LCZ-ODC), a collaboration between the Italian Space Agency (ASI) and Politecnico di Milano (POLIMI) – Agreement n. 2022-30-HH.0 – in the framework of ASI’s program “Innovation for Downstream Preparation for Science (I4DP_SCIENCE)”, have demonstrated that the use of hyperspectral images from ASI’s PRISMA mission can significantly improve the accuracy of LCZ maps compared to traditional multispectral Sentinel-2 images. Within such a project, a methodological procedure was proposed and implemented for generating LCZ maps using PRISMA and Sentinel-2 satellite images, integrated with multiple geospatial layers known as Urban Canopy Parameters (UCP), which describe the morphological characteristics of urban surfaces. This procedure was tested in the Metropolitan City of Milan, Italy, showing better performance in terms of map accuracy than the state-of-the-art LCZ Generator tool. Air temperature distribution and differences among the LCZs were also assessed through statistical tests, enabling the quantification of the maximum UHI intensity across different times of the day and seasons. 
Building upon the results of the LCZ-ODC project, the project “Analysis of Local Climate Zones and the Urban Heat Island through Geomatic Techniques” (LCZ-UHI-GEO) began in 2024 and is expected to conclude in 2026. The project is funded by the Italian Ministry of Foreign Affairs and International Cooperation (MAECI) and Vietnam’s Department of International Cooperation of the Ministry of Science and Technology (MOST). The project involves POLIMI, ASI, and the Vietnam National Space Center (VNSC). LCZ-UHI-GEO aims to replicate and expand the methodologies developed in LCZ-ODC and to test them comparatively in Italian and Vietnamese cities. This bilateral collaboration will facilitate the study of diverse urban climatic contexts and, by leveraging the integration of different expertise, will advance research into the correlation between LCZ maps and air temperature maps, generated from in-situ and satellite data. Regarding the study areas, the project focuses on two Italian cities, i.e., Milan and Rome, and two Vietnamese cities, i.e., Hanoi and Ho Chi Minh City, to test the scalability of the procedure in urban areas with a significantly different structure and extent, population density, terrain morphology, and background climate. For the LCZ classification, multi-temporal PRISMA images acquired specifically for the LCZ-UHI-GEO project will be used to map seasonal variations across the test areas. The acquisition plan is ongoing for all four cities, aiming to guarantee at least one usable image per season, weather permitting. Corresponding Sentinel-2 images will also be used for co-registration and comparative analysis. Multiple open geospatial data will also be used to calculate the UCPs. The geospatial data used within the LCZ-ODC project can be exploited for the Italian cities. For the Vietnamese cities, global datasets (e.g. JRC Global Human Settlement Layer Building Height) or, where available, more detailed local data will be used. 
Air temperature analysis will rely on official sources, such as regional agencies (e.g., ARPA Lombardia and ARPA Lazio for the Italian case studies), as well as crowdsourced air temperature observations (e.g., Netatmo). The use of crowdsourced data is meant to increase the spatial coverage of the observations, allowing us to compute and validate continuous air temperature maps. Additionally, the project fosters international collaboration between Italian and Vietnamese research teams by sharing knowledge and technical skills. In this context, LCZ-UHI-GEO aims to raise awareness about the UHI problem through workshops and training events, providing stakeholders with useful tools to implement mitigation strategies in the frame of, e.g., urban master plans and renewal projects. This activity has already started and first workshops have been held in Vietnam with representatives of public institutions and administrations, in order to identify user requirements to account for during the generation of LCZ maps and understand how the LCZ-UHI-GEO products may contribute to decision-making towards improved urban resilience to UHI.
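As a minimal illustration of how UHI intensity is commonly quantified from LCZ-stratified air temperatures (the canonical built-minus-natural LCZ difference), the sketch below uses hypothetical LCZ labels and temperature values, not project data:

```python
def uhi_intensity(temps_by_lcz, urban_lcz, rural_lcz):
    """UHI intensity as the difference between the mean air temperature of a
    built LCZ and a natural reference LCZ (e.g. LCZ 2 minus LCZ D)."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(temps_by_lcz[urban_lcz]) - mean(temps_by_lcz[rural_lcz])

# Hypothetical station readings grouped by LCZ (degrees Celsius).
temps = {"LCZ 2": [29.5, 30.1, 28.9], "LCZ D": [26.0, 26.4, 25.6]}
print(round(uhi_intensity(temps, "LCZ 2", "LCZ D"), 1))  # 3.5
```

Repeating this difference per hour and per season yields the maximum UHI intensity statistics mentioned in the abstract.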

Tuesday 24 June 16:15 - 17:45 (Room 0.94/0.95)

Presentation: Using Downscaled Geostationary Land Surface Temperature for a High Spatio-temporal Approach to Study Surface Urban Heat Islands

Authors: Alexandra Hurduc, Dr. Sofia Ermida, Dr. Carlos DaCamara
Affiliations: Instituto Português do Mar e da Atmosfera (IPMA), Instituto Dom Luiz, Faculdade de Ciências, Universidade de Lisboa
As urbanization has transformed the surface cover materials and cities have emerged and expanded, their influence on the environment has also intensified. Thermal remote sensing is often used when evaluating land surface temperature (LST) at the most varied temporal and spatial scales. Although the uses of remotely sensed LST observations have been countless, their utility relies on the characteristics of the sensor and whether these are adequate to represent the surface. Geostationary sensors provide sufficient observations throughout the day for a diurnal analysis of temperature; however, they lack the spatial resolution needed for highly heterogeneous areas such as cities. Polar orbiting sensors have the advantage of a higher spatial resolution, enabling a better characterization of the surface, while only providing one to two observations per day. A multi-layer perceptron (MLP) based method is used to downscale geostationary-derived LST based on a polar-orbiting-derived one. The MLP consists of a classical approach to neural networks and was trained on a pixel-by-pixel basis. The rationale behind this relates to the complexity of relationships between surface variables over large areas. In the case of a unique model, the parameters are trained to best represent those relationships, finding the best compromise between model accuracy and generalization over a large area. This compromise may result in regional biases. Also, a large array of variables would be required to correctly represent the high complexity of the land surface (such as vegetation structure and health, surface materials and their heat capacity and emissivity, and soil water content, amongst others). Most of these variables are not readily available at the desired spatio-temporal resolution, which means that there may not be enough information in the input data to obtain a good performance with a unique model.
A pixel-wise model needs only to optimize training at the local scale, drastically decreasing the complexity needed by the model. This approach was used to downscale SEVIRI LST for the city of Madrid, from approximately 4.5 km to 750 m. The resulting dataset was used to assess the enhancement of the surface urban heat island (SUHI) effect during heat waves (HW) when compared to normal conditions. The increased spatial and temporal resolution allows a more detailed analysis of the impact of these extreme events within the city, providing information on the more susceptible areas of the city and the time of day when the SUHI is more intense. This information is crucial to support policy makers in developing prevention strategies to reduce the impact of HW on the city's population.
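The pixel-wise idea can be sketched with one tiny local model per fine pixel, each relating the overlying coarse geostationary LST to that pixel's polar-orbiter LST. This is a sketch under stated assumptions: a linear least-squares fit stands in for the per-pixel MLP, and all names and numbers are illustrative.

```python
def fit_pixel(coarse_lst, fine_lst):
    """Ordinary least squares y = a*x + b for one fine pixel's time series
    (a linear stand-in for the per-pixel MLP described in the abstract)."""
    n = len(coarse_lst)
    mx = sum(coarse_lst) / n
    my = sum(fine_lst) / n
    sxx = sum((x - mx) ** 2 for x in coarse_lst)
    sxy = sum((x - mx) * (y - my) for x, y in zip(coarse_lst, fine_lst))
    a = sxy / sxx
    return a, my - a * mx

def downscale(models, coarse_lst_now):
    """Apply each fine pixel's local model to the current coarse LST value."""
    return {p: a * coarse_lst_now + b for p, (a, b) in models.items()}

# Two fine pixels inside one ~4.5 km SEVIRI cell, three training times (K).
coarse = [290.0, 295.0, 300.0]
fine = {"p1": [291.0, 296.0, 301.0], "p2": [288.0, 292.0, 296.0]}
models = {p: fit_pixel(coarse, ys) for p, ys in fine.items()}
print(downscale(models, 297.0))
```

Because each model only sees its own pixel, it never has to generalize across regions, which is the motivation the abstract gives for avoiding a single global model.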

Tuesday 24 June 16:15 - 17:45 (Room 0.94/0.95)

Presentation: Atlantic SENSE: towards an integrated geospatial intelligence solution

Authors: Caio Fonteles, Bruno Marques, Sofia Aguiar, Dr. Ana Oliveira
Affiliations: CoLAB +Atlantic
As we live in an era of big data acquisition (satellite, in-situ, wearables), climate change and environmental risks have become much easier to map. On the other hand, domain knowledge is usually supplied by the academic sector, which offers novel methodologies for hazard mapping and prediction, although these science-driven findings are hard to translate for public administration and society at large. Hence, public policies and public domain knowledge, including the implementation and monitoring of regulatory frameworks, often lag behind the scientific state of the art. As such, citizens are left ‘in the dark’ about the environmental or climatic risks surrounding them, even though about 40% of the world’s population lives within 100 km of the coast, subject to sea level rise, or exposed to other weather and climate extremes such as heatwaves and droughts. Furthermore, the pressure for further urbanisation and the efforts to preserve rich natural capital are often at odds. Atlantic SENSE builds upon these notions to leverage state-of-the-art scientific knowledge on data acquisition, machine learning (ML), and metocean predictions to address the key environmental and climatic challenges we face, and to become a live platform with real-time natural hazard and risk information, readily available to the community. The main objectives of the work are:
- OBJ-1: Offer an integrated geospatial information web-based tool for municipalities and citizens.
- OBJ-2: Translate geospatial and in-situ data into impact indicators on multiple climate and environmental hazards.
- OBJ-3: Ensure scalability, transparency, and affordability of the results.
Building upon the results of several projects and initiatives, such as Horizon Europe (EC), Destination Earth (ECMWF and ESA) and the EU Digital Twin Ocean (Mercator Ocean International), the proof-of-concept of Atlantic SENSE has been deployed over mainland Portugal.
Furthermore, in the scope of the PRR New Space Portugal Agenda, a participatory approach with early adopters has been kick-started to ensure fitness for purpose. Currently, several modules are already operational and being tested:
- AIR: temperature extremes health indicators, urban heat island forecast and scenarios, air quality monitoring;
- LAND: land use/land cover change monitoring, ecosystem services;
- COAST: coastal erosion monitoring, coastline evolution, sea level rise scenarios;
- OCEAN: physics and biogeochemical forecasts of ocean health indicators such as marine heatwaves.
The resulting product is a Geospatial Multi-Hazard Information System, based on data fusion between EO imagery and altimetry, in-situ measurements (including IoT and other traditional sensors) and model data, that delivers weather- and climate-related risk maps pertaining to these Earth climate system domains, integrated into a geospatial visualization web-based tool for multi-criteria analysis, with querying options to benchmark risk profiles across neighbourhoods, as well as at the municipal level. With this, we hope to bridge the gap between the international push towards the adoption of a Global Goal on Adaptation (in agreement with the Sendai Framework and Early Warnings for All initiatives) and the regional and local capacity to respond to climate change in a cost-efficient but scientifically accurate manner.

Tuesday 24 June 16:15 - 17:45 (Room 0.94/0.95)

Presentation: Urban Development Through EO and Natural Experiments: the UDENE Project and its case studies

Authors: Maria Antonia Brovelli, Lorenzo Amici, Dr. Vasil Yordanov, Nikola Obrenović, Branislav Pejak, Onur Lenk, Yücel Erbay, Mohamed Rahji, Ahmed El Fadhel, Anaïs Guy, Murat Ozbayoglu, Ali Turker
Affiliations: Department of Civil and Environmental Engineering (DICA), Politecnico di Milano, BioSense Institute, University of Novi Sad, Istanbul University, Institute of Marine Sciences and Management, NiK Insaat Ticaret Ltd. Şti., Tunisian Space Association, Eurisy, Department of Artificial Intelligence, TOBB University of Economics and Technology, WeGlobal
The Urban Development Explorations using Natural Experiments (UDENE) project is an innovative initiative that combines Earth Observation (EO) technologies with urban planning to address critical challenges faced by cities today. Funded under the Horizon Europe program, this initiative aims to bridge critical gaps in the ability of urban planners, policymakers, and researchers to assess the impacts of urban interventions. By leveraging EO data from Copernicus satellites and linking it with structured local datasets organized as data cubes, UDENE provides a robust framework to enable evidence-based decisions. The project applies "natural experiments," defined as real-world changes or events analysed as if they are controlled experiments, offering unique insights into the causal effects of urban development policies. A key objective of UDENE involves structuring in-situ urban data into interoperable data cubes and integrating them into the Copernicus data cube federation. This linkage facilitates seamless exploration of urban development impacts across time and geographic locations. The project partners develop advanced sensitivity analysis algorithms to verify and operationalize multivariate causal models, enabling accurate predictions of how urban interventions influence critical outcomes. These outcomes include air quality, heat load, mobility, and resilience to natural hazards, such as earthquakes. UDENE also seeks to close the gap between high-level EO technologies and the practical needs of urban planners. At the core of UDENE’s framework are two tools tailored for practical use: the UDENE Exploration Tool and the UDENE Matchmaking Tool. The Exploration Tool allows urban planners, developers, and decision-makers to test, validate, and visualize their ideas. A user-friendly interface assesses impacts of urban development options on various metrics, such as air pollution, traffic patterns, and temperature regulation. 
Users explore potential outcomes of specific urban strategies by combining EO data and advanced causal models. The Matchmaking Tool links the Exploration Tool with existing EO products, services, and applications by identifying relevant downstream EO applications and service providers. The project’s objectives and tools demonstrate their utility through three use cases in Serbia, Tunisia, and Turkey. These case studies highlight the versatility of UDENE’s approach in addressing diverse urban challenges, ranging from environmental concerns to disaster preparedness. In Novi Sad, Serbia, the project analyses the environmental impacts of major transportation infrastructure changes. Two interventions are at the centre of this study: constructing bypass bridges to redirect heavy traffic and converting a central street into a pedestrian zone. While these changes aim to reduce air pollution and enhance mobility, they also raise concerns about potential congestion and disruption to existing transportation patterns. The case study integrates EO datasets, including Sentinel-5P data for tracking nitrogen dioxide (NO2) emissions, with in-situ traffic and air quality data from local monitoring stations. In addition, advanced regression and machine learning models estimate changes in air quality and assess causal relationships between traffic interventions and pollutant reductions. For the same goal at the microscopic level, agent-based simulations use inputs such as local demographic and transportation data to model traffic redistribution and identify its consequences to the pollutant emissions. In Greater Tunis, Tunisia, the study addresses the issue of urban heat islands (UHIs), where cities experience elevated temperatures compared to surrounding rural areas. The focus is on assessing the impact of a linked park system to mitigate heat loads across Local Climate Zones (LCZs). 
EO data is central to this analysis, with Sentinel-2 imagery providing vegetation indices like the Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI), and Landsat missions supplying land surface temperature (LST) data. Urban spectral indices, such as the Normalized Difference Built-up Index (NDBI), quantify the extent of built-up areas and their impact on heat distribution. These EO datasets combine with in-situ temperature, humidity, and wind measurements to evaluate the cooling effects of green spaces and assess the park system efficiency. Random Forest regression models explore the relationships between LCZ characteristics, green infrastructure, and heat mitigation. In Istanbul, Türkiye, the focus is on assessing the resilience of high-rise districts to possible seismic risks. Located near the active west segment of the North Anatolian Fault Zone (NAFZ), the study area includes Kadıköy, Ataşehir, and Üsküdar districts, characterized by dense urbanization and numerous high-rise buildings. As part of the pre-earthquake scenario, the use case simulates the impacts of a potential Mw ≥ 7.0 earthquake, estimating building damage, casualties, and economic losses through the implementation of models associated with Earthquake Loss Estimation Routines (ELER). High-resolution satellite imagery and local data from both private and public organizations as well as Copernicus Data Space Ecosystem are employed for land use and building information. The casualties are estimated through the models, such as those developed under the HAZUS framework considering building type, damage and injury severity parameters and they are integrated through ground motion prediction equations. To validate the results, it is anticipated to compare the outputs with the damage assessments from recent earthquakes, such as the 2023 Kahramanmaraş earthquakes. 
By combining EO technologies with local seismic and building data, this use case offers insights into how urban development policies enhance disaster preparedness. In addition to these case studies, UDENE emphasizes collaboration and partnership-building by actively involving European and non-European stakeholders from the public and private sectors with the aim to enhance the usability and scalability of EO technologies. By addressing challenges like climate change adaptation, air quality improvement, and disaster resilience, UDENE’s aims and objectives align closely with global ones, such as the UN Sustainable Development Goals (SDGs). The project’s contributions are not only academic but also provide practical solutions for cities worldwide.
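The spectral indices used in the Tunis case study are direct band ratios. The sketch below computes NDVI and NDBI from reflectances; the Sentinel-2 band pairings noted in the comments (red B4, NIR B8, SWIR B11) are the conventional ones, and the sample values are illustrative.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index (Sentinel-2: B8 vs B4)."""
    return (nir - red) / (nir + red)

def ndbi(swir, nir):
    """Normalized Difference Built-up Index (Sentinel-2: B11 vs B8)."""
    return (swir - nir) / (swir + nir)

# Illustrative surface reflectances (0-1).
print(round(ndvi(0.45, 0.05), 2))  # 0.8 -> dense, healthy vegetation
print(round(ndbi(0.30, 0.20), 2))  # 0.2 -> built-up surface
```

High NDVI flags the cooling green spaces, while high NDBI quantifies the built-up extent whose effect on heat distribution the Random Forest models then explore.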

Tuesday 24 June 16:15 - 17:45 (Room 0.94/0.95)

Presentation: Urban Nighttime Temperature Trends Derived from 20 Years of ESA-CCI LST Data

Authors: Panagiotis Sismanidis, Benjamin Bechtel, Marzie Naserikia, Negin Nazarian, Melissa Hart, Iphigenia Keramitsoglou, Darren Ghent
Affiliations: Institute of Geography, Ruhr University Bochum, Australian Research Council Centre of Excellence for Climate Extremes, University of New South Wales, School of Built Environment, University of New South Wales, Institute for Astronomy, Astrophysics, Space Applications and Remote Sensing, National Observatory of Athens, National Centre for Earth Observation, Department of Physics and Astronomy, University of Leicester, Australian Research Council Centre of Excellence for 21st Century Weather, University of Tasmania, Climate Change Research Centre, University of New South Wales, Australian Research Council Centre of Excellence for 21st Century Weather, University of New South Wales
Cities are generally warmer than their surroundings. This phenomenon is known as the Urban Heat Island (UHI) and is one of the clearest examples of human-induced climate modification. UHIs increase the cooling energy demand, aggravate the feeling of thermal discomfort, and influence air quality. As such, they impact the health and welfare of the urban population and increase the carbon footprint of cities. The relative warmth of the urban atmosphere, surface, and substrate leads to four distinct UHI types that are governed by a different mix of physical processes. These four types are the canopy layer, boundary layer, surface, and subsurface UHI. Surface UHIs (SUHI) result from modifications of the surface energy balance at urban facets, canyons, and neighborhoods. They exhibit complex spatial and temporal patterns that are strongly related to land cover and are usually estimated from remotely sensed Land Surface Temperature (LST) data. In the context of ESA's Climate Change Initiative LST project (LST_cci), we investigate how the LST of cities has changed over the last ~20 years (2002-2019) using nighttime data from Aqua MODIS. We focus on nighttime conditions, when the agreement between the LST and the near-surface air temperature over cities is strongest. Our results reveal a consistent warming trend across all cities that is, on average (± SD), equal to 0.06 ± 0.02 K/year. Cities located in continental climates exhibit the most pronounced warming, of about 0.08 K/year, while those in tropical climates exhibit the least (~0.04 K/year). Our results also suggest that cities in the Northern Hemisphere warm faster than cities in the Southern Hemisphere, and that the cities with the strongest increase in nighttime LST are all concentrated in the Middle East, where we estimated trends as high as 0.15 K/year (Doha, Qatar).
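A warming rate in K/year, as reported above, is simply the least-squares slope of LST against year, computed per city or per pixel. A minimal sketch with synthetic numbers (not LST_cci data; the function name is illustrative):

```python
def lst_trend(years, lst_k):
    """Ordinary least-squares slope of LST (K) against year: K/year."""
    n = len(years)
    my = sum(years) / n
    ml = sum(lst_k) / n
    num = sum((y - my) * (t - ml) for y, t in zip(years, lst_k))
    den = sum((y - my) ** 2 for y in years)
    return num / den

# Synthetic annual-mean nighttime LST warming at exactly 0.06 K/year.
years = list(range(2002, 2020))
lst = [290.0 + 0.06 * (y - 2002) for y in years]
print(round(lst_trend(years, lst), 3))  # 0.06
```

Applied pixel-wise to the ~20-year record, this yields trend maps from which the climate-zone and hemispheric contrasts in the abstract can be aggregated.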

Tuesday 24 June 16:15 - 17:45 (Room 0.94/0.95)

Presentation: Urban Air Temperature Prediction Leveraging Machine Learning and Remote Sensing Technologies

Authors: Lorenzo Innocenti, Giacomo Blanco, Mr. Luca Barco, Claudio
Affiliations: LINKS Foundation
Urban heat islands (UHIs) pose significant challenges to environmental sustainability and public health, manifesting as localized areas within a city where temperatures are significantly higher than in their surroundings. These thermal hotspots amplify energy consumption, escalate health risks, and stress urban infrastructure. Addressing these challenges necessitates predictive tools capable of delivering precise temperature forecasts to support urban planning and policy decisions. Despite the potential of satellite-based land surface temperature (LST) monitoring, existing data from the ESA Copernicus Sentinel-3 mission are constrained by two critical limitations: inadequate spatial resolution for urban-scale thermal differentiation, with twice-daily LST measurements at a resolution of 1 km per pixel, and the fundamental disparity between land surface and air temperatures. While higher-resolution LST satellites, such as Landsat 8, exist, they offer a coarser temporal resolution, with data available every 16 days. This research introduces a machine learning model designed to predict maximum daily air temperatures at a high spatial resolution of 20 meters per pixel. This resolution is sufficient to understand temperature dynamics within the city, allowing for the recognition of temperature differences between individual city blocks. Each day the inference is run, the model produces a seven-day temperature forecast. Our technology utilizes a visual transformer-based architecture, which distinguishes itself by being more compact and computationally efficient than traditional convolutional neural networks (CNNs), achieving a mean absolute error (MAE) of 2°C across seven-day temperature predictions for three major European cities. The model takes as input multiple remote sensing and weather forecast data.
The first input is the aforementioned LST data from the Sentinel-3 satellite constellation, in particular the morning passage data, collected between the hours of 9 AM and 11 AM. The model also takes data from Sentinel-2, again from the Copernicus program, which offers a high spatial resolution of 10 to 60 meters and a temporal resolution of five days. In particular, the Normalized Difference Vegetation Index (NDVI) is used, calculated from the red and near-infrared bands, which are sensitive to vegetation health and density. To ensure data quality and minimize cloud interference, the median value of the monthly measurements with cloud cover less than 10% is used. The meteorological data utilized in this study originates from the Visual Crossing provider, using their Visual Crossing Weather data service, which incorporates variables such as forecasted temperature, pressure, humidity, wind, and others. Regarding topographic data, two sources have been utilized. The first is the Digital Elevation Model (DEM), which provides information on the terrain's altitude. The second source is the Copernicus Urban Atlas, which classifies land use in urban environments into 27 distinct classes. All input data is resized to the required dimensions and combined into a single 3D tensor for the model. Land cover (LC) data is transformed from 27 classes into four broader categories and processed into a four-channel matrix, where each pixel's value represents the percentage of the class present within that pixel. To incorporate temporal context, circular encoding is used for the day of the year, day of the week, and time of day of the Sentinel-3 passage. All inputs except the weather data are stacked; the stack is then combined with the weather forecast for the day being predicted and passed to the model. This process is repeated for each of the seven days to generate the seven temperature predictions.
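The circular encoding mentioned above maps each cyclic quantity to a sin/cos pair so that the ends of the cycle (e.g. 31 December and 1 January) stay adjacent in feature space, unlike the raw integers. A minimal sketch (the function name is an assumption, not the project's code):

```python
import math

def circular_encode(value, period):
    """Encode a cyclic feature (day of year, day of week, hour of day)
    as a (sin, cos) pair on the unit circle."""
    angle = 2 * math.pi * value / period
    return math.sin(angle), math.cos(angle)

# Day-of-week example: day 0 maps to (0.0, 1.0); day 7 wraps back to it.
print(circular_encode(0, 7))  # (0.0, 1.0)
doy_start = circular_encode(1, 365)    # 1 January
doy_end = circular_encode(365, 365)    # 31 December, adjacent to 1 January
```

The two encoded channels are then stacked into the input tensor alongside the imagery.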
The temperature measurements, which are used as the target for the ML training, have been sourced from the Weather Underground temperature crowdsourcing portal. The stations provide temperature data at intervals ranging from 3 to 10 minutes, depending on the specific location. This data is processed in a 2D matrix composed of pixels with values equal to the average of the maximum temperature recorded by each station within the area covered by the pixel for that day. If no station is active in the area, the pixel is marked as invalid. For each valid pixel, the mean squared error (MSE) loss between the predicted temperature from the model and the ground truth is computed and used to update the model weights. An image-to-image regression neural network architecture is used to translate these multidimensional inputs into a set of two-dimensional temperature maps. The architecture features an encoder-decoder structure, where the encoder extracts hierarchical features from the input data and the decoder reconstructs the spatial information. The chosen encoder is a Mixed Transformer model (MiT), which features attention blocks as the main computational units and convolutional layers for downsampling stages, as a lighter and more efficient alternative to CNN-based ones. The decoder reconstructs this information using a simple cascade of convolution-upsample blocks, incorporating higher resolution features via skip connections. The model is embedded within a continuous processing pipeline designed for uninterrupted operation. Its daily workflow automatically retrieves data, performs preprocessing, and generates temperature mappings. Seven-day temperature forecasts are uploaded to a geospatial dashboard, presenting predictions as overlays on the target urban landscapes. 
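The station-masked loss described above can be sketched as an MSE computed only over pixels that contain at least one active station. The flat-list layout and names below are illustrative assumptions (the actual pipeline operates on 2D tensors):

```python
def masked_mse(pred, target, valid):
    """Mean squared error over valid pixels only; pixels with no active
    weather station in their footprint are excluded from the loss."""
    errs = [(p - t) ** 2 for p, t, v in zip(pred, target, valid) if v]
    return sum(errs) / len(errs)

pred   = [20.0, 21.0, 19.0, 25.0]
target = [21.0, 21.0,  0.0, 23.0]  # third pixel has no station data
valid  = [True, True, False, True]
print(masked_mse(pred, target, valid))  # (1 + 0 + 4) / 3
```

Masking keeps the sparse crowdsourced ground truth from penalizing predictions where no observation exists.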
This solution is currently implemented within UP2030 (Urban Planning and Design Ready for 2030), a project supported by the European Union's Horizon Europe research and innovation program, which seeks to guide cities through socio-technical transitions aligned with climate neutrality objectives. By integrating this temperature forecasting service into urban planning frameworks, the project offers a tool for mitigating urban heat island impacts and fostering sustainable urban development. This work was funded in part by the Horizon Europe project UP2030 (grant agreement n. 101096405) and in part by the Project NODES through the MUR—M4C2 1.5 of PNRR, Grant ECS00000036.

Tuesday 24 June 16:15 - 17:45 (Room 0.14)

Session: A.01.12 EE9 FORUM - WHAFFFERS campaign networking event

WHAFFFERS (the W-band, HiSRAMS, AERI, FIRMOS, FINESSE, and FIRR-2 Experiment on Remote Sensing) is the largest effort to date to bring together far-infrared (FIR), mid-infrared (MIR), and microwave (MW) instrumentation that mimics aspects of FORUM's formation flying with MetOp-SG's IASI-NG and MWS. The atmosphere and snow surface were simultaneously characterised by a suite of independent ground-based and airborne instrumentation, i.e. lidar, radar, microwave instruments, aircraft in-situ ice cloud measurements, and soundings for temperature and water vapour profiles, together with micro- and macrophysics of ice and water clouds and snow grain size to help interpret surface snow emissivity.
This campaign is a joint endeavour between ESA, NASA, the National Research Council (NRC) Canada, ECCC, CNR Italy, McGill University, Université du Québec à Montréal, and Imperial College London. The campaign took place at ground stations at Ottawa Airport and the Gault Nature Reserve close to Montreal, with overflights of the instrumented NRC Convair-580 research aircraft, in the January/February 2025 timeframe.
The objectives of the campaign are to support the development of FORUM by: 1) radiative closure experiments in clear and cloudy conditions, 2) retrieval information content analysis (far infrared: FIR only, FIR+MIR, FIR+MW, …), and 3) snow and ice emissivity assessment.
WHAFFFERS addresses the FORUM scientific development by creating a benchmark data set for: 1) assessment of FORUM retrievals, 2) community on-boarding through provision of data, and 3) validation preparation.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall L1/L2)

Session: C.05.04 Landsat Program and Science Applications

Landsat satellites have been providing continuous monitoring of the Earth’s surface since 1972. The free and open data policy of the Landsat program enables the global land imaging user community to explore the entire 52-year data record to advance our scientific knowledge and explore innovative uses of remote sensing data to support a variety of science applications. This session will focus on Landsat mission collaboration, on data and science applications of Landsat products that provide societal benefits, and on efforts by European and U.S. agencies to maximize those benefits alongside comparable European land imaging missions such as Copernicus Sentinel-2.

A diverse set of multi-modal science applications has been enabled with Landsat and Sentinel-2 harmonization and fusion with SAR, LiDAR, high-resolution commercial imagery, and hyperspectral imagery among others. Rapid progress has been achieved using the entire Landsat archive with access to high-end cloud computing resources. Landsat data and applications have revealed impacts from humans and climate change across the globe in land-cover, land-use, agriculture, forestry, aquatic and cryosphere systems.

Building on the 52+ year legacy and informed by broad user community needs, Landsat Next’s enhanced temporal (6-day revisit), spatial (10–60 m), and superspectral (21 visible to shortwave infrared and 5 thermal bands) resolution will provide new avenues for scientific discovery. This session will provide updates on Landsat missions and products, and collaboration activities with international partners on mission planning, data access, and science and applications development.

We invite presentations that demonstrate international collaboration and science advancements on the above topics. We also invite presentations on innovative uses of Landsat data alone or in combination with other Earth observation data modalities that meet societal needs today and in coming decades.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall L1/L2)

Presentation: Global Scale Deforestation Monitoring for Seasonal and Deciduous Forests Using Sentinel-2 and Landsat

Authors: Vincent Schut, PhD Luca Foresta, Berto Booijink, PhD Niels Anders, Niklas Pfeffer, Niels Wielaard, Rens Masselink
Affiliations: Satelligence
Deforestation monitoring by the private sector has over the past year mainly focused on deforestation in Tropical Moist Forest, as most attention was on palm oil, cocoa and soy in Latin America. Since the announcement of the EUDR, however, the demand for satellite deforestation monitoring solutions for the other EUDR commodities, such as rubber, coffee and wood, has increased. With the expansion to these commodities also comes a shift in the forest types that need to be monitored: commodities like coffee are grown in areas of dry tropical forest, and wood products come from more temperate regions with deciduous forests. This change of forest types in many cases also means that some change detection algorithms are no longer effective, because they classify leaf-off periods as deforestation, a misclassification especially prone to happen in optical data such as Sentinel-2 and Landsat. We present a new SpatioTemporal Adaptive Bareness (STAB) methodology, which uses a combination of Landsat and Sentinel-2 data to determine whether pixels were deforested. First, the seasonality (STAB Factors) of an area is calculated based on a period of 6 years preceding the monitoring period. These STAB Factors model the temporal (seasonal) behaviour of the bareness per pixel relative to the regional "reference bareness" of surrounding forest(-like) areas, where bareness is an index based on the SWIR and NIR bands, and reference bareness is the median bareness of all forest-like pixels within an entire Landsat or Sentinel-2 scene. During dry or leaf-off periods this reference bareness will be high for the entire scene, while during a wet season, when vegetation is green, it will be low. The reference bareness thus represents the overall regional seasonality, and the STAB Factors model if and how a single pixel's bareness values follow that regional seasonality, or not.
Second, during the monitoring period, an expected per-pixel bareness is calculated by applying the STAB Factors to the current reference bareness. For each pixel, the SpatioTemporal Adaptive Bareness value is then calculated by dividing the pixel's raw bareness by the expected bareness. Whenever this ratio crosses a certain threshold, the pixel is flagged as deforested. The change detection algorithm has successfully detected changes in moist tropical forests, dry tropical forests, and savanna-type woodlands such as the Cerrado and Chaco areas in Latin America, and has been successfully applied at continental scale in Latin America, Africa and Asia. We will show examples of the change detection and compare the results to other (openly) available data.
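The thresholding step above can be sketched as follows. The exact bareness index, the STAB Factor form, and the operational threshold are not given in the abstract, so the index formula, values, and names below are illustrative assumptions.

```python
import numpy as np

def bareness(swir, nir):
    """Normalized-difference bareness index from SWIR and NIR reflectance.
    (An assumed, common form; the abstract only says it uses SWIR and NIR.)"""
    return (swir - nir) / (swir + nir)

def stab_ratio(pixel_bareness, stab_factor, reference_bareness):
    """Expected bareness = STAB Factor applied to the scene-wide reference;
    the STAB value is the ratio of observed to expected bareness."""
    expected = stab_factor * reference_bareness
    return pixel_bareness / expected

# Toy monitoring step for a single pixel
ref = 0.4                 # scene median bareness (leaf-off season -> high)
factor = 1.0              # this pixel historically tracks the regional signal
obs = bareness(swir=0.55, nir=0.15)   # observed: much barer than expected
ratio = stab_ratio(obs, factor, ref)
THRESHOLD = 1.3           # illustrative threshold, not the operational value
print(ratio > THRESHOLD)  # True -> pixel flagged as deforested
```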
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall L1/L2)

Presentation: Do We Really Have Enough Data for Long-term Analyses: Deep Dive Into Global Per-pixel Availability of Usable Landsat and Sentinel-2 Data

Authors: Dr. Katarzyna Ewa Lewińska, Stefan Ernst, Dr. rer. nat David Frantz, Prof. Dr. rer. nat. Ulf Leser, Patrick Hostert
Affiliations: Geography Department, Humboldt-Universität zu Berlin, Department of Forest and Wildlife Ecology, University of Wisconsin-Madison, Geoinformatics –Spatial Data Science, Trier University, Department of Mathematics and Computer Science, Humboldt-Universität zu Berlin, Integrative Research Institute on Transformations of Human-Environment Systems (IRI THESys), Humboldt-Universität zu Berlin
The Landsat data archive provides over 40 years of medium-resolution multispectral data, making Landsat the longest continuously running Earth observation program and one of the most commonly used data sources for long-term analyses. Free and open access to Landsat data, combined with technological advancements in data storage and processing capabilities, has facilitated the development of new algorithms and approaches that utilize dense time series of satellite observations. The shift towards using all available data has unlocked new monitoring capacities. Yet, the aptness and accuracy of any analysis hinge on the availability and quality of the data. Landsat's historical global data coverage is extremely variable due to the limitations and priorities of past missions. Although the availability of Landsat products is well known on a per-tile basis, users lack easily accessible and queryable information on net data availability at the per-pixel level, which is driven by cloudiness, cloud shadows, snow, and other highly variable disturbances. Aggregated information on data availability for different time windows is not routinely and accessibly provided at the per-pixel level for the Landsat data holdings. The same limited overview of usable data applies to Sentinel-2, which is nowadays frequently used in synergy with Landsat to capitalize on more frequent data acquisition. This implies that each study based on Landsat or Sentinel-2 data needs to quantify a priori the availability of cloud-, shade-, and snow-free data specific to its area and time period of interest in order to understand the limitations of analytical approaches and to correctly parametrize algorithms.
Critically, the increasing accessibility of interpolation and data reconstruction approaches, often seamlessly incorporated into cloud processing environments and software, allows for extensive augmentation of missing data, with unquestioned confidence and accuracy and without any per-pixel quality information, creating ‘perfect’ time series but simultaneously potentially jeopardizing the quality of the final results. Together with the growing popularity of machine learning and deep learning applications and statistical classifiers, this can have a detrimental effect on the credibility of results. To improve the discoverability of per-pixel data availability and to demonstrate the opportunities and limitations in the 1982-2024 Landsat and 2015-2024 Sentinel-2 archives, we performed a systematic global pixel-based assessment of usable data (i.e., cloud-, shade-, and snow-free) across both data holdings. Using a global 0.18° sampling scheme, our overview highlights differences in data availability, evaluating region-specific limits for time series analyses. Importantly, we focus on the feasibility of both long- and medium-term analysis windows, as well as annual and sub-annual data availability, analyzing data as available in the archives and assuming different degrees of interpolation. To ensure wide usability, we provide insights on how time series densities vary when using Landsat and Sentinel-2 data separately and jointly. Finally, we determine whether the increased data availability after 2014 and the combination of Landsat and Sentinel-2 data have an impact on long-term trends in NDVI (Normalized Difference Vegetation Index), which is commonly used in studies of vegetation greening and the implications of climate change.
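The core of a per-pixel usable-data assessment is simply counting, for each pixel, the observations whose quality flags are cloud-, shade- and snow-free. A minimal sketch (the flag values and function name are assumptions for illustration, not the study's actual QA encoding):

```python
import numpy as np

# Assumed QA flag values for illustration
CLEAR, CLOUD, SHADOW, SNOW = 0, 1, 2, 3

def usable_count(qa_stack):
    """Count cloud-, shade- and snow-free observations per pixel.

    qa_stack : (time, y, x) integer array of per-observation QA flags
    returns  : (y, x) array of usable-observation counts
    """
    return np.sum(qa_stack == CLEAR, axis=0)

# Toy time series: 4 acquisitions over a 2x2 area
qa = np.array([[[CLEAR, CLOUD], [CLEAR, SNOW]],
               [[CLEAR, CLEAR], [SHADOW, CLEAR]],
               [[CLOUD, CLEAR], [CLEAR, CLEAR]],
               [[CLEAR, CLOUD], [CLEAR, SNOW]]])
print(usable_count(qa))   # per-pixel counts of usable observations
```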
Overall, our results evaluate and highlight spatio-temporal heterogeneity in data availability from two critical environmental satellite missions, draw attention to the feasibility of analyses in some regions and over specific time periods, including the most recent years, and challenge the quality of results of some land cover and land use analyses and applications. We accordingly emphasize the importance of thorough, analysis-specific data availability evaluation and critical assessment of the applicability of the algorithms of choice. Specifically, our results provide an urgently needed perspective on the data-availability-driven limitations and opportunities for analyses based on the Landsat and Sentinel-2 data archives, which will continue to be at play in the future as the next missions build on the existing legacy. To ensure maximum usability and impact, our data availability dataset is freely available to the community as a sharable dataset and through a cloud-based, interactively browsable interface.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall L1/L2)

Presentation: Toward operational Landsat aquatic reflectance science products for advancing global inland water and coastal ocean observations

Authors: Benjamin Page, Christopher Crawford, Danika Wellington, Saeed Arab, Gail Schmidt, Chris Barnes
Affiliations: Earth Space Technology Services (ESTS), U.S. Geological Survey (USGS), KBR, Inc., Earth Resources Observation and Science (EROS) Center
In April 2020, the U.S. Geological Survey (USGS) Earth Resources Observation and Science (EROS) Center released provisional Aquatic Reflectance (AR) science products for the Landsat 8 Operational Land Imager (OLI) via the EROS Science Processing Architecture (ESPA) On Demand Interface. This was envisioned as an expansion of the current suite of Landsat Level-2 atmospherically corrected products. Landsat science products that are considered provisional are accessible to the public but are actively under USGS evaluation and remote sensing community validation. Provisional algorithms and the generated outputs may undergo further modification, improvement, or redesign before being considered operationally ready for scientific use. The release of the Landsat 8 and Landsat 9 provisional AR products marks the first step toward a standardized processing pathway designed to produce AR measurements from OLI’s 30-meter spatial resolution image data. This initiative is anticipated to enhance aquatic science and authoritative environmental monitoring efforts, particularly in the areas of coastal mapping and lake management practices. The Science Algorithms to Operations (SATO) process for USGS Landsat data products enables the smooth transition of researched, developed, and matured science algorithms from a provisional state into operational readiness. The Sea-viewing Wide Field-of-view Sensor Data Analysis System (SeaDAS), developed by NASA’s Ocean Biology Processing Group (OBPG), has been the flagship atmospheric correction processor for generating Collection 1 and Collection 2 provisional AR products for Landsat 8 and Landsat 9.
Here, SeaDAS is evaluated, within the parameters defined by the SATO process together with the Landsat Science Office (LSO), to determine whether it is the optimal pathway for current, heritage, and upcoming Landsat missions in terms of suitability for emerging scientific needs and standards that require reliable analysis-ready data for both inland and coastal water quality mapping applications. Preliminary assessments of alternative atmospheric correction solutions that can generate AR products from Landsat imagery suggest that there may be more suitable processing options for operational global Landsat AR science products. The purpose of this contribution is to communicate with aquatic scientists and the broader Earth observation community on the origins, requirements, challenges, successes, and objectives of standardizing global AR science data products for Landsat satellite missions.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall L1/L2)

Presentation: Rapid glacier shrinkage on Baffin Island from 2000 to 2019 as observed from Landsat 8 and Sentinel-2

Authors: Dr. Frank Paul, Philipp Rastner
Affiliations: University of Zurich
The glaciers and ice caps on Baffin Island contribute substantially to sea-level rise, but glacier area was poorly constrained in the widely used RGI 6.0 (e.g. debris-covered regions were often missing and ice divides were in the wrong locations) and area changes over the past two decades were basically unknown. To improve the situation, we have (a) created a revised version of the RGI 6.0 outlines from ‘around the year 2000’ for the new RGI 7.0 and (b) compiled a new inventory for 2019 to perform a change assessment. For the revised year-2000 inventory in RGI 7.0, the temporal coverage could be reduced from 52 years (1958-2010) to 4 years (1999 to 2002) using Landsat 7 scenes, and for 2019 to about one week using 12 Landsat 8 scenes along with 3 Sentinel-2 scenes. We also substantially revised the ice divides by applying watershed analysis to the Arctic DEM in combination with maps of flow velocities from Sentinel-2. Topographic information for each glacier was also derived from the Arctic DEM. Excluding Barnes Ice Cap (which lost 117 km² or 2% of its area), the mapped glacier area is 29,781 km² in 2000 and 26,067 km² in 2019, i.e. a reduction of 3,715 km² or -12.5% (-0.66%/a). Relative area losses increased strongly towards smaller glaciers, reaching -75% for glaciers <0.1 km² and -25% (-1.3%/a) for all glaciers <10 km². Many ice caps disintegrated into smaller pieces, and 2140 ice bodies (total area 190 km²) melted away completely from 2000 to 2019. Apart from the methods and results obtained for the two inventories, we will also present the challenges of glacier mapping in this region (e.g. separating glaciers from attached ice patches or identifying debris-covered ice in shadow) along with the difficulties of calculating glacier-specific area changes from two inventories with a spatial mismatch.
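The headline change numbers can be reproduced with simple arithmetic, assuming a linear (non-compounded) annual rate over the 19-year span; the ~1 km² difference in total loss versus the abstract comes from rounding of the published areas.

```python
area_2000 = 29781.0   # km², mapped glacier area excluding Barnes Ice Cap
area_2019 = 26067.0   # km²
years = 19            # 2000 to 2019

loss = area_2000 - area_2019
rel_change = 100.0 * (area_2019 - area_2000) / area_2000
annual_rate = rel_change / years   # linear annualized rate, %/a

print(f"{loss:.0f} km² lost, {rel_change:.1f}% total, {annual_rate:.2f}%/a")
# 3714 km² lost, -12.5% total, -0.66%/a
```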
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall L1/L2)

Presentation: The NASA Harmonized Landsat and Sentinel-2 Version 2.0 surface reflectance dataset

Authors: Junchang Ju, Qiang Zhou, Brian Freitag, Madhu Sridhar, Christopher
Affiliations: University Of Maryland
The United States National Aeronautics and Space Administration (NASA) Harmonized Landsat and Sentinel-2 (HLS) project is entering its 10th year of offering 30-m surface reflectance data. The HLS project was initiated in the early 2010s to produce more frequent land measurements by combining observations from the US Landsat 8 Operational Land Imager (OLI) and the European Copernicus Sentinel-2A MultiSpectral Instrument (MSI), and currently from two OLI and two MSI sensors, by applying atmospheric correction to top-of-atmosphere (TOA) reflectance, masking out clouds and cloud shadows, normalizing bi-directional reflectance view-angle effects, adjusting for sensor bandpass differences with OLI as the reference, and providing the harmonized data on a common grid. Several versions of the HLS dataset have been produced in the last ten years; the newest, tagged Version 2.0 and resulting from improvements to almost all of the harmonization algorithms, was completed in the summer of 2023 and for the first time provides near-global coverage (excluding Antarctica). The data harmonization efficacy was assessed by examining how the reflectance difference between contemporaneous Landsat and Sentinel-2 observations was successively reduced by each harmonization step, for 545 pairs of globally distributed same-day Landsat/Sentinel-2 image samples from 2021 to 2022. Compared to the TOA data, the HLS atmospheric correction slightly increased the relative reflectance difference between Landsat and Sentinel-2 for most of the spectral bands, especially for the two blue bands and the green bands. The subsequent bi-directional reflectance view-angle effect normalization effectively reduced the between-sensor reflectance difference present in the atmospherically corrected data for all the spectral bands, and notably to a level below the TOA differences for the red, near-infrared (NIR), and the two shortwave infrared (SWIR) bands.
The bandpass adjustment had only a modest effect on reducing the between-sensor reflectance difference. In the final HLS products, the same-day reflectance difference between Landsat and Sentinel-2 was below 4.2% for the red, NIR, and the two SWIR bands, all smaller than the differences in the TOA data. However, the between-sensor differences for the two blue and the green bands remain slightly higher than in the TOA data, reflecting the difficulty of accurately correcting for atmospheric effects in the shorter-wavelength visible bands. A data consistency evaluation of a suite of commonly used vegetation indices (VI) calculated from the HLS V2.0 reflectance data showed that the between-sensor VI difference is below 4.5% for most of the indices. HLS is under continuous refinement. Additional updates include global production of HLS VI data scheduled to start in early 2025, a 10-day surface reflectance composite in prototyping, a 6-hour low-latency HLS production under development, and research into improvements to atmospheric and topographic correction.
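The bandpass adjustment mentioned above is typically a per-band linear transformation of MSI reflectance onto the OLI reference. A minimal sketch; the coefficient values and names below are made up for illustration and are not the published HLS coefficients:

```python
import numpy as np

# Hypothetical per-band (slope, offset) coefficients; the operational HLS
# values are published by the project, these numbers are illustrative only.
BANDPASS = {"red": (0.982, 0.004), "nir": (1.001, -0.001)}

def adjust_msi_to_oli(reflectance, band):
    """Map Sentinel-2 MSI surface reflectance onto the OLI bandpass
    using a band-specific linear transformation."""
    slope, offset = BANDPASS[band]
    return slope * np.asarray(reflectance, dtype=float) + offset

msi_red = np.array([0.05, 0.12, 0.30])
print(adjust_msi_to_oli(msi_red, "red"))
```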
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall L1/L2)

Presentation: Landsat Next Programmatic Update

Authors: Timothy Newman
Affiliations: U.S. Geological Survey
The USGS National Land Imaging Program Coordinator, Timothy Newman, will provide a programmatic update on the Landsat Next mission. Landsat Next is the next-generation Landsat mission currently in development at NASA and USGS, intended to provide Landsat data continuity through the 2030s and beyond for hundreds of thousands of Landsat users in the United States and around the world, delivering billions of dollars of economic benefit annually. Landsat Next is projected to deliver better than twice the spectral, spatial, and temporal resolution of Landsat 9, meeting the evolving needs of research and operational users across a host of applications, including crop health and production, tracking water use, documenting urban growth, mapping wildfires, monitoring the well-being of forests, assessing the impact of industrialization, and informing efforts to reduce hunger globally. The programmatic update will include a detailed description of the mission, including agency roles and responsibilities, the development history of the mission, and the latest programmatic schedules and milestones.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall G2)

Session: C.05.06 Status ESA Mission development: National Programmes managed by ESA - PART 2

The status of development of ESA missions will be outlined.
In four sessions of 1 h 30 min each (together a full day), participants will be offered the unique opportunity to gain valuable insights into the technology developments and validation approaches used during the project phases of ongoing ESA programmes.
The projects are in different phases (from early Phase A/B1 to launch), and the status of mission development activities will be presented together with industrial and science partners.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 1.15/1.16)

Session: A.07.08 Global and regional water cycle in the integrated human-Earth system, estimation of hydrological variables and hyper-resolution modelling - PART 2

Water in all three phases, and its cycling through the Earth system, is essential to weather, climate and climate change, and to life itself. The water cycle is closely coupled with the energy and carbon cycles. Over continents, the water cycle includes precipitation (related to clouds, aerosols, and atmospheric dynamics), water vapor divergence and changes in column water vapor in the atmosphere, land surface evapotranspiration, terrestrial water storage change (related to snowpack, surface and ground water, and soil moisture change), and river and groundwater discharge (which is linked to ocean salinity near river mouths). Furthermore, the terrestrial water cycle is directly affected by human activities: land cover and land use change; agricultural, industrial, and municipal consumption of water; and the construction of reservoirs, canals, and dams.

The EO for hydrology community is working towards datasets describing hydrological variables at a steadily increasing quality and spatial and temporal resolution. In parallel, water cycle and hydrological modellers are advancing towards “hyper-resolution” models, going towards 1 km resolution or even higher. In some cases such efforts are not just taking place in parallel but in collaboration. This session aims at presenting advances from each of the communities as well as demonstrating and promoting collaboration between the two communities.

Presentations are welcome that focus on at least one of the following areas:
- The global and regional water cycle and its coupling with the energy and carbon cycles in the integrated human-Earth system based on satellite remote sensing, supplemented by ground-based and airborne measurements as well as global and regional modeling
- New advances on the estimation of hydrological variables, e.g. evapo(transpi)ration, precipitation (note that there is another, dedicated session for soil moisture);
- Suitability of different EO-derived datasets to be used in hydrological models at different scales;
- Capacity of different models to take benefit from EO-derived datasets;
- Requirements on EO-derived datasets to be useful for modelling community (e.g. related to spatial or temporal resolution, quality or uncertainty information, independence or consistency of the EO-derived datasets, …);
- Downscaling techniques;
- Potential of data from future EO missions and of newest modelling and AI approaches (including hybrid approaches) to improve the characterisation and prediction of the water cycle.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 1.15/1.16)

Presentation: Towards Operational Water Vapour Products from Optical Imagers

Authors: Juergen Fischer, René Preusker
Affiliations: Spectral Earth GmbH, Free University Berlin
We present recent results from EUMETSAT's Scientific Framework for Operational Water-Vapour Products from Optical Imagers. First, we focus on the improvement of the scientific quality of the COWa Sentinel-3 OLCI Level-2 Total Column Water Vapour (TCWV) product and on the operational generation of the TCWV products. The updated COWa TCWV algorithm considers the different spectral characteristics of OLCI-A and OLCI-B and introduces a temporal evolution of the spectral models of each camera of both instruments. The retrieval is based on optimal estimation, allowing pixel-by-pixel retrieval diagnostics, including uncertainty and information content estimates. The COWa TCWV retrievals are compared with ground-based GNSS measurements (Ware et al. 2000), water vapour from AERONET (Pérez-Ramírez et al. 2014, Holben et al. 1998), and water vapour from ground-based microwave radiometers at the Atmospheric Radiation Measurement (ARM) sites (Turner et al. 2003, Turner et al. 2007). An extensive validation exercise demonstrates the high performance of the COWa water vapour retrieval. We also discuss a comparison of OLCI TCWV retrievals with ECMWF TCWV on a global scale over more than 4 years. We obtained seasonal patterns of the TCWV observations and the ECMWF analysis, which impressively demonstrate the potential impact of assimilating OLCI and other satellite TCWV observations in NWP and climate models.
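The optimal-estimation approach mentioned above weights a prior state against the measurement according to their covariances, and yields a posterior covariance and an averaging kernel as retrieval diagnostics. A minimal scalar sketch of the standard linear OE update; all numbers are illustrative, not the COWa configuration:

```python
import numpy as np

# Linear(ized) forward model y = K x + eps, single state element
x_a = np.array([20.0])          # prior TCWV (kg m^-2), illustrative
S_a = np.array([[25.0]])        # prior covariance
K = np.array([[0.8]])           # Jacobian of the forward model
S_e = np.array([[1.0]])         # measurement-noise covariance
y = np.array([18.0])            # observation
y_a = K @ x_a                   # forward model evaluated at the prior

# Standard optimal-estimation posterior covariance and state update
S_hat = np.linalg.inv(K.T @ np.linalg.inv(S_e) @ K + np.linalg.inv(S_a))
x_hat = x_a + S_hat @ K.T @ np.linalg.inv(S_e) @ (y - y_a)

# Averaging kernel: fraction of the retrieval coming from the measurement
A = S_hat @ K.T @ np.linalg.inv(S_e) @ K
print(x_hat, S_hat, A)
```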
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 1.15/1.16)

Presentation: Towards high resolution evaporation data integrating satellite observations and hybrid modelling

Authors: Diego Miralles, Oscar Baez Villanueva, Olivier Bonte, Joppe Massant, Fangzheng Ruan, Maximilian Söchting, Prof. Dr. Miguel Mahecha
Affiliations: Ghent University, Leipzig University
Terrestrial evaporation (E) is an essential climate variable linking the water, carbon, and energy cycles. It regulates precipitation and temperature, influences feedbacks from water vapor and clouds, and drives the occurrence of extreme events such as droughts, floods, and heatwaves. For water management, E represents a net loss of water resources, while in agriculture, transpiration determines irrigation demands. Despite its importance, global E estimates remain uncertain due to the limited availability of field measurements, the complex interplay of physiological and atmospheric processes, and challenges in capturing E through satellite observations. These gaps have driven innovation in modeling approaches that blend satellite data, in situ observations, and state-of-the-art algorithms. The fourth generation of the Global Land Evaporation Amsterdam Model (GLEAM4) enables the estimation of E and its components globally using a hybrid framework. The dataset spans 1980–2023 at a 0.1° resolution, offering improved representations of critical processes such as interception, atmospheric water demand, soil moisture dynamics, and groundwater access by plants. GLEAM4 integrates machine learning techniques to capture evaporative stress, leveraging eddy-covariance and sapflow data, while maintaining water balance and thermodynamic constraints. By reconciling the interpretability of physics-based models with the adaptability of machine learning, GLEAM4 provides a scalable solution for estimating E across ecosystems. Validation against hundreds of eddy-covariance sites demonstrates its robustness. Global land evaporation is estimated at 71 x 10³ km³ yr⁻¹, with 63% attributed to transpiration. In addition to E, the dataset provides complementary variables such as soil moisture, potential evaporation, sensible heat flux, and evaporative stress, facilitating diverse applications in hydrology, ecology, and climate science. 
Building upon GLEAM4, a new generation of high-resolution datasets is under development to meet the growing demand for actionable data in agriculture, water management, and climate adaptation. In this presentation, a 1-km resolution pilot dataset across Europe and Africa will be introduced, and its skill to capture the fine-scale dynamics of evaporation and soil moisture will be evaluated. Innovations include the assimilation of Sentinel-1 backscatter data to account for irrigation impacts, enabling precise evaporation estimates in agricultural regions, and the dynamic downscaling of radiation forcing using Land Surface Analysis Satellite Applications Facility (LSA SAF) and Moderate Resolution Imaging Spectroradiometer (MODIS) data. This high-resolution dataset will allow better characterization of droughts, heatwaves, and water resource distribution, particularly in regions vulnerable to climate variability, offering a valuable tool to manage water resources and mitigate climate impacts. Outputs from these efforts will be disseminated openly and include an interactive 3D data cube visualization, enabling timely access for researchers, policymakers, and stakeholders. This research is framed within the ESA Digital Twin Earth initiative and the Belgian Science Policy Office (BELSPO) STEREO IV programme.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 1.15/1.16)

Presentation: GIRAFE v1: A global precipitation climate data record from satellite data including uncertainty estimates

Authors: Marc Schroeder, Hannes Konrad, Anja Niedorf, Stephan Finkensieper, Remy Roca, Sophie Cloche, Giulia Panegrossi, Paolo Sano, Christopher Kidd, Rômulo Augusto Jucá Oliveira, Karsten Fennig, Madeleine Lemoine, Thomas Sikorski, Rainer Hollmann
Affiliations: Deutscher Wetterdienst, LEGOS, IPSL, CNR-ISAC, NASA/GSFC, Hydro Matters
We present a new precipitation climate data record (CDR), called GIRAFE (Global Interpolated Rainfall Estimation), which has recently been released by EUMETSAT’s Satellite Application Facility on Climate Monitoring (CM SAF). It covers a period of 21 years (2002 – 2022) with global coverage, daily temporal resolution and 1° x 1° spatial resolution. GIRAFE is a completely satellite-based data record obtained by merging infrared (IR) data from geostationary satellites and passive microwave (PMW) radiometers onboard polar-orbiting satellites. In addition to daily accumulated and monthly mean precipitation, a sampling uncertainty at the daily scale is provided within the range of the geostationary satellites (55°S - 55°N). The implementation of a continuous extension of GIRAFE via a so-called Interim CDR service has almost been completed, and the associated data will become available soon. For retrieving instantaneous rain rates from PMW observations, three different retrievals were used for microwave imagers (HOAPS) and sounders (PNPR-CLIM, developed by CNR-ISAC in the Copernicus C3S_312b_Lot1 project, and PRPS). Quantile mapping is applied to the instantaneous rain rates estimated from the observations of the 19 different PMW platforms to achieve stability over time. The IR observations from the geostationary satellites undergo a dedicated quality-control procedure. The uncertainty estimation is based on decorrelation ranges from variograms in the spatial and temporal dimensions. The merging of PMW and IR data and the technique for uncertainty estimation in GIRAFE are based on the methods of the Tropical Amount of Precipitation with an Estimate of ERrors (TAPEER) algorithm. Here, we present details of the GIRAFE algorithm and results of the quality assessment activity, comprising comparisons against other established global, regional and local precipitation products.
A focus will be on the analysis of the homogeneity of the GIRAFE data record relative to a variety of reference data records. Finally, results from the analysis of the consistency between precipitation extremes and surface temperature are presented and discussed.
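The quantile mapping used to stabilize the rain rates from the different PMW platforms matches empirical quantiles of one distribution to a reference. A basic CDF-matching sketch; the function and toy values are illustrative, not the GIRAFE implementation:

```python
import numpy as np

def quantile_map(source, reference):
    """Map each source value onto the reference distribution by matching
    empirical quantiles (a basic CDF-matching correction)."""
    source = np.asarray(source, dtype=float)
    ref_sorted = np.sort(np.asarray(reference, dtype=float))
    # Empirical quantile of each source value within its own sample
    ranks = np.argsort(np.argsort(source))
    q = (ranks + 0.5) / source.size
    return np.quantile(ref_sorted, q)

# Toy example: rain rates from one PMW platform, biased high vs. a reference
platform = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
reference = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
print(quantile_map(platform, reference))   # -> [0.2, 0.6, 1.0, 1.8, 3.2]
```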

Tuesday 24 June 16:15 - 17:45 (Room 1.15/1.16)

Presentation: Intercomparison of Earth Observation products for hyper-resolution hydrological modelling over Europe

Authors: Wouter A. Dorigo, Almudena García-García, Pietro Stradiotti, Federico Di Paolo, Paolo Filippucci, Milan Fischer, Matěj Orság, Luca Brocca, Jian Peng, Alexander Gruber, Bram Droppers, Niko Wanders, Arjen Haag, Albrecht Weerts, Ehsan Modiri, Oldrich Rakovec, Félix Francés, Matteo Dall'Amico, Luis Samaniego
Affiliations: Helmholtz-Zentrum für Umweltforschung GmbH - UFZ, Department of Geodesy and Geoinformation, TU Wien, Waterjade Srl, National Research Council of Italy, Research Institute for Geo-Hydrological Protection, Global Change Research Institute CAS, Department Computational Hydrosystems, UFZ - Helmholtz Centre for Environmental Research GmbH, University of Potsdam, Institute of Environmental Science and Geography, Department of Physical Geography, Utrecht University, Operational Water Management, Deltares, Hydrology and Environmental Hydraulics group, Wageningen University & Research, Faculty of Environmental Sciences, Czech University of Life Sciences Prague, Research Institute of Water and Environmental Engineering (IIAMA), Universitat Politècnica de València, Remote Sensing Centre for Earth System Research, Leipzig University
The increasing frequency and severity of hydrological extremes demand the development of early warning systems and effective adaptation and mitigation strategies. Such systems and strategies require high (spatial) resolution hydrological predictions, mostly provided by hydrological models. However, current state-of-the-art hydrological predictions remain limited in their spatial resolution. A proposed solution is the integration of high-resolution Earth observation (EO) products in hydrological modelling in order to reach hyper-resolution (approximately 1 km²). Nonetheless, proper use of these data in hydrological modelling requires a comprehensive characterisation of their uncertainties. Here, we present results from the 4DHydro project evaluating the performance of high-resolution EO products of four hydrological variables (precipitation, snow cover area, surface soil moisture, and evapotranspiration) against observational references. Two merged EO precipitation products at 1 km resolution (merged IMERG-SM2A and merged ERA5-IMERG-SM2A) reached correlation coefficients of more than 0.5 with the benchmark reference over most areas and are recommended for hyper-resolution hydrological modelling over Europe. The MODIS (resolution of 250 m) and Sentinel-2/Landsat-8 (resolution of 20 m) snow cover products showed the highest classification accuracy and were selected as the best choice for snow cover area input in hyper-resolution hydrological modelling. For surface soil moisture, the NSIDC SMAP product at 1 km resolution yielded correlation coefficients of more than 0.6 at most stations and is recommended for hyper-resolution hydrological modelling. Finally, the MODIS-Terra (MOD16A2) evaporation product at 500 m resolution, showing correlation coefficients higher than 0.8 at most eddy covariance towers, is recommended for the assimilation of ET in models.
The assimilation of the proposed high-resolution products in models, individually or in combination, could improve the performance of hyper-resolution modelling. Still, the development of integration workflows is required to overcome difficulties related to scale mismatches and data gaps.
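The correlation-based screening underlying these product recommendations can be sketched as follows (illustrative Python on synthetic series; the 0.5 threshold follows the evaluation described above, but the data and noise model are assumptions):

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation between two series, excluding NaN pairs."""
    ok = ~(np.isnan(a) | np.isnan(b))
    a, b = a[ok] - a[ok].mean(), b[ok] - b[ok].mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

rng = np.random.default_rng(1)
reference = rng.gamma(2.0, 1.5, 365)                   # benchmark daily series
product = 0.8 * reference + rng.normal(0.0, 1.0, 365)  # EO product with noise
product[10:15] = np.nan                                # a short data gap
r = pearson_r(product, reference)
recommended = r > 0.5   # screening criterion of the kind used in the study
```

In practice such a statistic would be computed per grid cell or station and summarised spatially before recommending a product.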

Tuesday 24 June 16:15 - 17:45 (Room 1.15/1.16)

Presentation: Next Generation Hydrographic Mapping to Support Hyper-Resolution Hydrological Modelling Across Europe - The New EU-Hydro 2.0

Authors: Linda Moser, Bernhard Lehner, Achim Roth, Amelie Lindmayer, Guia Marie Mortel, Leena Julia Warmedinger, Stephanie Wegscheider, Günther Grill, Carolin Keller, Stefan Ram, Antje Wetzel, Maria Kampouraki, Jose Miguel Rubio Iglesias, Joanna Przystawska, Inés Ruiz, Veronica Sanz
Affiliations: GAF AG, Confluvio Consulting Inc., German Remote Sensing Data Center, German Aerospace Center (DLR), European Environment Agency (EEA)
EU-Hydro is a hydrographic reference dataset and part of the Copernicus Land Monitoring Service (CLMS) portfolio, implemented by the European Environment Agency (EEA). It offers detailed information on the geographical distribution and spatial characteristics of water resources throughout Europe, such as river networks, surface water bodies and watersheds. EU-Hydro was initially developed in 2012, with subsequent updates aimed at improving data accuracy and network topology. However, inconsistencies remained: EU-Hydro could be used for mapping applications, but its use for hydrological modelling remained limited. It is currently being updated to produce an improved and upgraded version of this unique European reference dataset. Highlighting the importance of water mapping and modelling, the new version of EU-Hydro (EU-Hydro 2.0) shall meet the requirements of a modern reference product within the pan-European hydrological domain, serving various use cases, such as supporting water quality and availability analysis; runoff modelling or flood modelling and prediction; and environmental assessments related to river connectivity or the evaluation of anthropogenic impacts. The coastline can serve as input for analytical purposes in various applications. Moreover, policy areas such as nature restoration and climate adaptation can be addressed, all with the goal of strengthening water resilience across Europe. EU-Hydro 2.0 will build upon a latest-generation Digital Elevation Model (DEM) to provide highly detailed and high-quality topographic input data: the Copernicus DEM, a pan-European DEM available at 10 m resolution, based on the TanDEM-X mission, supported by the Copernicus DEM at 30 m resolution for upstream and downstream catchments that flow into and out of the EEA38+UK area (EU27 + European Free Trade Association (EFTA) + Western Balkans + Turkey + UK).
The production of EU-Hydro 2.0 will involve the best possible ancillary data of hydrography, land cover, and infrastructure to allow seamless integration into the DEM editing process, as well as VHR satellite data for quality control and validation. The product suite consists of eight main layers: The three main raster products are the hydrologically conditioned DEM (Hydro-DEM), the Flow Direction (Hydro-DIR) and the Flow Accumulation (Hydro-ACC) maps, supported by additional raster layers for expert hydrological use. The five vector products are the river network (Hydro-NET), water bodies (Hydro-WBO), basins and sub-watersheds (Hydro-BAS), a product on artificial hydrographic structures (Hydro-ART) and a coastline (Hydro-COAST). First, the process of DEM editing and hydro-conditioning involves refining the DEM to ensure it accurately represents the water flow and natural hydrological features. This includes correcting artefacts that interfere with flow connectivity, such as bridges and dams, filling sinks in the DEM that are caused by inherent uncertainties, and adjusting elevation data to create a hydrologically consistent surface, which is critical for accurate water flow analysis and watershed delineation. In particular, novel methods are employed to remove noise and distortions from the DEM due to vegetation cover and urban build-up, to enforce flow paths using high-resolution cartographic layers of water surfaces, rivers and lakes, and to centre drainage lines in the middle of larger water bodies. In a next step, the raster layers (i.e., the flow direction and accumulation maps as well as further advanced hydrological layers) are derived from the hydrologically conditioned DEM surface, and subsequently, the vector layers Hydro-NET and Hydro-BAS are extracted from them. Ancillary data are needed to generate Hydro-WBO, Hydro-COAST and Hydro-ART, which can only be partially derived from the Hydro-DEM. 
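The derivation of flow direction and flow accumulation layers from a hydrologically conditioned DEM can be illustrated with a minimal D8 (steepest-descent) sketch (illustrative Python on a toy grid; the operational Hydro-DIR/Hydro-ACC processing is far more elaborate and assumes a sink-free, conditioned DEM):

```python
import numpy as np

# Eight D8 neighbour offsets (row, col)
NEIGH = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def d8_directions(dem):
    """Steepest-descent (D8) flow direction; -1 marks an outlet/no descent."""
    ny, nx = dem.shape
    flow = np.full((ny, nx), -1, dtype=int)
    for i in range(ny):
        for j in range(nx):
            best, drop = -1, 0.0
            for k, (di, dj) in enumerate(NEIGH):
                ni, nj = i + di, j + dj
                if 0 <= ni < ny and 0 <= nj < nx:
                    d = (dem[i, j] - dem[ni, nj]) / np.hypot(di, dj)
                    if d > drop:
                        best, drop = k, d
            flow[i, j] = best
    return flow

def flow_accumulation(dem, flow):
    """Count of cells (incl. the cell itself) draining through each cell."""
    ny, nx = dem.shape
    acc = np.ones((ny, nx))
    # Visit cells from highest to lowest so upstream totals are final first.
    for idx in np.argsort(dem, axis=None)[::-1]:
        i, j = divmod(idx, nx)
        if flow[i, j] >= 0:
            di, dj = NEIGH[flow[i, j]]
            acc[i + di, j + dj] += acc[i, j]
    return acc

dem = np.array([[3.0, 3.0, 3.0],
                [2.0, 2.0, 2.0],
                [1.0, 1.0, 1.0]])   # a plane tilted southwards
acc = flow_accumulation(dem, d8_directions(dem))
```

On this tilted plane every column drains straight down, so the bottom row accumulates all three cells of its column.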
To ensure a homogeneous approach across Europe regardless of geographical differences, the best possible methodologies must be found. The greatest expected challenges are a) the correct delineation of river networks and watershed boundaries in regions with mostly flat terrain, low topographic variation, and dense vegetation cover which affects DEM accuracy, such as large floodplains; b) the correct interpretation of flow topology in highly modified landscapes such as urban or irrigated areas, where artificial canals can dominate over elevation-derived flow paths; and c) the correct detection and interpretation of special flow features such as inland depressions, underground flow connections in karst areas, or the complex structures of deltaic systems. Furthermore, inconsistencies among and within European countries related to the quality and completeness of available ancillary datasets used to improve the river and watershed delineations may introduce some regional differences in achievable accuracies. The derived raster layers will be a significant enhancement and novelty of EU-Hydro 2.0, alongside other key additions such as hierarchically nested watersheds, an updated coastline dataset, and detailed maps of water bodies and artificial structures, i.e., the vector products, all integrated within a topologically consistent river network. All layers will be interrelated, scalable and logically consistent. The approach aims for transparency and automation to the extent possible, supported by manual corrections where needed to increase quality and meet user requirements. This will ensure efficient and reproducible data processing and facilitate further updates of EU-Hydro in the future. The pan-European production is targeted to be finalized by summer 2026. The upgraded EU-Hydro 2.0 suite of products will constitute a harmonized, homogeneous and consistent reference dataset for Europe.
It will open a new era for water mapping in the framework of CLMS, further Copernicus Services and other fields, and will also serve hydrological modelling, hydrologic risk assessments, climate change studies, water resource management, environmental protection strategies, and infrastructure planning, hence supporting the implementation of the EU Biodiversity Strategy, in particular the Nature Restoration Law. Its potentially crucial role for society is further emphasised by the rising frequency of natural disasters, such as droughts and floods, the latter being one of the use cases for modelling. With its high-quality, free, and openly accessible data, EU-Hydro 2.0 will help address these challenges, which urgently call for enhanced water resilience in Europe.

Tuesday 24 June 16:15 - 17:45 (Room 0.49/0.50)

Session: A.08.12 Advances and applications of sea surface temperature and the Group for High Resolution Sea Surface Temperature

Sea surface temperature (SST) is a fundamental physical variable for understanding, quantifying and predicting complex interactions between the ocean and the atmosphere. SST measurements have been performed operationally from satellites since the early 1980s and benefit a wide spectrum of applications, including ocean, weather, climate and seasonal monitoring/forecasting, military defense operations, validation of atmospheric models, sea turtle tracking, evaluation of coral bleaching, tourism, and commercial fisheries management. The international science and operational activities are coordinated within the Group for High Resolution Sea Surface Temperature (GHRSST) and the CEOS SST Virtual Constellation (CEOS SST-VC), providing daily global SST maps for operational systems, climate modeling, and scientific research. GHRSST promotes the development of new products and the application of satellites for monitoring SST by enabling SST data producers, users and scientists to collaborate within an agreed framework of best practices.

New satellites with a surface temperature observing capacity, such as CIMR, Sentinel-3C/D, and Sentinel-3 Next Generation Optical, are currently being planned for launch and operations by ESA and EUMETSAT. In addition, new ultra-high-resolution missions such as TRISHNA and LSTM are in planning. These satellite missions continue the provision of high-quality SST observations and open up opportunities for further applications. However, this will also require new developments and innovations in retrievals, validation, etc. It is therefore important that developments in high-resolution SST products are presented and coordinated with the ongoing international SST activities. Research and development continue to tackle problems such as instrument calibration, algorithm development, diurnal variability, derivation of high-quality skin and depth temperature, the relation with sea ice surface temperature (IST) in the marginal ice zone, and areas of specific interest such as the high latitudes and coastal areas.

This session is dedicated to the presentation of applications and advances within SST and IST observations from satellites, including the calibration and validation of existing L2, L3 and L4 SST products in GHRSST Data Specification (GDS) and preparation activities for future missions. We also invite submissions for investigations that look into the harmonization and combination of products from multi-mission satellites.

Tuesday 24 June 16:15 - 17:45 (Room 0.49/0.50)

Presentation: Global satellite-based sea and sea-ice surface temperatures since 1982

Authors: Pia Englyst, Ioanna Karagali, Ida L. Olsen, Jacob L. Høyer, Guisella Gacitúa, Alex Hayward
Affiliations: Danish Meteorological Institute
Sea surface temperature (SST) and sea-ice surface temperature (IST) are both essential climate variables (ECVs), and long-term stable observational records of these (and other ECVs) are crucial to monitor, characterize and understand the state of climate as well as its variability and changes. We present a 43-year climate data record (CDR, 1982-2024) of global combined sea and sea-ice surface temperature which has been produced from satellite observations (independent of in situ measurements) within the Copernicus Climate Change Service (C3S). Satellite observations from both infrared and microwave sensors have been blended using an optimal interpolation scheme to provide daily gap-free fields of combined SST and IST on a global 0.05° regular latitude-longitude grid. Efforts have been put into improving the surface temperature estimates over sea ice and the marginal ice zone, and into improving the uncertainties of the surface temperatures over both sea and sea ice. For consistency with existing L4 SST products, the global C3S SST/IST CDR also includes an estimate of the under-ice water temperature (UISST) in sea-ice covered regions, which is based on an improved methodology using the sea ice concentration and a monthly climatology of salinity. The derived surface temperatures have been validated against independent in situ observations from a wide range of sources, including ships, drifting/moored buoys and Argo floats over open ocean, and flight campaigns, ice mass balance buoys and other drifting buoys/platforms over sea ice. The global CDR performs similarly to existing ESA Climate Change Initiative (CCI) SST datasets over open ocean, and similarly to an earlier (Arctic-only) version of this dataset produced within the Copernicus Marine Service (CMS) over sea ice. The combination of SST and IST provides a much better and more consistent indicator of climate change and surface temperature trends in the high latitudes, where the coverage of sea ice changes rapidly.
The global combined sea and sea-ice surface temperature has risen by about 0.5°C over the period 1982-2024, which is ~25-30% more than observed in existing global L4 SST products when considering the global ocean (using the under-ice SSTs) and the region between 60°S and 60°N. This highlights the importance of the combined sea and sea-ice surface temperature indicator for monitoring the actual surface temperature trends in high latitudes.
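The optimal-interpolation blending used to produce gap-free fields can be illustrated in one spatial dimension (illustrative Python; the Gaussian covariance model, length scale and error magnitudes are assumptions, not the C3S configuration):

```python
import numpy as np

def oi_analysis(grid_x, obs_x, obs_y, background, obs_err, bg_err, L):
    """Single-variable optimal interpolation with Gaussian spatial
    background-error covariances of length scale L."""
    B_ao = bg_err**2 * np.exp(-0.5 * ((grid_x[:, None] - obs_x[None, :]) / L) ** 2)
    B_oo = bg_err**2 * np.exp(-0.5 * ((obs_x[:, None] - obs_x[None, :]) / L) ** 2)
    R = obs_err**2 * np.eye(obs_x.size)       # observation-error covariance
    K = B_ao @ np.linalg.inv(B_oo + R)        # Kalman-type gain
    innovation = obs_y - np.interp(obs_x, grid_x, background)
    return background + K @ innovation

grid = np.linspace(0.0, 10.0, 101)
background = np.full(grid.size, 2.0)          # first-guess SST (degC)
obs_x = np.array([2.0, 5.0, 8.0])             # satellite observation locations
obs_y = np.array([2.5, 3.0, 1.5])             # observed SST/IST (degC)
analysis = oi_analysis(grid, obs_x, obs_y, background, 0.2, 0.5, 1.0)
```

The analysis is pulled toward each observation near its location and relaxes back to the background in data gaps, which is how sparse swath data yield a daily gap-free field.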

Tuesday 24 June 16:15 - 17:45 (Room 0.49/0.50)

Presentation: Have we been underestimating midlatitude air-sea interaction?

Authors: Cristina González Haro, Javier García-Serrano, Aina García-Espriu, Antonio Turiel
Affiliations: Institut Ciències Del Mar (ICM-CSIC), Institut Català per la recerca i governança del mar (ICATMAR), Group of Meteorology (METEO-UB), Universitat de Barcelona
Some traditional, climate-oriented sea surface temperature (SST) observational datasets do not generally include satellite data and are typically based on in-situ observations with a coarser spatial resolution (1 to 2 degrees), prominent examples being the Extended Reconstructed SST from NOAA (ERSST) and the Hadley Centre SST, version 3 (HadSST3). Other datasets combine both in-situ and satellite observations, such as the Hadley Centre Sea Ice and Sea Surface Temperature dataset (HadISST). The objective of this work is twofold. First, we globally characterize SST climatology and variability at grid-point level, considering seasonal averages (DJF, MAM, JJA, SON), and compare two standard, climate-oriented datasets, HadISST (1° resolution) and ERSST v5 (2° resolution), with the GHRSST product developed by the European Space Agency Climate Change Initiative (CCI) (0.05° resolution). Secondly, we assess the impact of temporal and spatial resolution on this SST characterization as well as on air-sea interaction, estimated by correlating SST with turbulent heat flux (THF; latent plus sensible). The study spans 1982-2016 (35 years), corresponding to the record of the satellite product (CCI). Our results show that the coarser datasets (ERSST-HadISST) overall have a warmer mean-state, except in the more dynamically-active oceanic regions, such as the western boundary currents, where they yield a colder SST climatology. More interestingly, the high-resolution dataset (CCI) markedly displays larger SST variability in these dynamically-active oceanic regions, which is consistent along the seasonal cycle. Likewise, we also find higher correlations between SST and THF over the western boundary currents in CCI as compared to ERSST-HadISST, indicating a stronger ocean-atmosphere coupling. Our results suggest that the high temporal and spatial resolution provided by remote sensing is key to better resolving air-sea interaction.

Tuesday 24 June 16:15 - 17:45 (Room 0.49/0.50)

Presentation: Exceptional Global Sea Surface Warming Driven by Earth’s Energy Imbalance

Authors: Owen Embury, Christopher Merchant, Richard Allan
Affiliations: University Of Reading, National Centre for Earth Observation
Sea surface temperature (SST) is a fundamental parameter within the Earth’s climate system, making global mean SST (GMSST) a key diagnostic for analyzing current climate change. The change in GMSST is not steady but shows both multi-decadal changes in warming trends and year-to-year fluctuations reflecting chaotic internal variability, such as the El Niño Southern Oscillation (ENSO), and external forcing including solar, volcanic, and anthropogenic effects. During the record-breaking ocean surface temperatures of 2023 and 2024, the GMSST exceeded previous observed seasonal maxima for approximately 15 months, with a maximum margin of 0.3 K. As with previous record-breaking periods (1997/98 and 2015/16), this was triggered by a strong El Niño episode. However, the degree of warming observed in the 2023/24 event cannot be explained by ENSO variability alone: the 2023/24 El Niño was the weakest of the three episodes, but the strongest in terms of record-breaking GMSST amplitude and duration. We present an assessment of the last 40 years of GMSST based on the new SST climate data record from the European Space Agency Climate Change Initiative and a statistical model using known drivers of variability and change, showing that the increase in GMSST is accelerating and that the long-term trend in SST cannot be assumed linear. The accelerating GMSST trend is physically linked to the growth in the Earth Energy Imbalance (EEI), allowing changes in GMSST to be predicted for future scenarios of EEI. These indicate that GMSST will continue to increase faster than expected from a linear extrapolation of the previous four decades. Even under a "mitigated" EEI scenario, the GMSST is likely to increase by 0.6 K over the next two decades, compared to 0.26 K from the linear fit.
Policy makers and wider society should be aware that the rate of global warming over recent decades is a poor guide to the faster change that is likely over the decades to come, underscoring the urgency of deep reductions in fossil-fuel burning.
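The contrast between a linear extrapolation and an accelerating trend can be reproduced on synthetic data (illustrative Python; the coefficients are invented for demonstration, and this is not the paper's statistical model, which uses physical drivers such as ENSO and EEI):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(40)                                  # years since start of record
# synthetic GMSST anomaly: linear trend plus an acceleration term plus noise
gmsst = 0.010 * t + 0.0002 * t**2 + rng.normal(0.0, 0.02, t.size)

lin = np.polyfit(t, gmsst, 1)                      # straight-line fit
quad = np.polyfit(t, gmsst, 2)                     # fit allowing acceleration
horizon = t[-1] + 20                               # two decades ahead
rise_lin = np.polyval(lin, horizon) - np.polyval(lin, t[-1])
rise_quad = np.polyval(quad, horizon) - np.polyval(quad, t[-1])
```

When the underlying series accelerates, the linear fit systematically understates the projected rise over the next two decades, mirroring the 0.26 K versus 0.6 K contrast quoted above.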

Tuesday 24 June 16:15 - 17:45 (Room 0.49/0.50)

Presentation: Development of Retrieval Algorithms for Level-2 Sea Surface and Lake Surface Water Temperature for CIMR

Authors: Emy Alerskans, Jacob Høyer, Pia Englyst, Ida Lundtorp Olsen
Affiliations: Danish Meteorological Institute
Observations of sea surface temperature (SST) from passive microwave (PMW) sensors are important complements to traditional infrared (IR) observations. However, the resolution of current microwave imagers is not sufficient to capture sub- to mesoscale variability. Furthermore, they suffer from coastal and sea ice contamination. The Copernicus Imaging Microwave Radiometer (CIMR) is currently being prepared by the European Space Agency (ESA) as part of the Copernicus Expansion programme for the European Union, with an expected launch in 2029. CIMR is designed to provide high-resolution and high-accuracy PMW measurements of a selected range of geophysical variables. SST is one of the key parameters for monitoring and understanding climate change. Temperature changes due to climate change are most pronounced in polar regions, which is why it is essential to have accurate estimates of SST in these regions. SST is therefore one of the main parameters of the CIMR mission. Furthermore, CIMR will also provide PMW measurements of Lake Surface Water Temperature (LSWT). LSWT is an important indicator of lake hydrology and biogeochemistry and can be used as an indicator of how climate change affects lakes. Furthermore, variations in LSWT can impact the weather and climate of the surrounding areas. However, due to the resolution of current microwave imagers, LSWT products have not previously been developed from PMW measurements. The enhanced resolution of CIMR will therefore make it possible to produce an LSWT product for large lakes. Currently, the retrieval algorithms for CIMR Level-2 SST and LSWT are being developed. The CIMR SST retrieval algorithm is a 2-step statistically-based algorithm with so-called localised algorithms, which make it possible to take into account non-linear relationships between the brightness temperatures (TBs) and other variables, such as wind speed.
The LSWT retrieval algorithm is based on the SST algorithm and is tuned toward lake properties, using a matchup dataset with reference lake temperatures, TBs and auxiliary data, such as numerical weather prediction (NWP) data. In the first phase, the retrieval algorithms are developed using AMSR2 TBs and are thereafter fine-tuned using simulated CIMR data. Validation is performed using both AMSR2 TBs from matchup datasets and simulated CIMR TBs, making use of two kinds of demonstration reference scenes: (i) artificial test scenes, consisting of typical brightness temperatures for different surface types arranged in artificial patterns corresponding to real-world scenarios, such as ocean-land and ocean-sea ice transitions; and (ii) realistic test scenes, consisting of simulated CIMR brightness temperatures.
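The idea behind localised statistical retrievals, i.e. separate regression coefficients trained per regime to capture non-linear TB relationships, can be sketched as follows (illustrative Python on synthetic data; the wind-speed bin edges, regression form and TB model are assumptions, not the CIMR algorithm):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
wind = rng.uniform(0.0, 20.0, n)        # wind speed (m/s)
sst = rng.uniform(271.0, 303.0, n)      # "true" SST (K)
# synthetic brightness temperatures whose SST sensitivity depends on wind,
# i.e. a non-linear TB-SST relationship
tb = 150.0 + 0.4 * sst + 0.8 * wind + 0.01 * wind * sst + rng.normal(0.0, 0.3, n)

edges = [5.0, 10.0, 15.0]
bins = np.digitize(wind, edges)         # four wind-speed classes
coefs = {}
for b in range(4):                      # one linear regression per class
    m = bins == b
    X = np.column_stack([np.ones(m.sum()), tb[m], wind[m]])
    coefs[b], *_ = np.linalg.lstsq(X, sst[m], rcond=None)

def retrieve(tb_v, wind_v):
    """Retrieve SST using the coefficients local to the wind-speed class."""
    c = coefs[int(np.digitize(wind_v, edges))]
    return c[0] + c[1] * tb_v + c[2] * wind_v

err = np.array([retrieve(tb[i], wind[i]) - sst[i] for i in range(1000)])
rmse = float(np.sqrt(np.mean(err**2)))
```

Within each class the TB-SST relationship is approximately linear, so the piecewise fit captures a non-linearity that a single global regression would smear out.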

Tuesday 24 June 16:15 - 17:45 (Room 0.49/0.50)

Presentation: Using information from microwave imager radiances to improve the ocean analysis in a coupled atmosphere-ocean model

Authors: Tracy Scanlon, Niels Bormann, Alan Geer, Philip Browne, Tony McNally
Affiliations: ECMWF
Microwave imagers play a key role in Numerical Weather Prediction (NWP) systems, providing information about atmospheric humidity, temperature, cloud and precipitation as well as surface information such as skin temperature and sea ice. Recently, low-frequency microwave channels (6.9 and 10.65 GHz) from AMSR2 and GMI have been included under the all-sky (clear and cloudy) route into the ECMWF NWP system with a view to exploiting their surface information content over open oceans. Knowledge of the ocean surface skin temperature is vital to the accurate use of satellite radiances in weather forecasting, and it helps improve the quality of forecasts for both the ocean and atmosphere. The RadSST method utilises skin temperature increments generated in the atmospheric 4D-Var using a sink variable approach. These increments are then passed to the ocean component of the coupled system to update the ocean state via NEMOVAR. The skin temperature increments generated in the atmospheric 4D-Var are shown to address the time delay between the input SST retrieval products used to describe the ocean in the uncoupled system and the time of the microwave imager observations, particularly in the region of tropical instability waves. When assimilated within the coupled system, these increments are demonstrated to improve the fit of the ocean background to in-situ observations from Argo floats. Building on the improvements seen in the coupled system, work is ongoing to further understand the relationship between the bulk (foundation) SST used as an input to the NWP system and the skin temperature seen by the microwave imagers. This is explored using a machine learning approach, and it is hoped that this will inform an update of the current skin temperature parameterisation, making it more applicable to the microwave imager channels used.
The framework to use MW radiances to inform the ocean analysis will also be expanded to other upcoming sensors, such as AMSR3 and the future CIMR instrument. The latter activities will be performed under the new Data Assimilation and Numerical Testing for Copernicus eXpansion missions (DANTEX) project.

Tuesday 24 June 16:15 - 17:45 (Room 0.49/0.50)

Presentation: Monitoring the sea surface temperature from IASI for climate application

Authors: Virginie Capelle, Jean-Michel Hartmann, Cyril
Affiliations: Ecole Polytechnique/Laboratoire de Météorologie Dynamique/IPSL
The sea surface (skin) temperature (SST) is a key parameter in climate science, meteorology and oceanography. Being at the ocean-atmosphere interface, it plays a crucial role in the variability and regulation of climate, and its knowledge is essential to understand heat, momentum and gas exchange processes between the ocean and the atmosphere. As such, it is recognized as one of the essential variables for which accurate and global measurements are needed for the understanding, monitoring and forecasting of climate evolution, as well as for numerical weather prediction. Within this framework, satellite remote sensing, by providing daily and global observations over long time series, offers good opportunities. In particular, the excellent calibration and stability of the IASI instrument and the planned long time series of observations provided by the suite of three satellites Metop-A, -B and -C are fully consistent with the quality requirements. We analyze here 18 years of the SST time series retrieved from IASI on board the three Metop satellites using a fully physically-based algorithm. This dataset is characterized by: (i) total independence from in-situ measurements or models; (ii) a high accuracy, assessed by a systematic comparison with in-situ depth-temperature measurements, with a mean difference lower than 0.05 K and a robust standard deviation of 0.25 K; (iii) an excellent stability of the time series, with a trend of the bias compared to in-situ measurements lower than 0.05 K/decade over the 2007-2024 period; and (iv) a perfect consistency between the three generations of IASI on board Metop-A, -B, and -C, for which monthly comparisons over their overlapping period give a mean SST difference lower than 0.02 K and a standard deviation of 0.3 K. Altogether, these results satisfy the prerequisites required to consider an SST time series as a climate data record.
This opens promising perspectives by demonstrating the possibility to provide an accurate and stable SST time series from IASI over the planned 20 years of the Metop suite, which will be followed by two more decades of the IASI-New Generation missions.
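Robust standard deviations of the kind quoted in such validation results are typically computed as a scaled median absolute deviation, which damps the influence of contaminated matchups; a minimal sketch (illustrative Python with synthetic satellite-minus-in-situ differences; the contamination model is invented):

```python
import numpy as np

def robust_std(diff):
    """Scaled median absolute deviation: matches the standard deviation of a
    Gaussian (factor 1.4826) while damping the influence of outliers."""
    return float(1.4826 * np.median(np.abs(diff - np.median(diff))))

rng = np.random.default_rng(4)
diffs = rng.normal(0.03, 0.25, 20000)   # satellite-minus-in-situ SST (K)
diffs[:200] += 5.0                      # simulate a few contaminated matchups
```

Here the robust estimate stays close to the 0.25 K spread of the clean population, while the ordinary standard deviation is inflated by the outliers.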

Tuesday 24 June 16:15 - 17:45 (Hall L3)

Session: A.05.07 Sea level change from global to coastal scales and causes

Sea level changes at global and regional scales have been routinely measured by high-precision satellite altimetry for more than three decades, leading to a broad variety of climate-related applications. Recently, reprocessed altimetry data in the world's coastal zones have also provided novel information on decadal sea level variations close to the coast, complementing the existing tide gauge network. Since the early 2010s, the ESA Climate Change Initiative programme has played a major role in improving the altimetry-based sea level data sets at all spatial scales, while also supporting sea level-related cross-ECV (Essential Climate Variable) projects dedicated to assessing the closure of the sea level budget at global and regional scales. Despite major progress, several knowledge gaps remain, including, for example:
• Why is the global sea level budget not closed since around 2017?
• Why is the regional sea level budget not closed in some oceanic regions?
• How can altimetry-based coastal sea level products be further improved?
• How can we enhance the spatial coverage of these products, which are currently limited to satellite tracks?
• To what extent do small-scale sea level processes impact sea level change in coastal areas?
• Can we provide realistic uncertainties on sea level products at all spatial scales?
• What is the exact timing of the emergence of anthropogenic forcing in observed sea level trends at regional and local scale?
In this session, we encourage submissions dedicated to improving multi-mission altimetry products and associated uncertainties, as well as assessing sea level budget closure at all spatio-temporal scales. Submissions providing new insights on processes acting on sea level at different spatial and temporal scales are also welcome. Beyond altimetry-based analyses, studies using other space-based and in-situ data, as well as modelling studies, are highly encouraged.

Tuesday 24 June 16:15 - 17:45 (Hall L3)

Presentation: Why Are Interannual Sea Level Variations at the U.S. Northeast and Southeast Coasts Uncorrelated?

Authors: Dr. Ou Wang, Tong Lee, Dr. Thomas Frederikse, Dr. Rui Ponte, Dr. Ian Fenty, Dr. Ichiro Fukumori, Dr. Ben Hamlington
Affiliations: NASA Jet Propulsion Laboratory, University of California Los Angeles, Atmospheric and Environmental Research
The magnitude of interannual sea-level anomaly (SLA) along the East Coast of the United States (U.S.) can be comparable to that of global mean sea-level rise over a few decades. These interannual SLA variations contribute to more frequent nuisance floods that affect coastal communities. Altimetry measurements suggest that interannual SLAs at the U.S. East Coast are highly correlated along the northeast or southeast sectors (separated by Cape Hatteras) but are uncorrelated between the two sectors. These features are reproduced by the Estimating the Circulation and Climate of the Ocean (ECCO) ocean state estimate, which is constrained by altimetry data. Here we use the ECCO state estimate and sensitivity analysis to pinpoint the atmospheric forcing type and forcing region that make interannual SLAs at the Northeast and Southeast U.S. Coasts correlated or uncorrelated. We find that nearshore winds north of Cape Hatteras cause interannual SLAs in the northeast and southeast sectors to co-vary because these winds cause fast propagation of coastally trapped waves down the U.S. East Coast with time scales of weeks. Offshore winds are the major factor causing uncorrelated interannual SLAs between the Northeast and Southeast U.S. Coasts because (1) offshore winds affect SLA in the southeast sector much more strongly than SLA in the northeast sector, and (2) open-ocean baroclinic Rossby waves generated by offshore winds take months to years to reach the U.S. East Coast. Overall, buoyancy forcing is much less important than winds in causing interannual SLAs at the Northeast and Southeast U.S. Coasts, although surface heat flux can induce marine heatwaves that cause SLAs as large as wind-generated SLAs in the northeast sector. The insight gained from our causal analysis provides information that is helpful for developing machine-learning-based prediction models for interannual sea-level variation along the U.S. East Coast.

Tuesday 24 June 16:15 - 17:45 (Hall L3)

Presentation: Reconciling Satellite-based Measurements of the Ice Sheets’ Contribution to Sea Level Rise – Update from the Ice Sheet Mass Balance Intercomparison Exercise (IMBIE)

Authors: Inès Otosaka, Andrew Shepherd
Affiliations: Centre For Polar Observation And Modelling
The Greenland and Antarctic Ice Sheets remain the most uncertain contributors to future sea level rise; according to the IPCC Sixth Assessment Report (AR6), they are projected to contribute between 0.08 and 0.59 m and between 0.02 and 0.56 m, respectively, to global mean sea level by 2100. Producing an observational record of ice sheet mass changes is thus critical for constraining projections of future sea level rise. The Ice Sheet Mass Balance Inter-Comparison Exercise (IMBIE), led by ESA and NASA, aims to reconcile estimates of ice sheet mass balance from satellite altimetry, gravimetry, and the mass budget method through community efforts. Building on the success of the three previous phases of IMBIE – during which satellite-based estimates of ice sheet mass balance were reconciled within their respective uncertainties and which showed a 6-fold increase in the rate of mass loss during the satellite era – IMBIE has now entered its fourth phase. The objectives of this new phase of IMBIE, supported by ESA CCI, are to (i) provide annual assessments of ice sheet mass balance, (ii) partition mass changes into dynamics and surface mass balance processes, (iii) produce regional assessments and (iv) examine the remaining biases between the three geodetic techniques, all in order to provide more robust and regular estimates of ice sheet mass balance and their contribution to global mean sea level rise. In this paper, we report on the recent progress of IMBIE-4. We present an updated time series of mass changes of Greenland and Antarctica from the 1970s until the end of 2023. We examine the drivers of Greenland and Antarctica mass trends, showing that while ice dynamics remain the main driver of Antarctica’s mass loss, in Greenland, ice losses from reduced surface mass balance have exceeded ice dynamics losses for the first time during the last decade.

Tuesday 24 June 16:15 - 17:45 (Hall L3)

Presentation: Extrapolation of the Satellite Altimeter Record to Understand Regional Variations in Future Sea Level Change

Authors: Robert Steven Nerem, Ashley Bellas-Manley, Benjamin Hamlington
Affiliations: University Of Colorado, Jet Propulsion Laboratory
We perform a quadratic extrapolation of sea level on a regional scale based on satellite altimeter observations spanning 1993-2022, including corrections for internal variability and a rigorous assessment of the uncertainties associated with serially correlated formal errors, GIA, and satellite altimeter measurement errors. The errors in these regional extrapolations are relatively narrow, and we show significant overlap with the regional projections from the most recent IPCC 6th Assessment Report. The extrapolations are completely data-driven, model-independent, and show the trajectory of sea level change over the last 30 years extrapolated into the future. These extrapolations suggest that sea level rise in 2050 relative to 2020 will be 24 ± 10 cm in the North Indian Ocean, 22 ± 5 cm in the mid-North Atlantic, 22 ± 4 cm in the North Pacific, 16 ± 4 cm in the South Atlantic, 12 ± 4 cm in the South Pacific, 11 ± 4 cm in the Tropical Pacific, and 10 ± 4 cm in the Antarctic Circumpolar Ocean. The regional results may differ from each other by more than 100% and differ significantly from the extrapolated global mean sea level rise of 17 ± 4 cm in most cases. The results highlight the importance of considering regional variations in estimates of future sea level and provide an additional line of evidence when considering the representativeness of the range of climate model projections in describing near-term sea level rise.
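The core of the extrapolation approach can be sketched in a few lines. The series below is entirely synthetic (a ~3 mm/yr trend plus a small acceleration), and the study's corrections for internal variability, GIA, and serially correlated errors are omitted:

```python
import numpy as np

# Synthetic annual-mean regional sea-level anomaly series, 1993-2022 (mm).
years = np.arange(1993, 2023)
t = years - 1993.0
rng = np.random.default_rng(0)
sla_mm = 3.0 * t + 0.04 * t**2 + rng.normal(0.0, 2.0, t.size)

# Least-squares quadratic fit: sla ≈ c0 + c1*t + c2*t^2.
c2, c1, c0 = np.polyfit(t, sla_mm, deg=2)

def fitted(year):
    """Evaluate the fitted quadratic at a given calendar year."""
    tt = year - 1993.0
    return c0 + c1 * tt + c2 * tt**2

# Extrapolated rise in 2050 relative to 2020, in centimetres.
rise_cm = (fitted(2050) - fitted(2020)) / 10.0
print(round(rise_cm, 1))
```

With the synthetic coefficients above, the extrapolated 2020-to-2050 rise lands in the same ballpark as the regional numbers quoted in the abstract; the real analysis additionally propagates correlated-error uncertainties onto that value.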

Tuesday 24 June 16:15 - 17:45 (Hall L3)

Presentation: Level-2-Based Gridded GRACE Ocean Mass Change Estimates and Their Uncertainty Characterisation to Assess the Closure of the Sea Level Budget

Authors: Thorben Döhne, Martin Horwath, Marie Bouih
Affiliations: TU Dresden University of Technology, Magellium
The assessment of the global sea-level budget reveals the extent to which all significant causes of sea-level variability are identified and whether their combined effects correspond to observed total sea-level changes. It also helps validate the underlying measurement systems. The misclosure of the global sea level budget in recent years motivates the need for a deeper understanding of the individual measurement systems. Assessing the sea level budget on regional scales can also help to focus investigations on certain regions. Inversion methods applied to the sea-level budget components can further help identify time periods in which individual components diverge from the others. A robust and reliable uncertainty characterisation is absolutely crucial for this assessment. GRACE-derived mass changes provide a unique observation of the ocean-mass component but are afflicted by problems of signal leakage stemming from the limited spatial resolution and required filtering. Signal leakage is particularly pronounced along the ocean margins and at smaller scales. Mass changes derived for the global ocean traditionally counteract the leakage of land signals into the ocean by employing a buffer zone along the ocean margins that is excluded from the subsequent integration. For gridded mass changes that include the regions along the ocean margins, more sophisticated analysis methods have been developed. Mascon solutions provide a framework for such gridded mass change solutions and have become convenient and popular for many users. However, design choices inherent to these traditionally Level-1-based solutions are difficult for users to assess, or to adapt, with regard to specific applications such as regional ocean-mass changes. Mascon solutions based on Level-2 gravity field solutions allow more access to, and control of, design choices by a wider range of scientists.
We derive such gridded mass changes based on GRACE Level-2 spherical harmonics by extending the method of tailored sensitivity kernels from regional mass changes to globally distributed mascons. During the analysis, different design choices are implemented to realise a compromise between propagated GRACE Level-2 solution errors and leakage errors. We present the impact of two design choices on ocean mass change and signal leakage across the land-ocean margin: (a) the amendment of a-priori mascon patterns by their sea-level fingerprints and (b) the choice of signal variances and covariances. The resulting sensitivity kernels, which describe the weighting functions used to integrate the input data, allow for a direct interpretation of the mass integration step of individual mascons. We further use the resulting sensitivity kernels to assess time-dependent error variances and covariances of integrated ocean-mass changes. The considered temporal correlations range from uncorrelated monthly noise to fully correlated long-term trend errors and include the following error sources: (a) noise propagated from the GRACE Level-2 solutions, (b) errors propagated from low-degree harmonics, (c) leakage errors, and (d) errors of geophysical corrections. We present our gridded mass change solutions, the resulting global and regional ocean mass changes, and our uncertainty assessment in the form of error variance-covariance matrices. We also highlight preliminary results of an assessment of sea-level budget closure within ESA’s Sea Level Budget Closure CCI+ project.
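The mass-integration and error-propagation steps described above reduce, schematically, to a weighted sum over mascons and a quadratic form with the error covariance. Everything below is synthetic and illustrative (the real kernels and covariances come from the Level-2 processing):

```python
import numpy as np

# Hypothetical sensitivity-kernel integration over n mascons.
rng = np.random.default_rng(1)
n = 50
w = rng.uniform(0.0, 1.0, n)   # sensitivity-kernel weights (assumed known)
x = rng.normal(0.0, 2.0, n)    # mascon mass anomalies in Gt (synthetic)

# Assumed error covariance: unit variance with inter-mascon correlation 0.3.
C = 0.3 * np.ones((n, n)) + 0.7 * np.eye(n)

ocean_mass = w @ x             # integrated ocean-mass change (Gt)
sigma = np.sqrt(w @ C @ w)     # 1-sigma uncertainty via w^T C w
print(round(ocean_mass, 2), round(sigma, 2))
```

The quadratic form makes the role of temporal and spatial correlations explicit: with the correlated covariance above, sigma exceeds the purely diagonal (uncorrelated-noise) value, which is exactly why the study distinguishes uncorrelated monthly noise from fully correlated trend errors.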

Tuesday 24 June 16:15 - 17:45 (Hall L3)

Presentation: Observed regional sea level trends in the tropical Pacific Ocean over 2014-2023: causes and associated mechanisms

Authors: William Llovel, Antoine Hochet
Affiliations: LOPS/CNRS
High-precision satellite altimetry data have revolutionized our understanding of regional sea level changes on seasonal to decadal timescales. For the first time, satellite altimetry reveals large-scale spatial patterns in regional sea level trends. Some regions (e.g., the tropical Pacific Ocean) have experienced a linear rise three times as large as the global mean sea level trend. Steric sea level change has been identified as one of the major contributors to the regional variability of sea level trends observed by satellite altimetry over the past decades. The temperature contribution to sea level (known as thermosteric sea level) has generally been found to be more important than the salinity effect (i.e., halosteric sea level). The salinity contribution to regional sea level trends has been less studied than the temperature contribution, both because the halosteric contribution to global mean sea level is close to zero and because of the lack of historical salinity measurements. In this study, we investigate regional sea level trends inferred from satellite altimetry data and from Argo floats since 2005 to assess their temperature and salinity contributions. We focus our analysis on large-scale halosteric sea level trends in the tropical oceans, which we link to the surface atmospheric forcing. Over 2014-2023, we find a particularly large halosteric sea level decrease in the tropical Pacific Ocean that is associated with a salinification of the upper 200 m. We find a local decrease in precipitation. We also highlight an increase in trade winds in the central tropical Pacific Ocean. We hypothesize that the positive sea surface salinity anomalies responsible for the halosteric sea level decrease are advected by a strengthened upper-ocean circulation induced by the increase in surface wind stress.
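The sign of the mechanism above (upper-ocean salinification lowering sea level through the haline contraction coefficient) can be checked with a back-of-the-envelope calculation. The salinity anomalies and the constant contraction coefficient below are illustrative stand-ins, not Argo values:

```python
import numpy as np

# Schematic halosteric sea level: eta = -integral( beta * dS ) dz over 0-200 m.
beta = 7.5e-4                       # haline contraction coefficient (1/psu), typical value
dz = 10.0                           # layer thickness (m)
depths = np.arange(0.0, 200.0, dz)

# Synthetic salinity anomaly growing linearly over 2014-2023 (psu),
# decaying with depth to mimic an upper-ocean salinification.
years = np.arange(2014, 2024)
dS = 0.002 * (years - 2014)[:, None] * np.exp(-depths / 100.0)[None, :]

# Halosteric anomaly per year (m), then a least-squares linear trend (mm/yr).
eta = -(beta * dS * dz).sum(axis=1)
trend_mm_yr = np.polyfit(years - 2014, eta * 1000.0, 1)[0]
print(round(trend_mm_yr, 3))
```

A positive salinity trend produces a negative halosteric sea-level trend, consistent with the halosteric decrease reported for the tropical Pacific.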

Tuesday 24 June 16:15 - 17:45 (Hall L3)

Presentation: Sea Level Rise from Altimetry and Aspects for Future Missions

Authors: Remko Scharroo, Estelle Obligis, Bojan Bojkov, Julia Figa, Alejandro Egido, Craig Donlon
Affiliations: EUMETSAT, ESA/ESTEC
The era of precise satellite altimetry is generally regarded to start with the launch of TOPEX/Poseidon in 1992. Since then, a continuous series of missions, Jason-1, -2, -3 and Sentinel-6 Michael Freilich, has been monitoring global and regional mean sea level from what is called the "altimetry reference orbit" at 1336 km and with a 66º inclination. Successive improvements in satellite instrumentation and design, as well as in the on-board precise orbit determination systems, have contributed to increasing accuracy and precision. By flying the successive missions in tandem with a separation of 30 seconds to 30 minutes, it was possible to cross-calibrate those missions to within a few millimeters or better, thus ensuring the long-term stability of the now 32-year record. Other external factors have also contributed to the continued success of the altimetric sea level record: the ever-increasing precision and accuracy of atmospheric and other geophysical modelling, and the availability and maintenance of a number of tide gauges against which any drift of the altimetric sea level measurements becomes evident. But the most overlooked sources of critical validation of the reference missions, as well as contributors to the long-term record, have been the nine other missions that have operated during the same period from much lower altitudes (ERS-1, ERS-2, Envisat, GFO, CryoSat-2, SARAL/AltiKa, Sentinel-3A and -3B, and SWOT), generally in low-Earth, high-inclination, sun-synchronous orbits – conditions that were widely thought to prohibit an accurate retrieval of global mean sea level. Over the course of time, technologies, background models, and orbit determination have evolved. For example, on the reference orbit, Sentinel-6 makes the transition to High Resolution altimetry, which is intrinsically more precise, while also providing Low Resolution measurements for continuity on the reference orbit.
Sentinel-3 also introduced global High Resolution altimetry on the polar orbit, deviating slightly from the previous polar orbits of the ERS/Envisat/SARAL heritage. On top of these measurement evolutions, there have also been changes in processing, transitioning from the traditional Low Resolution MLE4 retracking to numerical retracking, and various ways of processing the High Resolution altimetry. That poses the following questions regarding error sources and their evaluation in the establishment of our altimetric sea level record:
- How well can we currently determine sea level rise and its acceleration?
- Is there a distinction between the reference and polar altimeters?
- How do various measurement techniques and processing affect the sea level measurements?
- How relevant is the selection of the orbit for the continuation of the sea level record?
- What does this all mean for the design of the Next Generation altimeter missions?
- How does this inform us about the error budget still lurking in the sea level record?
This presentation provides a statistical analysis of the sea level record that can now be established from various combinations of the 14 altimeters mentioned here. It highlights the most compelling results of the altimetric sea level rise measurements, summarises some of the essentials to their success, and discusses the way forward to maintain this record for the next decades with the Sentinel-3 Next Generation Topography Mission and Sentinel-6 Next Generation.

Tuesday 24 June 16:15 - 17:45 (Hall F2)

Session: A.02.03 EO for Agriculture Under Pressure - PART 6

The human impact on the biosphere is steadily increasing. One of the main human activities contributing to this is agriculture. Agricultural crops, managed grasslands and livestock are all part of the biosphere and our understanding of their dynamics and their impacts on other parts of the biosphere, as well as on the wider environment and on the climate is insufficient.
On the other hand, today’s agriculture is under pressure to produce more food in order to meet the needs of a growing population with changing diets – and this despite a changing climate with more extreme weather. It is required to make sustainable use of resources (e.g. water and soils) while reducing its carbon footprint and its negative impact on the environment, and while delivering accessible, affordable and healthy food.
Proposals are welcome from activities aiming at increasing our understanding of agriculture dynamics and at developing and implementing solutions to the above-mentioned challenges of agriculture, or supporting the implementation and monitoring of policies addressing these challenges. Studies on how these challenges can be addressed at local to global scales through cross site research and benchmarking studies, such as through the Joint Experiment for Crop Assessment and Monitoring (JECAM) are welcome.

The session will hence cover topics such as:
- Impacts on climate and environment
- Crop stressors and climate adaptation
- Food security and Sustainable Agricultural Systems
- New technologies and infrastructure

Tuesday 24 June 16:15 - 17:45 (Hall F2)

Presentation: Quantifying the Impact of the 2022 Mega-Heatwave on Indian Wheat Yields Using Satellite Sun-Induced Chlorophyll Fluorescence and Environmental Data

Authors: Ben Mudge, Dr Harjinder Sembhi, Darren Ghent, Dr Dan Potts
Affiliations: School of Physics and Astronomy, University Of Leicester, National Centre for Earth Observation
Between 2010 and 2050, global food demand is predicted to increase by 35-56% in line with increasing global populations. Environmental pressures, such as those caused by a changing climate and more frequent extreme heat events, can threaten future food security. Heatwaves and water stress cause many negative plant responses, such as decreased transpiration and photosynthetic inhibition, leading to reduced crop yields. Satellite observations of solar-induced fluorescence (SIF) are an effective way to monitor global vegetation changes, as SIF provides information on plant photosynthetic efficiency, which can potentially help us better understand the timescales and intensity of heat stress on crops and act as an early warning system for plant stress. This project primarily focuses on understanding how agricultural water and heat stress manifests in SIF. Across India, 70% of rural households rely primarily on agriculture for their livelihoods. Many states in India adopt agriculturally intensive rice-wheat cropping systems, where up to 80% of a state’s land is dedicated to growing rice in the summer and up to 70% to growing wheat in the winter. During the 2022 mega-heatwave, India was forced to stop wheat exports due to nation-wide crop yield losses. Heatwave conditions combined with torrential rains in 2023 resulted in a predicted ban extension until March 2025. By combining multiple coincident satellite observations, we explore the relationships between SIF, land surface temperature (LST), the normalised difference vegetation index (NDVI), vapour pressure deficit (VPD), and soil moisture (SM) under baseline and extreme water stress conditions across agricultural regions of the country. A multivariate analysis of Sentinel-5P TROPOSIF, VIIRS LST, NDVI, SM, and ERA5-derived VPD will be presented in the context of government wheat yield statistics. Early results indicate that SIF is most strongly correlated with state crop yield information.
Regional and time-series analyses from 2018 to 2024, along with the results of statistical analysis, will be used to demonstrate the timescales over which SIF and other parameters capture heat and water stress impacts, and how these stresses can be better monitored and predicted.
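A minimal version of such a correlation screening can be written directly against yearly series. Every number below is made up, and the variable names are only borrowed from the abstract (SIF, LST, NDVI), with assumed couplings to yield:

```python
import numpy as np

# Synthetic state-level wheat yields, nominally 2018-2024 (t/ha, made up).
rng = np.random.default_rng(2)
yield_t_ha = np.array([3.1, 3.3, 3.2, 3.4, 2.6, 3.0, 3.2])

# Assumed couplings: SIF tracks yield closely, LST is anti-correlated,
# NDVI responds only weakly.
sif = 0.5 * yield_t_ha + rng.normal(0.0, 0.02, yield_t_ha.size)
lst = 45.0 - 2.0 * yield_t_ha + rng.normal(0.0, 0.5, yield_t_ha.size)
ndvi = 0.6 + 0.02 * yield_t_ha + rng.normal(0.0, 0.05, yield_t_ha.size)

# Pearson correlation of each covariate against yield.
for name, series in [("SIF", sif), ("LST", lst), ("NDVI", ndvi)]:
    r = np.corrcoef(series, yield_t_ha)[0, 1]
    print(f"{name}: r = {r:+.2f}")
```

In this toy setup SIF emerges as the strongest yield correlate by construction; the study's multivariate analysis does the analogous screening with real satellite and yield-statistics data.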

Tuesday 24 June 16:15 - 17:45 (Hall F2)

Presentation: Optimising Light Use Efficiency Models for Crop Productivity Estimation Under Heat Stress

Authors: Peiyu Lai, Dr. Michael Marshall, Dr. Roshanak Darvishzadeh, Dr. Andrew Nelson
Affiliations: Faculty of Geo-Information Science and Earth Observation, University of Twente
The increasing frequency and intensity of extreme heat events highlight the need for reliable global estimates of crop productivity under heat stress. Light use efficiency (LUE) models are increasingly used for macroscale crop yield estimation due to their ease of parameterisation using satellite-derived vegetation indices and gridded meteorological data. However, their performance often suffers under heat stress, primarily due to limitations in model structure, parameters, and input data. Addressing these uncertainties is essential for enhancing model accuracy and ultimately improving food security assessments in a warming climate. This study evaluates the impacts of uncertainties from model structure, parameters, and input data on LUE model performance under heat stress. Firstly, based on eddy covariance flux tower data spanning 177 crop growth seasons across 18 globally distributed sites, LUE model structures (component representations) were assessed and optimised for heat stress periods characterised by high temperatures. We excluded confounding factors such as low soil moisture and unfavourable light conditions. Secondly, the optimised model was validated for key outputs—gross primary productivity (GPP), dry above-ground biomass (AGB), and crop yield—to quantify parameter-driven uncertainties at the field level with 145 samples over 14 years. Finally, input data uncertainties were further analysed by comparing three remote sensing sources (MODIS, Landsat 8, and Sentinel-3) and three meteorological datasets (station data, ERA5, and LSA SAF EUMETSAT), focusing on differences in spatial and temporal resolution, data quality, and representativeness. Results show that incorporating the Enhanced Vegetation Index (EVI)-based Fraction of Photosynthetically Active Radiation (FPAR), the evaporative fraction (EF)-based moisture constraint, and an inverse double-exponential temperature function significantly improved GPP and AGB estimation under heat stress.
The optimised model outperformed the three commonly used models — the vegetation photosynthesis model (VPM), the eddy covariance–light use efficiency (EC-LUE) model, and the Carnegie–Ames–Stanford Approach (CASA) model — reducing RMSE by 34%, 39%, and 57%, respectively, and increasing R² by 9%, 8%, and 44%, respectively. These enhancements also improved GPP and AGB estimation under normal growth conditions. Analysing the parameter-driven uncertainties revealed that literature-based parameters for converting AGB to crop yield often underestimated crop yields. This was evident as the optimised model, while accurately estimating GPP and AGB, still underestimated crop yields. In contrast, EC-LUE, which overestimated GPP, provided more accurate yield estimates. This highlights the critical role of accurately estimating parameters related to dry matter allocation, which is often treated as an empirical, crop-specific constant across all conditions. The influence of heat stress on the harvest index should be incorporated in future model refinements. This study provides critical insights into improving crop productivity estimation under heat stress and can inform large-scale adaptation strategies to mitigate the impacts of a warming climate.
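The LUE model family discussed above shares one multiplicative skeleton, GPP = PAR × FPAR × LUE_max × f(T) × f(W). The sketch below uses generic stand-in scalar functions (a bell-shaped temperature constraint and a clipped evaporative fraction), not the optimised inverse double-exponential form from the study, and all parameter values are illustrative:

```python
import numpy as np

def temperature_scalar(t_c, t_opt=25.0, width=10.0):
    # Simple bell-shaped temperature constraint in [0, 1]; the study's
    # inverse double-exponential function would replace this.
    return np.exp(-((t_c - t_opt) / width) ** 2)

def moisture_scalar(ef):
    # Evaporative-fraction-based moisture constraint, clipped to [0, 1].
    return np.clip(ef, 0.0, 1.0)

def gpp(par, fpar, lue_max, t_c, ef):
    """Gross primary productivity (schematic units, g C m-2 d-1)."""
    return par * fpar * lue_max * temperature_scalar(t_c) * moisture_scalar(ef)

# Normal conditions (25 C, moist) vs a heat-stress day (35 C, dry):
normal = gpp(par=10.0, fpar=0.8, lue_max=1.8, t_c=25.0, ef=0.8)
stressed = gpp(par=10.0, fpar=0.8, lue_max=1.8, t_c=35.0, ef=0.4)
print(round(normal, 2), round(stressed, 2))
```

The multiplicative structure is why structural choices for f(T) and f(W) dominate model behaviour under heat stress: a poorly shaped scalar biases every downstream GPP, AGB, and yield estimate.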

Tuesday 24 June 16:15 - 17:45 (Hall F2)

Presentation: Addressing soil stressors on rice crops through hyperspectral remote sensing: a comparison of EnMAP, PRISMA and Sentinel-2 missions

Authors: Francisco Canero, Dr. Victor Rodriguez-Galiano, Mr. Aaron Cardenas-Martinez, Daniel Arlanzon, Jose Manuel Ollega-Caro
Affiliations: Department of Physical Geography and Regional Geographic Analysis, Universidad de Sevilla
Rice (Oryza sativa L.) serves as a primary source of food for over half of the world's population. Anthropogenic climate change poses a threat to rice crops and food security, increasing the risk of damage by abiotic stressors such as soil salinity and nitrogen or carbon deficit. Innovative advances in spaceborne hyperspectral technologies such as the EnMAP and PRISMA scanners might improve the characterization, mapping and understanding of those phenomena compared with forerunner multispectral missions including Sentinel-2 or Landsat. Moreover, there is a growing demand for hyperspectral imaging for abiotic stress detection in view of the forthcoming ESA hyperspectral operational mission CHIME. This study aims at mapping three agricultural soil properties (soil salinity, total carbon, and available nitrogen) acting as soil stressors of rice crops in a 34477.51 ha agricultural area under a Mediterranean climatic setting. A second objective of this study is to evaluate differences between the EnMAP and PRISMA hyperspectral missions and the operational multispectral ESA mission Sentinel-2 for mapping these soil properties. The field campaign was carried out in the Bajo Guadalquivir, a plain located in the estuary of the Guadalquivir River in southern Spain, under the ESA-funded EO4Cereal Stress project. Stakeholders have expressed concern about the impact of salinity on rice yield across the area's 4201 plots, which have an average size of 8.21 ha. One hundred samples were collected in May-June 2023, and their spectra were measured under laboratory conditions. Bare soil images were acquired during a drought year (with no rice harvest) for EnMAP, PRISMA and Sentinel-2. Two spectral preprocessing methods to enhance specific absorption features were applied to the hyperspectral images: Continuum Removal and Multiplicative Scatter Correction.
To address the high dimensionality of hyperspectral data together with the limited number of soil samples, a two-step dimensionality reduction workflow based on recursive feature extraction and PCA was built. This dimensionality reduction method was tested on five modelling algorithms: Linear Regression, Partial Least Squares Regression, Random Forest, Support Vector Regression and a Multilayer Perceptron Neural Network. To detect key spectral bands for each soil stressor, a model-agnostic interpretation method based on feature importance by permutation was performed. Dimensionality reduction, hyperparameter tuning, and model performance were evaluated using R² and RMSE. Uncertainty was assessed by selecting the models with positive R² and evaluating the Z-score deviation within each pixel. Hyperspectral images from EnMAP and PRISMA provided more reliable mapping estimations of the soil stressors compared with Sentinel-2, while yielding similar results to those obtained with laboratory spectroscopy. EnMAP provided a better prediction for soil salinity, while PRISMA achieved more accurate soil carbon and nitrogen maps. The most important bands were found in spectral regions captured by Sentinel-2, indicating that an enhanced spectral resolution might be required to accurately assess soil stressors of rice. Among the modelling algorithms, Partial Least Squares Regression obtained the highest accuracy overall: soil salinity using EnMAP-MSC data (R² = 0.574, RMSE = 2.647 dS m-1), soil carbon using PRISMA data (R² = 0.717, RMSE = 0.259%) and soil available nitrogen using PRISMA (R² = 0.88, RMSE = 1.35 mg/kg). The best results in terms of R² per variable and image were as follows. Soil salinity: Laboratory: 0.79, EnMAP: 0.57, PRISMA: 0.5, Sentinel-2: 0.1. Soil carbon: Laboratory: 0.89, EnMAP: 0.571, PRISMA: 0.717, Sentinel-2: 0.14. Soil available nitrogen: Laboratory: 0.69, EnMAP: 0.57, PRISMA: 0.88, Sentinel-2: 0.16.
The most important variables for salinity were the 645 and 1609 nm bands; for soil carbon, the 855, 2437 and 1706 nm bands; and for soil available nitrogen, the key features were within the 611-628 nm range. In summary, these results suggest the importance of the hyperspectral information provided by EnMAP and PRISMA for soil mapping aimed at detecting abiotic stressors. They also underscore the need for further development of the operational hyperspectral mission CHIME, to fulfil the needs of stakeholders, and can act as potential inputs for delimiting the importance of different stressors in rice crops.
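The two-step workflow above (reduce the spectral dimension, then regress and score with R² and RMSE) can be sketched with synthetic spectra. Plain least squares on PCA scores is used here as a simplified stand-in for the study's recursive feature extraction plus PLSR; all data are simulated:

```python
import numpy as np

# Synthetic "hyperspectral" dataset: 100 soil samples, 200 bands, driven by
# 5 latent soil factors (so the signal lives in the leading PCs).
rng = np.random.default_rng(3)
n_samples, n_bands, n_latent = 100, 200, 5
Z = rng.normal(0.0, 1.0, (n_samples, n_latent))                 # latent factors
loadings = rng.normal(0.0, 1.0, (n_latent, n_bands))
X = Z @ loadings + rng.normal(0.0, 0.1, (n_samples, n_bands))   # spectra
y = Z @ np.array([2.0, -1.0, 0.5, 1.5, -0.5]) + rng.normal(0.0, 0.3, n_samples)

# Step 1: PCA via SVD on centred spectra; keep 10 components.
Xc = X - X.mean(axis=0)
components = np.linalg.svd(Xc, full_matrices=False)[2][:10]
scores = Xc @ components.T

# Step 2: least-squares regression on the component scores.
A = np.column_stack([np.ones(n_samples), scores])
coef = np.linalg.lstsq(A, y, rcond=None)[0]
y_hat = A @ coef

# Evaluation with R2 and RMSE, as in the study.
r2 = 1.0 - ((y - y_hat) ** 2).sum() / ((y - y.mean()) ** 2).sum()
rmse = float(np.sqrt(((y - y_hat) ** 2).mean()))
print(round(r2, 3), round(rmse, 3))
```

Because the synthetic spectra are low-rank by construction, a handful of components recovers most of the predictable variance, which is the same rationale for applying PCA before regression on real hyperspectral soil data.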

Tuesday 24 June 16:15 - 17:45 (Hall F2)

Presentation: Human and Environmental Causal Effects on Food Security in Africa

Authors: Jordi Cerdà-Bautista, Vasileios Sitokonstantinou, Homer Durand, Gherardo Varando, Dr Gustau Camps-Valls
Affiliations: Universitat De València
Understanding the complex interplay between human and environmental factors affecting food security is crucial for designing effective, context-sensitive interventions, especially in vulnerable African regions. This study utilizes cutting-edge causal machine learning (ML) methods to estimate the impacts of anthropogenic and environmental variables on a comprehensive food security index. By estimating Average Treatment Effects (ATE) and Conditional Average Treatment Effects (CATE), we provide detailed insights into the socio-economic and climatic drivers of food security and their relative contributions [Hernan, 2020]. Our analysis focuses on three regions with distinct socio-environmental dynamics and food security challenges: the Horn of Africa, the Sahel, and South Africa. Leveraging a newly developed dataset that integrates socio-economic indicators (such as food prices, conflict levels, and internal displacements) and climate variables (such as precipitation, evaporation, temperature, and vegetation indices), we investigate spatial heterogeneity in causal effects, identifying distinct regional variations. Additionally, we employ innovative techniques such as Granger PCA [Varando, 2021] to cluster areas with similar climatic responses to El Niño Southern Oscillation (ENSO) patterns. This approach enables us to capture heterogeneity in the causal effects of treatments on food security outcomes across regions with analogous climatic behavior. We perform ATE and CATE analyses across multiple regions and apply robustness tests to ensure the validity of the estimations. Our results highlight the spatial heterogeneity of treatment effects on food security, providing a quantitative and spatially explicit evaluation. These findings offer nuanced insights into how diverse socio-environmental factors interact and influence food security in the selected areas of interest.
This research advances the application of causal inference to complex socio-environmental systems, providing evidence-based knowledge for policy-making. By evaluating the spatial and contextual dependencies of food security drivers, our study emphasizes the importance of tailored strategies to address the multifaceted challenges facing Africa’s food systems.
References:
- Sitokonstantinou, Vasileios, et al. "Causal machine learning for sustainable agroecosystems." arXiv preprint arXiv:2408.13155 (2024).
- Varando, Gherardo, Miguel-Angel Fernández-Torres, and Gustau Camps-Valls. "Learning Granger causal feature representations." ICML 2021 Workshop on Tackling Climate Change with Machine Learning, 2021.
- Cerdà-Bautista, Jordi, et al. "Assessing the Causal Impact of Humanitarian Aid on Food Security." IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2024.
- Pearl, J. "Causality: Models, Reasoning, and Inference." Cambridge University Press, vol. 19, 2000.
- Hernán, M.A., and J.M. Robins. "Causal Inference: What If." Boca Raton: Chapman & Hall/CRC, 2020.
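The core ATE estimand can be illustrated with a toy regression-adjustment example. The data are synthetic, with a hypothetical binary "treatment" (e.g., an intervention) and a confounder (drought severity) that drives both treatment assignment and the food-security index; the study itself uses far richer causal-ML estimators and robustness checks:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
drought = rng.uniform(0.0, 1.0, n)                            # confounder
treat = (rng.uniform(0.0, 1.0, n) < 0.3 + 0.4 * drought).astype(float)

# Synthetic outcome: the true treatment effect is +0.5; drought lowers the index.
index = 2.0 + 0.5 * treat - 1.0 * drought + rng.normal(0.0, 0.2, n)

# Naive difference in means is biased: treated units sit in droughtier areas.
naive = index[treat == 1].mean() - index[treat == 0].mean()

# Regression adjustment: include the confounder as a covariate and read the
# ATE off the treatment coefficient.
Xd = np.column_stack([np.ones(n), treat, drought])
beta = np.linalg.lstsq(Xd, index, rcond=None)[0]
ate_adjusted = beta[1]
print(round(naive, 2), round(ate_adjusted, 2))
```

The adjusted estimate recovers the true +0.5 effect while the naive contrast understates it, which is exactly the confounding problem that ATE/CATE estimators with observed covariates are designed to handle.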

Tuesday 24 June 16:15 - 17:45 (Hall F2)

Presentation: CropSHIFT - Climate Change impact on crop growing patterns in Europe

Authors: Andreas Walli, Dr. Edurne Estévez
Affiliations: Geoville
The effects of climate change (rising temperatures, altered precipitation patterns, and an increased frequency of extreme weather events) present significant challenges for agriculture and have strong consequences for crop yields, food security, and rural livelihoods. These changes in agroclimatic conditions induce substantial shifts in crop phenology, crop suitability and risks, and yield potentials. The consequences are becoming increasingly evident, particularly in changing spatial patterns of crop cultivation. For example, regions once considered suitable for cultivation may become less viable, while new areas may emerge as more favorable for agricultural production. Additionally, some crop types may no longer be suitable for particular regions but may thrive in new ones, enhancing agricultural production. Given the scale and urgency of these changes, especially in Europe, it is critical to advance our understanding of how climate change influences agricultural systems. CropSHIFT identifies, quantifies, and visualizes shifts in crop growth and predicts heat- and drought-risk-induced yield reduction at a regional level in Europe for the latest climate prediction model calculations. This prediction service will combine EO data (Copernicus-based high-resolution crop-type growing areas), weather-related crop growing parameters obtained from ARIS (Agricultural Risk Information System; Eitzinger J. et al. 2024), and climatic information (the Climate DT developed by ECMWF). The resulting hybrid model is the first to combine the latest climate scenarios with crop information of unprecedented spatial resolution, allowing potential shifts of the ideal growing regions in the upcoming decades to be predicted more accurately and in a spatially explicit manner. It quantifies the changing growing conditions and climate-related risks for a selection of crop types.
This service will not only be essential for addressing the immediate challenges faced by agriculture, developing adaptive strategies and mitigating risks, but also for aligning with key Sustainable Development Goals (SDGs), including SDG 2 (Zero Hunger), SDG 12 (Responsible Consumption and Production), and SDG 15 (Life on Land). Moreover, it will enable strategic land-use decisions informed by local contexts and needs, supporting the sustainable and resilient management of agricultural resources at every level of implementation. It will be of great use to a wide range of stakeholders, including agricultural ministries (policy level), agricultural insurers, logistical operators, agri-food actors such as seed producers and distributors, federal authorities for water management, and farmers themselves. The collaboration with the Agro Innovation Lab will contribute to connecting with these diverse stakeholders to better understand their needs and tailor the service accordingly.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall F2)

Presentation: Earth Observation for Rice Stress: Evaluating EnMAP Hyperspectral Mission to Detect the Effects of Salinity and Nutrient Deficit in Crop Biophysical Traits

Authors: Dr. Victor Rodriguez-Galiano, Mr. Daniel Arlanzon-Quiroz, Ms. Ana Martin-Gonzalez, Mr. Aaron Cardenas-Martinez, Mr. Francisco Canero-Reinoso, Mr. Manuel Lobeto-Martin
Affiliations: Department of Physical Geography
Soil salinity, caused by natural factors and agricultural mismanagement (e.g., inadequate irrigation and drainage), and nutrient deficits significantly impair crop development. In the Guadalquivir marshes, rice fields are irrigated with water from the Guadalquivir River, which is influenced by tidal seawater infiltration, exacerbating salinity stress. Nitrogen (N) deficits further hinder growth, reducing photosynthetic efficiency and grain filling. This study evaluates the performance of hyperspectral (EnMAP) and multispectral (Sentinel-2) satellite missions in monitoring salinity and nitrogen deficit impacts on rice crops in the Guadalquivir marshlands (Southern Spain). Hyperspectral and multispectral imagery from summer 2023 were complemented by five field campaigns (July 24–September 22) across three fields representing optimal, suboptimal, and poor conditions. Nine Elementary Sampling Units (3×3 grids, 30×30 m) were sampled per field, including analyses of nitrogen (N), pigments (chlorophyll-a and chlorophyll-b [Chla, Chlb], carotenoids [CAR]), water content (leaf water content [LWC]), and canopy traits such as the Leaf Area Index (LAI). Crop traits were estimated using a hybrid approach combining PROSAIL-PRO radiative transfer models (RTMs), dimensionality reduction techniques, and active learning to optimize machine learning (ML) algorithms. Principal component analysis (PCA) was applied to hyperspectral imagery to reduce spectral redundancy. The best models achieved R² > 0.6, with Gaussian Processes excelling in carotenoids (CAR; R² = 0.934, normalized root mean square error [NRMSE] = 7.899) and leaf nitrogen content (LNC; R² = 0.916, NRMSE = 11.128). Other traits, such as LWC (R² = 0.901) and leaf chlorophyll content (LCC; R² = 0.866), also performed strongly, whereas canopy traits like canopy chlorophyll content (CCC; R² = 0.642) and canopy nitrogen content (CNC; R² = 0.69) showed moderate agreement, likely due to challenges in LAI estimation (R² = 0.71). 
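The hybrid retrieval chain described above (radiative transfer simulations, dimensionality reduction, then a learned regressor) can be sketched as follows. The forward model, trait range, and nearest-neighbour inversion below are illustrative stand-ins, not the actual PROSAIL-PRO / Gaussian Process setup used in the study:

```python
import numpy as np

# Toy forward model standing in for PROSAIL-PRO: maps a single "trait"
# value to a 50-band spectrum (purely illustrative, not the real RTM).
wavelengths = np.linspace(400.0, 2500.0, 50)

def forward(trait):
    return np.exp(-trait * wavelengths / 2500.0) + 0.05 * np.sin(wavelengths / 300.0)

# 1) Build a look-up table of simulated spectra over the trait range
traits = np.linspace(0.5, 5.0, 200)
spectra = np.stack([forward(t) for t in traits])

# 2) PCA via SVD to reduce spectral redundancy to 3 components
mean = spectra.mean(axis=0)
_, _, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
pcs = Vt[:3]
train_pc = (spectra - mean) @ pcs.T          # (200, 3) training features

# 3) Invert a measured spectrum by nearest neighbour in PC space
#    (a simple stand-in for the Gaussian Process regression of the study)
def retrieve(spectrum):
    z = (spectrum - mean) @ pcs.T
    return float(traits[np.argmin(((train_pc - z) ** 2).sum(axis=1))])

estimate = retrieve(forward(2.7))            # recover a known trait value
```

In the real workflow the look-up table comes from PROSAIL-PRO runs, active learning prunes the training set, and a Gaussian Process replaces the nearest-neighbour step.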
A case-control study compared stressed and non-stressed zones, evaluating salinity and combined stress (salinity + N deficit). A two-tailed t-test revealed significant impacts on CAR (p = 1.20e-08), LCC (p = 3.50e-06), and LAI (p = 3.07e-13) under salinity stress, with stronger effects under combined stress (LCC: p = 1.65e-18, CCC: p = 1.91e-03). Sentinel-2 corroborated most trends but showed discrepancies in CNC under combined stress (p = 1.94e-10), highlighting EnMAP’s superior spectral resolution. These findings demonstrate the potential of hyperspectral sensors for sustainable agriculture, supporting future advancements with ESA’s upcoming CHIME mission.
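The two-tailed comparison of stressed versus non-stressed zones can be sketched in a few lines of pure Python. The sample values below are hypothetical, and the p-value uses a normal approximation rather than the exact t-distribution a statistics package would apply:

```python
import math
from statistics import mean, variance

def welch_ttest(a, b):
    """Two-sample Welch t-test; two-tailed p-value via a normal
    approximation (adequate here only because |t| is large)."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)          # sample variances
    t = (mean(a) - mean(b)) / math.sqrt(va / na + vb / nb)
    # Welch-Satterthwaite degrees of freedom
    df = (va / na + vb / nb) ** 2 / (
        (va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1)
    )
    p = 1.0 - math.erf(abs(t) / math.sqrt(2.0))  # = 2 * (1 - Phi(|t|))
    return t, df, p

# Hypothetical per-plot LCC-like values: control vs. salinity-stressed
control  = [52.1, 49.8, 51.3, 50.6, 48.9, 50.2, 51.7, 49.5, 50.9]
stressed = [41.2, 43.5, 40.8, 42.1, 44.0, 41.7, 42.9, 43.3, 40.5]
t, df, p = welch_ttest(control, stressed)
```

A p-value far below 0.05, as obtained here, corresponds to the "significant impact" reported for traits such as CAR, LCC and LAI.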
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 1.61/1.62)

Session: F.04.20 EO in support of the regulation on Deforestation-free products (EUDR, EU 2023/1115) - PART 2.

Faced with mounting global environmental concerns and the urgency of addressing climate change, the EU has introduced the ground-breaking regulation on Deforestation-free products (EUDR, EU 2023/1115) targeting global deforestation. The EUDR ensures that seven key commodities – cattle, cocoa, coffee, palm oil, soy, timber, and rubber – and their derived products like beef, furniture, and chocolate, entering the EU market from January 2026 onwards, are not linked to deforestation after a defined cut-off date (December 2020).
The regulation obliges operators to establish robust due diligence systems that guarantee deforestation-free and legal sourcing throughout their supply chains to achieve this goal. Verifying compliance with these standards is crucial. The EUDR mandates using the EGNOS/Galileo satellite systems and exploiting the Copernicus Earth Observation (EO) program for this purpose. This involves, among others, cross-referencing the geographic locations of origin for these commodities and products with data from satellite deforestation monitoring.
By providing precise and detailed information on deforestation linked to commodity expansion, Copernicus and other EO data/products will help to detect fraud and strengthen the implementation of the policy by diverse stakeholders.
This session will delve into the latest scientific advancements in using EO data to support due diligence efforts under the regulation, including global forest and commodities mapping.
Topics of interest include (but are not limited to):

- Classification methods for commodities mapping using EO data;
- World forest cover and land use mapping with EO data;
- Deforestation and GHG/carbon impacts related to commodity expansion;
- Field data collection strategies for EUDR due diligence;
- Practical examples of EO integration in global case studies;
- Machine learning / AI for deforestation detection and change analysis;
- EUDR compliance strategies: Integrating EO data with other datasets;
- Traceability in the Supply Chain: EO Data for Transparency.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 1.61/1.62)

Presentation: Insights into EUDR Implementation at BLE: Challenges of applied geodata-analyses for deforestation monitoring

Authors: Niklas Langner, Stefanie
Affiliations: Federal Office for Agriculture and Food Germany (BLE)
The Regulation on deforestation-free products (EUDR) (EU 2023/1115) establishes new requirements to mitigate deforestation and forest degradation associated with the consumption of key commodities in the EU. The regulation requires operators and traders to submit a Due Diligence Statement (DDS) ensuring that products which contain relevant commodities such as timber, rubber, soy, beef, palm oil, cocoa, and coffee are deforestation-free prior to being placed on the EU market or exported. The DDS must include detailed geolocation data of the production area to facilitate traceability and ensure compliance with the regulation. The German Federal Office for Agriculture and Food (BLE) is the designated competent national authority (CNA) responsible for implementing and enforcing the EUDR in Germany. As part of this role, the BLE, in collaboration with the Thünen Institute of Forestry and several partners, is developing a digital control process including a system for analysing geolocation data, aiming at a high level of automation. This system requires the development of a robust monitoring framework capable of providing reliable risk assessment at multiple levels. The first level of analysis involves an automated comparison with forest cover products, such as the GFC2020 map. Additionally, this step integrates information from initiatives like the ESA Agro Commodities project to enhance its capabilities. By automatically analysing geolocation data, this step focuses on identifying low-risk cases and minimizing the overall volume of controls. The second level targets unclear and higher-risk cases through case-specific processing of Copernicus satellite imagery. For this processing, the Copernicus Open Data and Exploitation Platform – Germany (CODE-DE) is utilized, employing established algorithms for the classification of relevant commodities, as well as of deforestation and degradation.
Unclear and high-risk cases identified at this stage undergo further examination in a third processing step. This final stage involves detailed analysis and manual interpretation using very high-resolution (VHR) data, ensuring legal reliability where required. This presentation outlines the conceptual framework, methodological approaches, and challenges associated with the monitoring system. The digital federal infrastructure is highly complex and sets strict limitations in terms of data security; therefore CODE-DE, with its ISO 27001 certification, must be used. The platform offers a secure working environment for processing remote sensing data and access to the data archive via a scalable processing environment. This is a mandatory element for data management in the federal environment due to EUDR requirements. However, development and implementation are demanding, as challenges arise from compliance requirements, the high standards of federal information security (BSI), and EU legal regulations ensuring secure and lawful data storage. Of particular importance are the challenges related to the access, usability, and integration of digital forest cover maps and commodity-specific maps: these maps play a crucial role in the verification process, as their applicability and accuracy are essential for effective risk assessment and monitoring.
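The three-level control flow described in this abstract can be sketched as a simple routing function. The field names and thresholds below are hypothetical, since the actual BLE rules and classifier outputs are not public:

```python
# Sketch of the tiered screening: level 1 (forest-cover lookup),
# level 2 (Copernicus change score), level 3 (manual VHR review).
def screen_dds(plot):
    """Route a due-diligence geolocation through the tiered checks."""
    # Level 1: automated comparison against a 2020 forest-cover product
    if not plot["forest_2020"]:               # e.g. from the GFC2020 map
        return "low_risk"                     # no forest at the cut-off date
    # Level 2: case-specific change detection on Copernicus imagery
    if plot["change_score"] < 0.3:            # hypothetical classifier output
        return "low_risk"
    if plot["change_score"] < 0.7:
        return "level3_manual_vhr_review"     # unclear -> VHR interpretation
    return "non_compliance_suspected"

cases = [
    {"id": "A", "forest_2020": False, "change_score": 0.90},
    {"id": "B", "forest_2020": True,  "change_score": 0.10},
    {"id": "C", "forest_2020": True,  "change_score": 0.50},
    {"id": "D", "forest_2020": True,  "change_score": 0.95},
]
results = {c["id"]: screen_dds(c) for c in cases}
```

The design point is that cheap automated checks clear the bulk of statements, so that expensive VHR interpretation is reserved for the residual unclear cases.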
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 1.61/1.62)

Presentation: High-Resolution Global Maps of Cocoa Farms Extent

Authors: Robert Masolele, Dr Johannes Reiche, Camilo Zamora, Dr. Diego Marcos, Dr. Liz Goldman, Katja Berger, Martin Herold
Affiliations: Wageningen University, Helmholtz GFZ German Research Centre for Geosciences, Remote Sensing and Geoinformatics Section, Inria, World Resources Institute
Cocoa cultivation serves as a cornerstone of many agricultural economies across the globe, supporting millions of livelihoods and contributing significantly to global cocoa production. However, accurately mapping cocoa farm locations remains a challenging endeavor due to the complex and heterogeneous nature of the landscapes where cocoa is cultivated. Traditional mapping techniques often fall short in capturing the intricate spatial patterns of cocoa farming amidst dense vegetation, varying land cover types, farming practices and growing stages (Masolele et al., 2024). Moreover, current mapping efforts mainly focus on the two major producing countries, Ivory Coast and Ghana (Kalischek et al., 2023). Thus, little is known about the location of cocoa farms in other cocoa-producing regions, posing a challenge to the sustainability and economic contributions of the cocoa crop. To address this challenge, we first present a benchmarking approach for mapping commodity crops worldwide, comparing different spectral, spatial, temporal and spatial-temporal methods. The benchmarking is based on varying combinations of Sentinel-1 and Sentinel-2 with locational and environmental variables (temperature and precipitation), using a comprehensive reference dataset spanning 36 cocoa-producing countries. Higher accuracy (F1-score 87%) is obtained with a model that employs spatial-temporal remote sensing images plus locational and environmental information, compared to models without locational and environmental information. Secondly, for demonstration, we employ the developed deep learning methodologies to map the locations of cocoa farms across the globe with an F1-score of 88%. By leveraging the rich spatio-temporal information provided by Sentinel-1 and Sentinel-2 satellite data, complemented by location encodings, temperature and precipitation data, we have developed a robust and accurate cocoa mapping framework.
The developed deep learning algorithm extracts meaningful features from multi-source satellite imagery and effectively identifies cocoa farming areas. The integration of Sentinel-1 and Sentinel-2 data offers a synergistic approach, combining radar and optical sensing capabilities to overcome the limitations of individual sensor modalities. Furthermore, incorporating location encodings into the modeling process enhances the contextual understanding of cocoa farm distributions within their geographical surroundings. Through this research effort, we provide the first high-resolution global cocoa map, giving valuable insights into cocoa farm locations and facilitating sustainable cocoa production practices, land management strategies, and conservation efforts across the pan-tropical forests where cocoa farming occurs. The work aligns with recent European Union (EU) regulations to curb the EU market’s impact on global deforestation and provides valuable information for monitoring land use following deforestation, crucial for environmental initiatives and carbon neutrality goals (European Commission, 2022). Specifically, our product can support monitoring and compliance under the EU Regulation on Deforestation-free Products (EUDR, No 2023/1115) by identifying previously existing cocoa farms and cocoa farm expansion after the cut-off date of December 31, 2020. Within the framework of the ESA-funded WorldAgroCommodities project, this mapping approach is now being converted into an operational cloud-based service on the Copernicus Data Space Ecosystem, allowing easy access to these crucial tools for the National Competent Authorities in support of enforcing the EUDR.
Furthermore, our findings hold significant implications for cocoa farmers, agricultural policymakers, and environmental stakeholders, paving the way for informed decision-making and targeted interventions to support the resilience, sustainability and traceability of cocoa farming systems worldwide.
References:
- Robert N. Masolele, Diego Marcos, Veronique De Sy, Itohan-Osa Abu, Jan Verbesselt, Johannes Reiche and Martin Herold (2024). Mapping the diversity of land uses following deforestation across Africa. Sci Rep 14, 1681. https://doi.org/10.1038/s41598-024-52138-9
- Nikolai Kalischek, Nico Lang, Cécile Renier, Rodrigo Caye Daudt, Thomas Addoah, William Thompson, Wilma J. Blaser-Hart, Rachael Garrett, Konrad Schindler, and Jan D. Wegner (2023). "Cocoa plantations are associated with deforestation in Côte d’Ivoire and Ghana". Nat Food 4, 384–393. https://doi.org/10.1038/s43016-023-00751-8
- European Commission (2022). Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL on the making available on the Union market as well as export from the Union of certain commodities and products associated with deforestation and forest degradation and repealing Regulation (EU) No 995/2010. https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:52021PC0706
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 1.61/1.62)

Presentation: Mapping Global Forest Management Practices in support of EUDR

Authors: Myroslava Lesiv, Wanda De Keersmaecker, Luc Bertels, Dmitry Schepaschenko, Dr Linda See, Sarah Carter, Elizabeth Goldman, Elise Mazur, Ruben Van De Kerchove, Steffen Fritz
Affiliations: IIASA, VITO, World Resources Institute (WRI)
Interest in using Earth Observation (EO) data for forest monitoring to support policies and regulations, such as the European Union Regulation on Deforestation-free products (EUDR), has surged in recent years. While new global and regional forest maps have been released, their quality is variable, and information on forest types or use of forest land is often not available. Within this context, we have been updating the global forest management layer for 2015 developed by Lesiv et al. (2021), and for the first time we are utilizing both Sentinel-1 and Sentinel-2 data to create an updated global map of forest management practices for the year 2020. Our approach involves not only incorporating new remote sensing data but also testing various classification models, such as CatBoost, to identify the optimal model for this global mapping effort. These models are evaluated using different data configurations, including Sentinel-2 alone, Sentinel-1 alone, and a combination of Sentinel-1 and Sentinel-2, with further performance comparisons between global and regional models. Baseline information on forest and forest types for the year 2020 is essential in order to identify potential deforestation and degradation. To ensure compliance with the EUDR, we have refined the forest definitions to include the following specific management classes: Naturally regenerating forests without management signs (including primary forests), Managed forests (e.g., logging or clear cuts), Planted forests (rotation >15 years), Woody plantations (rotation <15 years), Agroforestry, and two new classes: Rubber plantations and Fruit tree plantations. We have also updated the 2015 training dataset to 2020 by revisiting areas where deforestation has occurred, adding the new classes, and collecting additional training data in regions with lower accuracy. Finally, we have integrated feedback from the initial map version to enhance training data quality.
We aim to achieve a minimum accuracy of 80% per class through the iterative improvement process. The results we will present hold significant value for the scientific community engaged in EO-based forest mapping and land-use assessment, as this marks the first global effort to map forest management practices using Sentinel-1 and Sentinel-2 imagery combined. Additionally, our insights on reference data collection may offer valuable support for the EUDR’s due diligence processes. We will share the validation results and discuss avenues for improving map quality and remaining research gaps in producing next-generation products for policy and decision-making.
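The data-configuration benchmark described above (Sentinel-1 alone, Sentinel-2 alone, or both combined) can be illustrated with a deliberately tiny example. The feature values are synthetic and a nearest-centroid classifier stands in for CatBoost:

```python
from math import dist
from statistics import mean

# Synthetic two-feature training samples per class:
# column 0 ~ a Sentinel-1 backscatter feature, column 1 ~ a Sentinel-2 index.
train = {
    "natural": [(0.10, 0.90), (0.20, 0.80)],
    "planted": [(0.12, 0.20), (0.22, 0.10)],
}
test = [((0.15, 0.85), "natural"), ((0.15, 0.15), "planted")]

def accuracy(dims):
    """Nearest-centroid accuracy using only the selected feature columns."""
    cent = {c: [mean(p[d] for p in pts) for d in dims] for c, pts in train.items()}
    hits = sum(
        min(cent, key=lambda c: dist([x[d] for d in dims], cent[c])) == label
        for x, label in test
    )
    return hits / len(test)

acc_s1   = accuracy([0])       # Sentinel-1 only: classes overlap in this feature
acc_both = accuracy([0, 1])    # Sentinel-1 + Sentinel-2: classes separate
```

The contrived numbers make the combined configuration win, mirroring the kind of comparison the study runs at scale with real imagery and CatBoost.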
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 1.61/1.62)

Presentation: Development of EO based forest and crop monitoring tools to support Competent National Authorities: the ESA World AgroCommodity Project.

Authors: Christophe Sannier
Affiliations: GAF
Tropical forests are an important habitat with multiple functions, playing a major role as global carbon sinks and offering solutions to the ongoing challenges of climate change mitigation. Several international conventions and policy frameworks, such as the United Nations Framework Convention on Climate Change (UNFCCC) mechanism for reducing emissions from deforestation and degradation (REDD+), the UN Convention on Biological Diversity (UNCBD) and the UN Sustainable Development Goals (SDG) 13 and 15, all address forest protection and management. As the move towards Zero Deforestation (ZD) and improved traceability has been on a voluntary basis, progress towards deforestation-free supply chains has been slow. The new EU regulation on deforestation-free supply chains (EUDR), which came into force in June 2023 as part of the EU Green Deal, aims to reduce the EU’s contribution to GHG emissions from deforestation and forest degradation worldwide. The regulation requires companies to ensure that specific target commodities - soy, beef, palm oil, wood, cocoa, coffee, rubber - and derived products (leather, chocolate or furniture) are sourced from areas where no deforestation occurred after 31 December 2020. At the time of writing this abstract, the implementation of the EUDR, planned for 31 December 2024, is likely to be postponed by 12 months to leave more time for stakeholders to prepare for its implementation. The EUDR requires operators and traders to produce Due Diligence Statements (DDS). These DDS will be subject to inspections from Competent National Authorities (CNAs) designated for each EU member state. The inspections are implemented through annual plans according to the origin of the products and the risk level, itself based on three sets of criteria: i) rate of deforestation and forest degradation; ii) rate of expansion of agricultural land for relevant commodities; iii) production trends of relevant commodities and relevant products.
The primary focus of the ESA AgroCommodities project is the development of a pre-operational monitoring system to support the implementation of the EUDR by EU Member States, in particular the checks to be made as part of the DDS inspection process. This system will align with the requirements and needs of the Competent National Authorities (CNAs) responsible for monitoring the compliance of operators and traders. A comprehensive consultative process with CNAs was initiated at the project onset to gather detailed requirements and will continue throughout the duration of the project. The detailed objectives of the project are as follows:
- Engage with a representative number of European Competent National Authorities (CNAs) who will provide the user requirements for the project and commit to the collaboration with the Consortium.
- In a consultative manner with CNAs, identify potential test and demonstration sites for the seven commodities - beef, cocoa, coffee, oil palm, soya, rubber, and wood - where deforestation could have occurred after December 31 2020 and which will form the basis for the mapping work in the project.
- Map and validate the seven commodities in different geographic regions (at least in 4 different countries), identify locations where deforestation has occurred after December 2020, using EO-based (Copernicus data) methods and open-source tools.
- Conduct a knowledge transfer to the CNAs on the methods and open-source solutions developed.
- Undertake promotion and outreach of the methods and project outcomes with a broader audience than the CNAs; this will include a project website, webinars, the presentation of policy briefs and scientific publications.
The user requirement phase identified two main steps of the inspection process on which to focus: i) a fully automated tool to sieve through DDS to identify those requiring more detailed inspection through the identification of potential non-compliance (e.g.
deforestation post-2020 and/or inconsistencies with the declared commodity); ii) inspection-level work to support the identification of non-conformance. Initial tests will be carried out on representative test sites across the world, based on priority countries identified by CNAs, with a set of criteria aimed at ensuring a representative sample of test sites across different production systems and regions. An objective and structured approach was adopted for the selection of test sites using the H3 level 5 hexagonal grid (representing 153 km² at the equator) as an analytical framework to integrate available datasets representing each of the selected criteria, combining deforestation risk, commodity presence and production systems with in situ data availability. At least 10 sites of a minimum area of 100 km² (identified from H3 grid cells) will be selected, and several methods to identify deforested areas and commodity types will be tested through a benchmarking approach. The preliminary design of the system will be based on a dynamic mapping approach in which the CNA inspector will be able to run a ML/DL model on the fly to identify deforested and commodity areas. The system will adopt a cloud-based, platform-agnostic architecture to allow integration within the CNAs' own systems. Preliminary results from the benchmarking process will be presented, as well as the selected architecture for the prototype system. The next steps will be to implement the selected approach over larger geographical areas covering at least 4 countries for each commodity, validate the results and develop a series of use cases in collaboration with the CNAs.
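The structured test-site selection described above amounts to scoring candidate grid cells against weighted criteria and ranking them. The cell identifiers, per-criterion scores, and weights below are purely illustrative (the project uses the H3 level-5 grid populated from real datasets):

```python
# Hypothetical candidate cells with normalized criterion scores in [0, 1]:
# deforestation risk, commodity presence, and in situ data availability.
cells = {
    "85283473fffffff": {"defor_risk": 0.8, "commodity": 0.9, "in_situ": 0.3},
    "85283447fffffff": {"defor_risk": 0.6, "commodity": 0.7, "in_situ": 0.9},
    "8528340ffffffff": {"defor_risk": 0.2, "commodity": 0.4, "in_situ": 0.8},
}

# Illustrative weights favouring risk and commodity presence.
WEIGHTS = {"defor_risk": 0.4, "commodity": 0.4, "in_situ": 0.2}

def score(cell_id):
    """Weighted sum of the cell's criterion scores."""
    return sum(WEIGHTS[k] * v for k, v in cells[cell_id].items())

# Rank cells from most to least suitable as test sites.
ranked = sorted(cells, key=score, reverse=True)
```

The top-ranked cells would then be grown into contiguous test sites of at least 100 km² before the benchmarking of deforestation and commodity classifiers.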
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 1.61/1.62)

Presentation: Global forest maps for year 2020 in support to the EU deforestation-free regulation: Improvements and accuracy

Authors: Rene Colditz, Clement Bourgoin, Astrid Verhegghen, Lucas Degreve, Iban Ameztoy, Silvia Carboni, Frederic Achard
Affiliations: Joint Research Center, European Commission, ARHS Developments Italia, European Dynamics Luxembourg
The EU regulation on deforestation-free supply chains (EU, 2023) prohibits placing or making available on the market, or exporting, certain commodities and relevant products if they are not deforestation-free, legally produced and covered by a due diligence statement. Due diligence by operators comprises the collection of information (including the geolocation of the sourcing area), risk assessment and risk mitigation measures to ensure that commodities and products do not originate from areas deforested after December 2020. Member States’ competent authorities will check a certain percentage of due diligence statements. Even though geospatial data on forest presence or forest types is not required for the operation of the regulation, it may be a helpful source at various stages of implementation. The JRC develops and maintains the EU Observatory on Deforestation and Forest Degradation, which provides access to global forest maps and spatial forest- and forestry-related information and facilitates access to scientific information on supply chains. Building on a few existing global layers, mostly derived from Earth Observation data, including the WorldCover map 2020 (Zanaga et al., 2021), the map of global forest cover for the year 2020 (Bourgoin et al., 2024) indicates forest presence or absence, meeting the definition of forest as set out in the regulation. Operators could use this globally consistent, harmonized layer alone or in combination with other geospatial sources for risk assessment of deforestation (Verhegghen et al., 2024), i.e. the conversion of forest into agricultural land for commodities and products in scope. Based on a first version released in December 2023, the JRC improved the map with new or updated input layers and user feedback and released a second version in December 2024 (JRC, 2024).
To support the risk assessment of areas subject to forest degradation, the JRC also undertakes work on mapping forest types in line with the definitions set out in the regulation and by FAO (FAO, 2018). In November 2024, the JRC released a preliminary version of a global map of forest types for the year 2020 with three main classes (primary forests, naturally regenerating forests and planted forests). An accuracy assessment of the global forest cover map is an important but resource-intensive part of the mapping exercise. The JRC interpreted more than 21,000 sample locations for forest presence or absence and several sub-categories, which is intended to allow a statistically robust global assessment. In this presentation we will inform the audience about the latest data and methodology updates and product accuracy. In addition, we will outline the next phases for the global forest cover and global forest type maps for the year 2020. We will link to cases where this map is used with other sources of information in the risk assessment phase to be conducted for commodities such as cattle, cocoa, palm oil and wood.
References:
- Bourgoin C et al., 2024. Mapping Global Forest Cover of the Year 2020 to Support the EU Regulation on Deforestation-free Supply Chains. Publications Office of the European Union, Luxembourg.
- EU, 2023. Regulation (EU) 2023/1115 of the European Parliament and of the Council of 31 May 2023 on the making available on the Union market and the export from the Union of certain commodities and products associated with deforestation and forest degradation.
- FAO, 2018. Global Forest Resources Assessment 2020 - Terms and Definitions. Forest Resources Assessment Working Paper 188, Food and Agriculture Organization of the United Nations, Rome.
- JRC, 2024. EU Observatory on Deforestation and Forest Degradation. https://forest-observatory.ec.europa.eu/forest/rmap
- Verhegghen A et al., 2024. Use of national versus global land use maps to assess deforestation risk in the context of the EU Regulation on Deforestation-free products: case study from Côte d’Ivoire. Publications Office of the European Union, Luxembourg.
- Zanaga D et al., 2021. ESA WorldCover 10 m 2020 v100. https://doi.org/10.5281/zenodo.5571936
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 1.61/1.62)

Presentation: Monitoring commodity-related deforestation and carbon emissions in Colombia

Authors: Camilo Zamora, Robert Masolele, Katja Berger, Johannes Reiche, Martin Herold, Louis
Affiliations: GFZ - Helmholtz-Zentrum Potsdam - Deutsches GeoForschungsZentrum
Deforestation and subsequent land-use changes, particularly for agricultural production, are significant contributors to global greenhouse gas (GHG) emissions, exacerbating global warming and climate change. Particularly in the tropics, the expansion of commodity crops such as soy, palm oil, rubber, cocoa, coffee, among others, has been a primary driver of deforestation and associated carbon emissions. The European Union (EU) Deforestation-Free Regulation (Regulation EU-2023/1115 on deforestation-free products, hereafter ‘EUDR’) aims to reduce the EU’s contribution to global deforestation and biodiversity loss by restricting the entry and commercialization into the EU market of commodities linked to deforestation and forest degradation. Understanding the environmental impact of these commodities on deforestation is crucial for developing effective regulatory frameworks, and support current efforts to mitigate the effect of food production on deforestation-related emissions. Disaggregated measurements of GHG emissions provide more accurate estimations of the climate impact of specific agricultural commodities, enabling targeted interventions and the evaluation of policies aimed at reducing emissions, such as the EUDR. This research aims to quantify the spatial and temporal dynamics of commodity crop expansion and estimate the associated carbon emissions and removals from changes in land use in Colombia. Our methodological approach integrates a comprehensive reference dataset of crop types with a diverse array of remote sensing data (Landsat, Sentinel 1-2) and environmental variables, to train state-of-the-art machine learning algorithms to classify land use types, particularly commodity crops across diverse geographic regions. Then, we integrate these results with ancillary data, such as the European Space Agency's Climate Change Initiative (ESA-CCI) and the Global Forest Watch (GFW), to estimate carbon emissions associated with post-deforestation land-use changes. 
Our analysis reveals a significant variation in carbon loss among different crop types and subregions in Colombia, with pasture, maize, and palm oil being the main drivers of carbon loss compared to other crops like cacao and coffee. The Amazon subregion shows the highest carbon loss, highlighting the importance of enhancing sustainable land management practices in this threatened ecosystem. Our study demonstrates that disaggregated emission estimates associated with different crop types and land-use changes could contribute to the refinement of national GHG emission inventories. Expanding this study to regions with fragile or endangered ecosystems, particularly other tropical areas vulnerable to deforestation due to land conversion for agricultural commodities, could facilitate effective policy implementation to reduce deforestation-related emissions and align with global climate goals.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 1.85/1.86)

Session: B.01.02 Earth Observation accelerating Impact in International Development Assistance and Finance - PART 2

In this session, attendees will delve into an impact-oriented approach to accelerating the use of Earth Observation (EO) in support of international development assistance, including its integration in financing schemes. Presenters will provide in-depth insights into real-world application use cases across multiple thematic domains, implemented in developing countries in coordination with development and climate finance partner institutions. The session will prioritise examples showcasing the tangible impact on end-users in developing countries and the successful uptake of EO products and services by their counterparts. Counterparts here can be national governments or International Financial Institutions (IFIs), such as multilateral development banks (World Bank, ADB, IDB, EBRD) and specialised finance institutions (e.g. IFAD), as well as Financial Intermediary Funds (FIFs), most specifically the large global climate and environment funds (GCF, GEF, CIF, Adaptation Fund). Attendees can expect to gain valuable insights into how the process of streamlining EO in development efforts is (1) opening new market and operational roll-out opportunities for the EO industry, and (2) translating into impactful change on the ground and driving sustainable development outcomes worldwide.


Tuesday 24 June 16:15 - 17:45 (Room 1.85/1.86)

Presentation: GDA Analytics & Processing Platform: supporting Agile EO Information Development activities

Authors: Simone Mantovani, Alessia Cattozzo, Mirko Sassi, Fabio Govoni, Hanna Koloszyc, Judith Hernandez, Carlos Doménech García, Patrick Griffiths
Affiliations: MEEO, GeoVille, earthpulse, GMV, ESA-ESRIN
The Analytics and Processing Platform (APP) is a user-oriented analytical environment developed under the European Space Agency’s Global Development Assistance (GDA) programme. In adherence to FAIR and open science principles, the Platform provides ten open-source, scalable and generic analytical EO capabilities. Additional capabilities will be integrated in the future through the Earth Observation Training Data Lab, GDA Agile EO Information Development activities, and other application package providers. This expandable ecosystem, powered by the European Space Agency's Network of Resources, embodies GDA's commitment to capacity building and collaborative development. The architecture of the APP ensures that users can interact with EO data regardless of their technical background. More specifically, the Platform offers intuitive widgets for quick capability execution, a webGIS for visualising and comparing outputs with a wide range of CDSE data and WMS layers, Jupyter notebooks for advanced analytical workflows, as well as a Swagger page for direct API consumption. Ongoing stakeholder engagement has already revealed promising application scenarios, including infrastructure damage assessment (in Sudan) and the monitoring of desertification/revegetation efforts (in Syria). By continuously exploring stakeholders’ information needs and working practices, the APP strives to advance GDA’s mission of mainstreaming EO in international development assistance and fostering equitable access to satellite-derived insights.

Tuesday 24 June 16:15 - 17:45 (Room 1.85/1.86)

Presentation: Accelerating the Impact of Earth Observation for Public Health in Support of International Development Assistance

Authors: Eirini Politi, Carsten Brockmann, Carlos Doménech, Juan Suárez, Guido Riembauer, Lennart Meine, Markus Eichhorn, Ali Ahmad, Guillaume Dubrasquet-Duval, Michel Bénet, Rachel Lowe, Bruno Moreira De Carvalho, Jolita Jancyte, Georgina Charnley, Pia Laue
Affiliations: Brockmann Consult GmbH, GMV, mundialis GmbH & Co. KG, Diginove, Barcelona Supercomputing Center
Increasing public health risks due to climate change and the sensitivity of infectious diseases to changing environmental factors are adding pressure to the existing socioeconomic challenges that public health faces around the world. International Financial Institutions (IFIs), such as the World Bank and the Asian Development Bank, have introduced agendas that target these challenges. By strengthening health systems, improving access to health infrastructure, increasing disease preparedness and resilience to climate-induced health risks, improving nutrition and providing sustainable solutions to strengthen health infrastructure, governance and financing, IFIs provide aid to national health agencies and directly affected local communities. Pivotal to the work financed by IFIs is access to relevant data and synoptic information on health and its background environmental or socioeconomic triggers. Earth Observation (EO) has been recognised as an essential source of information that can complement national data and support countries in the monitoring of key indicators related to health risks or factors of vulnerability. For example, EO is used in the surveillance, prevention, and control of infectious diseases through the development of early warning systems and risk maps for diseases like malaria and dengue, both of which can be exacerbated by the impacts of climate change. EO also helps assess the likelihood or severity of airborne and waterborne health hazards such as air pollution, wildfires, dust storms, and algal blooms. Indirectly, climate also affects food and water access through more severe extreme events like droughts, flooding, storms, strong winds, or sea level rise, the risk and impact of which on population health can be assessed by a combination of EO data and other information sources.
EO applications also support assessments of health infrastructure accessibility and vulnerability, particularly during natural disasters or crises, and provide useful information on nutrition and food security, the lack of which increases public health risks. The Global Development Assistance (GDA) Agile EO Information Development (AID) Public Health thematic activity aims to provide suitable, tailored and robust EO services and developments to IFI client projects, enabling them to improve existing public health assessment and improvement initiatives or to add new context to them. This talk will present the specific real-world case studies we have been developing in collaboration with the World Bank and the Asian Development Bank and their client state beneficiaries, and how our impact-oriented approach helps to accelerate the use of EO in support of international development assistance. Even though the activity is still at an early stage, we will discuss our plan to maximise the uptake of our EO products and services at the end of the activity.

Tuesday 24 June 16:15 - 17:45 (Room 1.85/1.86)

Presentation: GDA Forest Management - Contributing to International Conventions and Regulations

Authors: Fabian Enßle, Dr. Sharon Gomez, Christophe Sannier
Affiliations: GAF AG
The ESA Global Development Assistance (GDA) programme has the overall objective of fully capitalising on the utility of Earth Observation (EO) in international development programmes. Building on the experiences and lessons learned within the precursor programme Earth Observation for Sustainable Development (EO4SD), the GDA aims for the adoption of EO in International Financial Institutions' (IFIs) global development initiatives and the streamlining of EO in future development efforts. The thematic cluster GDA - Forest Management (GDA Forest), which was initiated in September 2024 and is led by GAF AG with a consortium of European partners, has the overall goals of 1) demonstrating the value of mainstreaming Earth Observation (EO) based forest products and services in IFI programmes for improved forest management in Client States (CS); 2) assisting IFIs and CS with understanding, acceptance and adoption of the EO technology, its costs and sustainability, which will support the integration of the technology into IFI-funded initiatives and decision making in CS. The GDA Forest activity is jointly developing new EO-based Information Developments (EOIDs) to support IFIs and counterparts in addressing existing challenges. Monitoring forests is crucial, and EO from satellites represents a cost-effective solution, providing global, comprehensive, accurate, repeatable and timely information that is invaluable in the planning, implementation and impact assessment of forest management activities at larger scales. Based on the product portfolio jointly developed during the EO4SD-Forest Monitoring cluster, the GDA Forest consortium is further enhancing the service and product specifications to comply with IFI needs and steer the adoption of EO-based solutions for forest monitoring. The work is organised along four main themes: Reducing Emissions from Deforestation and forest Degradation (REDD+), Forest Landscape Assessment and Planning, Mangrove & Protected Areas, and Zero Deforestation (ZD).
These themes are supported by different EO-based products, which are further improved and aligned to user needs within selected Use Cases of GDA Forest. These products include Forest Cover and Forest Area Change assessment, Tree Cover Density (TCD) mapping, Land Use and Land Cover Change information, Near Real Time Tree Cover Disturbance detection, as well as Mangrove Area and Change assessments. A high priority is given to the use of Copernicus Sentinel satellite data, which is openly accessible while providing the temporal and spatial resolution needed to address the identified information requirements. Such data are used to enhance the efficiency and effectiveness of forest inventories (including mangroves), and GDA Forest will provide general forest resource and use information (data, map products, etc.) for sustainable forest management, planning, harvesting, etc. In the domain of Landscape Spatial Planning and Sustainable Management, products support natural capital accounting, spatial planning, land-use modelling approaches and forest governance, and provide measures of the performance and effectiveness of the related initiatives. The use and integration of EO products into REDD+ workflows helps to track and verify the impacts of the forest sector and to ensure that forest and non-forest emissions are not underestimated or omitted in the forest sector layer. GDA Forest products can help to enhance the overall accuracy of deforestation estimations for the elaboration of Forest Reference Emission Levels (FREL) submissions, and could contribute to the implementation of innovative digital Measurement, Reporting and Verification (MRV) systems.
The demonstration of EO-based early warning mechanisms through near-real-time monitoring of forest cover disturbance using Sentinel-1 radar data supports the analysis of potential drivers of deforestation, including identification of the expansion of agricultural land, growth of urban areas or development of illegal artisanal small-scale mining (ASM) activities. The NRT information is also important for ensuring that specific commodity value chains (e.g. cocoa, coffee, palm oil, wood) are free from deforestation, supporting countries in the implementation of policies related to the new Regulation (EU) 2023/1115 on deforestation-free products (EUDR). A first set of user engagement activities has been initiated with the World Bank, and there is evidence of high interest in the GDA Forest portfolio as potential support to the newly launched Global Challenges Programme ‘Forests for Development, Climate and Biodiversity’, which will focus on three main regions in Africa, S. America and S.E. Asia. Additionally, specific projects have been put forward by Bank experts for collaboration. The paper will present the final selected Use Cases of the GDA Forest activity and the selected EO applications. The use of EO in IFI programmes and its potential for market uptake will be discussed along the Use Case demonstrations.
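The near-real-time disturbance mechanism described above can be reduced, for illustration, to comparing a new radar acquisition against a pixel's historical baseline. This is not the GDA Forest algorithm; the backscatter values, the 3 dB drop threshold and the `detect_disturbance` function are hypothetical, but they capture the core idea of flagging a pixel whose Sentinel-1 backscatter falls well below its stable-forest history:

```python
# Illustrative sketch only (hypothetical threshold and data, not the GDA
# Forest implementation): flag a near-real-time forest disturbance when a
# new Sentinel-1 backscatter value (in dB) drops well below the mean of a
# pixel's historical observations.

def detect_disturbance(history_db, new_db, drop_db=3.0):
    """Return True if `new_db` is more than `drop_db` below the
    mean of the historical backscatter values."""
    baseline = sum(history_db) / len(history_db)
    return (baseline - new_db) > drop_db

history = [-7.1, -6.8, -7.3, -7.0]   # stable forest backscatter (dB)
print(detect_disturbance(history, -6.9))   # normal fluctuation: False
print(detect_disturbance(history, -12.5))  # clear-cut-like drop: True
```

An operational system would of course work per pixel over full scenes, account for seasonal and speckle variability, and confirm alerts over consecutive acquisitions before raising an early warning.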

Tuesday 24 June 16:15 - 17:45 (Room 1.85/1.86)

Presentation: Asset-Level Climate Risk Analysis of Energy Infrastructure Using Smart Tracing and Satellite Imagery

Authors: Anders Pedersen, Mr. Parth Khare, Ms. Clara Ivanescu, Mr. Laurens Hagendoorn, Ms. Elke Krätzschmar
Affiliations: ESMAP - World Bank, World Bank, NEO BV., IABG
The World Bank's Energy Sector Management Assistance Program (ESMAP), in partnership with ESA's Global Development Assistance (GDA) program, has developed a transformative Earth Observation methodology that addresses critical infrastructure mapping challenges in resource-constrained environments. The Smart Tracing Energy Asset Mapping (STEAM) methodology demonstrates how innovative Earth Observation applications can deliver substantial cost efficiencies while maintaining high accuracy in power infrastructure mapping. ESMAP and ESA-GDA leveraged STEAM to deploy an asset-level method for assessing climate-related risks of floods, landslides, and high winds to energy infrastructure. The STEAM methodology is a pioneering solution for cost-effective, large-scale detection of transmission infrastructure, using satellite imagery and deep learning. Traditional mapping of transmission lines typically incurs prohibitive costs and resource demands, limiting applicability in low- and middle-income regions. STEAM addresses these challenges by leveraging a Tower Probability Map, a probabilistic model that selectively targets areas for high-resolution imagery acquisition, resulting in cost savings of up to 92% for pilots conducted in Bangladesh and the Dominican Republic. In Bangladesh, the World Bank and ESA-GDA team used the STEAM framework to integrate geospatial data for energy infrastructure with climate risk models, identifying hyper-localized vulnerabilities within a 50-meter buffer zone around energy assets. This approach overlays asset locations with environmental and hazard data, such as flood exposure and landslide susceptibility, enabling targeted climate resilience and disaster response planning. The analytical framework can improve grid resilience, the hardening of current energy assets and O&M, and inform future site selection for energy infrastructure development.
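The Tower Probability Map idea — tasking high-resolution imagery only where it is likely to pay off — can be sketched in a few lines. This is an illustration only, not the actual STEAM implementation; the tile grid, probabilities and threshold below are invented:

```python
# Illustrative sketch of probability-guided imagery tasking (hypothetical
# values, not the STEAM implementation). Each tile of a coarse grid carries
# a modelled probability of containing a transmission tower; high-resolution
# imagery is requested only where that probability exceeds a threshold.

def select_tiles(tower_probs, threshold=0.5):
    """Return indices of tiles whose tower probability warrants tasking."""
    return [i for i, p in enumerate(tower_probs) if p >= threshold]

# Toy probability map over a 10-tile strip: only 2 of 10 tiles pass,
# i.e. imagery is purchased for 20% of the area instead of 100%.
probs = [0.02, 0.9, 0.1, 0.05, 0.7, 0.01, 0.03, 0.2, 0.04, 0.1]
selected = select_tiles(probs, threshold=0.5)
print(selected)                      # tiles to task: [1, 4]
print(len(selected) / len(probs))    # fraction of area imaged: 0.2
```

The design trade-off is the threshold: lowering it catches more towers at higher imagery cost, which is why the abstract reports acquisition reduced to roughly 10 percent of a country's area rather than zero false negatives.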
The methodology integrates Earth observation data with open-source platforms like OpenStreetMap to deliver actionable insights for infrastructure monitoring and planning. STEAM's design focuses on efficient data use and scalability, reducing reliance on comprehensive ground surveys and enabling its adaptation to diverse geographies and data availability contexts. These attributes make STEAM an essential tool for addressing infrastructure gaps in resource-constrained settings, offering replicable processes for governments and development agencies. The STEAM methodology demonstrates significant potential for transforming the detection and mapping of transmission infrastructure in resource-constrained environments. This novel approach addresses critical challenges in infrastructure mapping by integrating probability maps, deep learning, and strategic sampling. The methodology's success in reducing imagery acquisition needs while maintaining high accuracy underscores its value for utilities and policymakers, particularly in developing countries where comprehensive grid data is often lacking. When applied to the pilot countries, STEAM delivered significant cost savings by focusing the analysis on high-probability areas, reducing the required satellite imagery acquisition to approximately 10 percent of each country's area. Post-processing and quality assurance steps ensured the accuracy and completeness of the final maps; rigorous post-processing significantly enhanced results, with accuracy improvements ranging from 12 to 28 percentage points compared to the results before post-processing. This cost-effective and efficient approach is particularly well-suited for countries lacking comprehensive geospatial data on their energy infrastructure.
The innovative approach addresses critical challenges faced by utilities, policymakers, and communities in developing countries by providing accurate, cost-effective, and timely information on transmission assets. The high-precision mapping achieved through the STEAM methodology translates into tangible operational benefits for utilities. This level of detail enables utilities to make more informed decisions, potentially leading to significant improvements in grid efficiency and reliability. Ultimately, this technology has the potential to transform energy infrastructure management globally, leading to more resilient, efficient, and sustainable power systems. Granular, tower-level geospatial data is a game-changer for utility operations and asset management. By replacing outdated, approximate information with precise, up-to-date data, utilities can significantly enhance grid performance and efficiency. Optimized maintenance scheduling, efficient crew dispatch, and accelerated disaster response become possible through accurate tower location data. The precise location of each transmission tower is crucial for efficient crew dispatch, both for routine maintenance and emergency repairs, particularly in challenging terrains. In the event of natural disasters or other grid disturbances, having accurate infrastructure locations can significantly reduce response times and improve service restoration, for example in mountainous terrain, where a meter of difference can have a significant impact on topography and therefore also on flooding risk (U.S. Department of Energy, 2024). Moreover, predictive maintenance strategies, enabled by correlating asset conditions with environmental factors, contribute to cost savings and improved grid reliability (Shayesteh et al., 2018).
The data outputs can also support innovative asset management services such as drone-based inspections and can enable grid technologies for enhanced power flow optimization, real-time monitoring, and demand response (Mokhade et al., 2020). With this detailed asset-level data, utilities can achieve substantial improvements in operational efficiency, reducing costs while enhancing grid reliability and resilience against evolving challenges. The value of this methodology extends beyond day-to-day utility operations and could play a crucial role in climate resilience planning. As climate-related risks to energy infrastructure increase, accurate and regularly updated geospatial data becomes essential for identifying vulnerable sections of the grid based on terrain, vegetation, and climate projections. This information allows utilities and policymakers to develop targeted hardening strategies for at-risk infrastructure and improve disaster response planning (PLOS Climate, 2023). By providing a comprehensive and up-to-date view of transmission networks, our methodology supports more informed decision-making in climate adaptation strategies for the energy sector. In developing regions, where energy access remains a significant challenge, our methodology can support more efficient infrastructure planning and expansion efforts. A comprehensive mapping of existing infrastructure networks can help in planning for new lines, considering factors such as terrain, existing settlements, and environmental sensitivities (Gorsevski, P. V. et al., 2013). This can be particularly useful in the context of lower and middle-income countries, where reliable georeferenced data on energy infrastructure is often incomplete or missing entirely. Moreover, the ability to detect transmission infrastructure remotely can be critical, for example in areas affected by conflict or natural disasters and where on-the-ground georeferencing is not possible (Xu et al., 2024). 
The methodology can provide baselines for rapid damage assessments and help prioritize grid rehabilitation efforts, thereby enhancing the resilience of energy systems to external shocks. Governments can significantly benefit from the availability of accurate and accessible geospatial data on transmission infrastructure. By leveraging this information, governments can optimize strategic planning, enhancing grid reliability and disaster response capabilities. Furthermore, this data can be instrumental in accelerating the transition to a clean energy future by facilitating the identification of suitable locations for renewable energy projects and assessing grid integration challenges (IRENA, 2023). Ultimately, the combination of effective planning, resource optimization, and a focus on clean energy contributes to robust economic development and improved quality of life for citizens. Multilateral development banks (MDBs), together with ESA-GDA as geospatial partner, can leverage this data to identify investment opportunities, assess project feasibility, and monitor the performance of energy infrastructure projects. MDBs and ESA-GDA can therefore play a crucial role in enabling data-driven decision making for energy infrastructure investments in developing countries. By providing detailed information on the existing grid, MDBs can give utilities and governments the information needed for investment decisions on grid expansions and new interconnections. Moreover, with access to precise geospatial data, MDBs can, in partnership with ESA-GDA, effectively monitor the performance of energy infrastructure projects, ensuring that investments deliver the expected outcomes, and identify areas for improvement. Ultimately, this data-driven approach strengthens MDBs' capacity to support the development of resilient and efficient energy systems in their target countries.
Making detailed power grid information accessible to the public serves multiple purposes, from enhancing community safety to fostering innovation. This democratization of data empowers individuals and communities to make informed decisions about land use and development, ensuring safe coexistence with power infrastructure (Broto & Kirshner, 2020). By enhancing transparency in the energy sector, this data supports informed policymaking, investment planning, and public engagement. Moreover, it accelerates academic research and innovation in energy systems, enabling comprehensive studies on grid expansion, vulnerability assessments, and renewable energy integration (Heylen et al., 2018). This open approach to energy infrastructure data creates a foundation for cross-border energy planning and coordinated disaster response efforts. The publication of the method as a public good can contribute to innovations in grid management and planning, and thereby to a more sustainable and equitable energy future. The STEAM methodology contributes to the growing toolkit for energy system planning and management, with potential for global application. As the energy sector continues to evolve, facing challenges in sustainability, accessibility, and resilience, data-driven approaches like the one presented in this paper will play an increasingly important role. While further refinement and validation across diverse global contexts are necessary, this approach presents a significant step towards more informed decision-making in energy infrastructure planning and management worldwide. Future research could focus on expanding the application to a broader range of geographical contexts to validate its global robustness and expand its impact. Looking ahead, ESA-GDA can play a key role in expanding the use and impact of the methodology developed with ESMAP, and thereby contribute to the climate resilience of energy infrastructure globally.

Tuesday 24 June 16:15 - 17:45 (Room 1.85/1.86)

Presentation: Connecting people – EO as a driver for knowledge-based finance decisions for multiple infrastructure projects in Uganda

Authors: Kristin Fleischer, Peter Schauer, Mattia Marconcini, Elke Kraetzschmar
Affiliations: IABG, DLR
The north of Uganda is, like few other regions, impacted by a significant influx of refugees from neighbouring, conflict-affected countries, who become part of the population in the long term. The latest UN figures, from November 2024, count over 1.7M refugees hosted in the whole country. Within this setting, the existing infrastructure serving social and economic needs must be broadened. The World Bank is engaging in the region to extend and facilitate safe access of the local communities to schools, markets, hospitals and other social services, in order to develop a safe and viable livelihood and to foster economic growth. Making these investments sustainable and beneficial for as many people as possible is reflected in various Sustainable Development Goals, i.e. SDG3 Good Health and Well-being, SDG4 Quality Education, SDG5 Gender Equality, SDG11 Sustainable Cities and Communities. Sustainable investment requires comprehensive knowledge of the status quo and dynamics in the focus region. The input data considered, ideally ranging from an up-to-date situation picture to prior developments within the region and the environmental preconditions, greatly influence the impact an investment can achieve. Economists widely use statistical information during the project planning phase, often linked to administrative units, and combine it with on-site investigations once the investment projects enter the preparation phase. Within the ESA GDA - Transport and Infrastructure project, collaborations between the GDA team and the WB counterparts started to emphasise the use of highly granular EO data and information, as well as to raise awareness of the potential this data has for the decision-making process. The IFI activities are focused on sustainable development for the local population, peacebuilding, as well as the integration of displaced persons into the communities, while addressing long-term needs for social stability and economic growth.
The presented results support multiple World Bank teams and projects in Uganda, where decision-making processes depend on reference data and statistics that overlap with reality only to a limited extent. While national statistics represent the local population, UN-based statistics focus solely on monitoring refugee camps. The baseline for the analysis outlined here is DLR's World Settlement Footprint (WSF) tracker, derived by jointly exploiting Sentinel-1 and Sentinel-2 imagery, which systematically outlines settlement extent growth at 10m spatial resolution at a 6-month pace from July 2016 to (so far) July 2024. Whereas statistics keep the local population and refugees separate, the inclusive WSF supports the understanding of the established refugee camps (location and extent) and their impact on the region over time. One main objective of this engagement is to provide key figures related to the distribution and categories of schools and their adequate accessibility to pupils (distance and time, safety). The team was fortunate to benefit from the latest Uganda Census 2024, conducted by the Uganda Bureau of Statistics, which provides details on population structure. Linking the latter to the settlement extent allows local demand and deficiencies to be estimated, considering in-depth aspects related to age structure and gender, as well as the potential population growth within the upcoming years. Another WB project addressed here focuses on the expansion and extension of the road network to support local transport and to lower barriers to exchange and connection within and between the cities and refugee camps. The geospatial analysis conducted provides a valuable input for planning impactful investments.
These investments may relate to encouraging favourable economic preconditions by supporting and simplifying public transport, in order to foster the mobility of people (labour force) and goods; or to selecting the sites best suited for new schools or for the extension of existing schools, positively influencing public transport (extension and densification as a commuter medium), making targeted investments in safer transport infrastructure (e.g. traffic lights or speed reduction along pupils’ commuter routes), while preventing road extensions that would create more dangerous routes to school. The examples shown include and combine open information layers, considering their advantages and limitations, and conjoin these with the most recent EO data and geospatial analytics to retrieve highly transferable solutions. Although the approaches are kept generic in a first step, the ability to consecutively tailor them to various specifics is essential. Tailoring hereby ranges from (a) the theme addressed (e.g. schools, hospitals, social services, markets, or other commercial centres), to (b) the response to the local dynamics (urban growth characteristics), and to (c) the enrichment by additional information (such as data collected via citizen science, e.g. counting bus customers, road conditions, etc.). The engagement process with the bank teams, as an agile development approach, enables both sides to tackle new ideas and options within the development process. It allows the service provider to better comprehend the challenges of the bank teams and local stakeholders, and to respond appropriately to their needs (different service combinations, scale and information depth), keeping in mind the overarching aspiration to achieve transferable and scalable results.
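The school-accessibility figures described above boil down to a simple geometric question: what share of the population lives within a given distance of the nearest school? A toy sketch (hypothetical coordinates and populations, straight-line distance instead of real travel routes) illustrates the computation:

```python
# Illustrative sketch only (invented data): share of settlement population
# within a given straight-line distance of the nearest school.
import math

def share_within(settlements, schools, max_km):
    """settlements: list of (x_km, y_km, population); schools: list of (x_km, y_km).
    Returns the fraction of total population within max_km of a school."""
    total = sum(pop for _, _, pop in settlements)
    served = sum(
        pop
        for x, y, pop in settlements
        if min(math.hypot(x - sx, y - sy) for sx, sy in schools) <= max_km
    )
    return served / total

# Toy scene: three settlements along a 10 km strip, two schools.
settlements = [(0, 0, 100), (5, 0, 200), (10, 0, 50)]
schools = [(1, 0), (9, 0)]
print(round(share_within(settlements, schools, max_km=3), 3))  # prints 0.429
```

A real analysis of the kind described in the abstract would replace Euclidean distance with road-network travel time and safety-weighted routes, and draw populations from the census-linked settlement extent rather than toy points.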

Tuesday 24 June 16:15 - 17:45 (Room 1.85/1.86)

Presentation: Operationalizing the Use of Earth Observation Data for Agricultural Statistics: The Case of Acreage Estimates in Pakistan

Authors: Boris Norgaard, Sophie Bontemps, Pierre Houdmont, Olivier Durand, Doctor Babur Wasim, Pierre Defourny
Affiliations: UClouvain-Geomatics, World Bank, Asian Development Bank
Pakistan is the fifth most populous country in the world and is expected to experience significant population growth in the coming decades, posing serious challenges to food security. At the same time, climate change is increasing the frequency of extreme weather events, such as droughts and floods, which jeopardize food production. Agriculture thus plays a crucial role in the country's resilience and development while experiencing significant pressures. In addition to these food security issues, the Ministry of Agriculture faces the additional challenge of transitioning to more sustainable practices, particularly in water management. The two major staple crops in Pakistan are wheat and rice, with cotton and sugarcane being the two major cash crops. The agricultural calendar comprises two main cropping seasons, i.e., Kharif from April to October-November and Rabi from October-November to April. More than 82% of the cultivated land is irrigated and 18% is rainfed, which emphasizes the importance of reaching sustainable water use in the near future. Pakistan has significantly lower water availability compared to other countries, classifying it as “water-stressed” and approaching “water scarcity”. The “National Water Policy 2018” has identified this emerging water crisis and aims at providing an overall policy framework and guidelines for comprehensive policy action. In this context, the integration of technologies such as remote sensing into agricultural monitoring systems presents a significant opportunity for evidence-based decision-making. The ability to provide timely, accurate, and cost-efficient data on crop acreage can complement traditional survey methods, enabling better planning and resource allocation. Within the ESA Global Development Assistance (GDA) programme, a collaboration was initiated with the World Bank to demonstrate the usefulness of EO data to estimate winter wheat acreage during the Rabi season in Sindh Province.
This collaboration was then widened to the Asian Development Bank (ADB) in order to scale up over four provinces (Punjab, Sindh, Balochistan and Khyber Pakhtunkhwa), focusing on the main crops of the summer season at provincial level. For both experiments, area sampling frames were designed to collect statistically sound data compatible with EO data, aiming at estimating the acreage of the main crops during both cropping seasons. During the Rabi season, the survey was jointly conducted by the Sindh Crop Reporting Service (CRS) and our team, ensuring capacity building in the field. In total, 2240 points were collected by two enumeration teams during a 16-day field mission. For the 2024 Kharif season field campaign, the survey was conducted autonomously by the Sindh CRS team, supervised remotely by us. The data were collected as expected, in sufficient quantity and with good quality, showing that the lessons from the Rabi season had been taken up by the CRS staff. In the other provinces, the ADB staff was trained and then conducted the field campaign in coordination with local provincial staff. For this season, more than 21 000 points were collected by enumeration teams scattered across the four provinces. The ESA Sen4Stat toolbox was used to automate the EO data processing pipeline and generate seasonal crop type maps using state-of-the-art methods. The accuracy of the obtained maps was good, reaching an F1-score of 0.85 for the Rabi season wheat map, and between 0.85 and 0.93 for rice, cotton and sugarcane for the Kharif season. Naturally, the quality of the collected ground data contributed to the quality of these maps. The maps were then combined with the agricultural surveys through regression estimators to obtain acreage estimates. During the Rabi season in Sindh, the estimate obtained was 445 000 hectares of wheat, which is aligned with the official statistics provided by the CRS.
For the summer crops, the obtained acreage estimates were also reliable and of comparable order of magnitude to official sources. The added value of EO data highlighted by these pilots was the increased reliability of the statistics (reduced estimation error when using EO data), the timeliness (estimates were available a few weeks after the end of the season) and the possibility to obtain these estimates by district and not only at province level. Capacity building for the uptake of EO technologies is currently ongoing. Operationalizing the use of Sen4Stat – and of EO data in general – requires a step-by-step approach, starting with regional pilots and specific objectives in order to raise awareness and to demonstrate and convince the governments and local stakeholders of the added value of EO-based information. This step has been successfully achieved, and we now need to expand both the size of the pilots and their complexity to make sure the proposed solution is fully operational and performs as expected. Capacity building will be fully part of this process to make sure that the skills for a proper use of the new technologies are established and integrated locally.
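The regression-estimator step mentioned above (combining the field survey points with the EO crop map) can be sketched in a few lines. The function name and the simple difference-estimator form below are illustrative only, not the actual Sen4Stat implementation:

```python
import numpy as np

def regression_estimate(field_frac, map_frac, map_mean_frac, total_area_ha):
    """Classical regression estimator for crop acreage.

    field_frac    : crop fraction observed in the field at the sample points
    map_frac      : crop fraction read from the EO map at the same points
    map_mean_frac : mean crop fraction of the EO map over the whole region
    total_area_ha : total cultivated area of the region, in hectares
    """
    y = np.asarray(field_frac, dtype=float)
    x = np.asarray(map_frac, dtype=float)
    b = np.cov(y, x)[0, 1] / np.var(x, ddof=1)      # regression slope
    # survey mean, corrected by how much the map deviates at the samples
    y_reg = y.mean() + b * (map_mean_frac - x.mean())
    return y_reg * total_area_ha
```

When the map agrees well with the ground observations, the correction term is driven by the whole-region map mean rather than by the sample mean alone, which is what reduces the estimation error relative to a survey-only estimator.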

Tuesday 24 June 16:15 - 17:45 (Hall F1)

Session: C.03.12 Sentinel-1 Mission: Advances in Remote Sensing After a Decade in Space

The Sentinel-1 mission has reached a decade in space. Since its launch, Sentinel-1 has revolutionized SAR-based remote sensing, becoming a cornerstone in Earth observation with its unparalleled capabilities and global coverage.

The session will address the way Sentinel-1 has transformed our understanding of the Earth's surface dynamics and enabled groundbreaking applications across various domains. From land cover monitoring to mapping natural disasters, assessing agricultural practices, studying urban ground motion, evaluating forest resources, and exploring coastal and marine environments, Sentinel-1 has been instrumental in advancing our knowledge and addressing critical societal challenges.

The session will present cutting-edge research and innovative methodologies, showcasing the latest developments in geophysical retrieval techniques, data fusion with complementary sensors, and the integration of machine learning and artificial intelligence approaches for enhanced analysis and interpretation of Sentinel-1 data.

Moreover, this session will highlight the importance of international cooperation in leveraging Sentinel-1 data for global initiatives and fostering collaboration among diverse stakeholders. Through collaborative efforts, we can maximize the potential of Sentinel-1 and amplify its impact on environmental monitoring, disaster management, and sustainable development worldwide.

Presentations and speakers:


A decade of advancing Forest Disturbance Monitoring and Alerting with Sentinel-1: Progress and Future Directions


  • Johannes Reiche - WUR

Why Sentinel-1 has been a game changer for monitoring dynamic hydrological processes


  • Wolfgang Wagner - TUW

Sentinel-1 reveals climatic changes in the Arctic sea ice at unprecedented detail


  • Anton Korosov - NERSC

Sentinel-1 operational DInSAR services for monitoring surface displacements of the Italian volcanoes: 10 years of observations and data analysis


  • Riccardo Lanari - IREA / CNR

A Decade of Ice Sheet Monitoring Using Sentinel-1 SAR Data: Advancements and Opportunities


  • Thomas Nagler - Enveo

Fostering Tropical Cyclone research and applications with Synthetic


  • Alexis Mouche - Ifremer

Tuesday 24 June 16:15 - 17:45 (Hall G1)

Session: D.02.11 Super-resolution in Earth Observation: The AI change of paradigm

The design of the Sentinel-2 sensor, with spatial resolutions of 10 m, 20 m and 60 m for different spectral bands, combined with the resources now offered by deep learning methods, was a key turning point for the field of super-resolution. Spatial resolution is a characteristic of the imaging sensor, i.e. the bandwidth of its transfer function; super-resolution means enlarging the range of spatial frequencies and the bandwidth of that transfer function. Classical approaches treated this mainly in two ways: i) as an ill-posed inverse problem, with solutions constrained by strong hypotheses that are very seldom fulfilled in practical cases; ii) based on physical models, as in pansharpening, the design of optical sensors with a half-pixel shift in the array, or, in the case of SAR, wavenumber tessellation or the use of information from the side lobes of multistatic SAR. In reality, super-resolution is a much broader area: it may also refer to the wavelength bandwidth of multi- or hyperspectral sensors, the radiometric resolution, the characterization of single-pixel cameras based on compressive sensing, 3D estimation in SAR tomography, an enhanced “information” resolution (e.g., estimating tree density from a low-resolution observation instead of counting trees in a very high-resolution one), or enhanced resolution of ocean wind estimation from SAR observations.

With the advent of deep learning, super-resolution entered a new era. Deep models with huge numbers of parameters, trained on big datasets, opened a new alternative for super-resolution: predicting high-resolution data from a low-resolution sensor by training a model with high-resolution data. The new paradigm no longer requires strong hypotheses but suffers from the black-box syndrome of deep learning. Thus, new methods are required, such as hybrid methods using the sensor image formation models, deriving consistency criteria for the physical parameters, and verifying cal/val criteria for the super-resolved products. The session invites submissions for any type of EO data and will address these new challenges for Copernicus, Earth Explorer and related sensors.

Tuesday 24 June 16:15 - 17:45 (Hall G1)

Presentation: Learning Sentinel-2 Multi-Date Super-Resolution by Self-Supervision

Authors: Jérémy Anger, Anderson Nogueira Cotrim, Gabriele Facciolo
Affiliations: Kayrros, ENS Paris-Saclay, University of Campinas
Super-resolution (SR) is an important task in satellite imagery analysis, enhancing spatial resolution to recover finer details essential for applications like environmental monitoring, urban planning, and disaster response. While many state-of-the-art SR methods rely on cross-sensor datasets, this dependency introduces challenges such as radiometric and spectral inaccuracies, geometric distortions, and temporal mismatches. To address these issues, we previously proposed a self-supervised framework for single-frame SR on Sentinel-2 L1B imagery [1], leveraging overlapping regions between the Multi-Spectral Instrument (MSI) detectors to train an SR network without requiring ground-truth high-resolution data. Building on this foundation, we now present significant advancements that improve performance and broaden the applicability of our approach. First, we extend the framework to Sentinel-2 L1C and L2A imagery, increasing usability for real-world applications. During training, paired patches from overlapping regions of L1C and corresponding L1B imagery are used. As in [1], the L1B imagery is used to supervise the training, containing complementary aliased information with high radiometric accuracy despite minor geometric misalignments caused by the sub-second acquisition delay between detectors. Dense optical flow estimation is employed to correct these disparities, ensuring accurate alignment. Remarkably, the method generalizes well to L2A imagery, even though training is conducted on L1C inputs. Second, we incorporate the 20m spectral bands of Sentinel-2, previously excluded in [1]. These bands are upsampled and concatenated to the 10m input bands. The self-supervised training framework is adapted to include these additional inputs, achieving a restoration of both the 10m and the 20m bands to 5m/pixel resolution. Experiments demonstrate that the inclusion of 10m bands enhances the restoration quality of the 20m bands, leading to better overall performance. 
Finally, we extend our method to a multi-frame SR setting by adopting a state-of-the-art architecture. Using a permutation-invariant network, our model supports both single-image and multi-date scenarios, handling from 1 to 15 input frames. Multi-frame inputs mitigate the inherent limitations of single-frame SR by leveraging complementary information across time, even in suboptimal acquisitions. We evaluate restoration quality, temporal stability, and robustness against scene changes, demonstrating the method's suitability for tasks like change detection and land monitoring. These improvements significantly advance the state of self-supervised super-resolution for Sentinel-2 imagery, providing superior accuracy, versatility, and resilience for a wide range of satellite imagery applications. References: [1] Nguyen, Ngoc Long, et al. "L1BSR: Exploiting detector overlap for self-supervised single-image super-resolution of Sentinel-2 L1b imagery." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
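The self-supervised objective described above can be sketched in numpy: the super-resolved prediction is degraded back to detector resolution and compared against the flow-aligned observation from the overlapping detector, so no HR ground truth is needed. The nearest-neighbour warp and the plain decimation below are illustrative stand-ins for the actual dense-flow estimator and sensor model:

```python
import numpy as np

def warp_nn(img, flow):
    """Nearest-neighbour warp of img by a dense flow field (dy, dx),
    standing in for the dense optical-flow alignment step."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    yy = np.clip(np.rint(ys + flow[0]).astype(int), 0, h - 1)
    xx = np.clip(np.rint(xs + flow[1]).astype(int), 0, w - 1)
    return img[yy, xx]

def self_supervised_loss(sr_pred, lr_obs, flow, scale=2):
    """L1 loss between the decimated SR prediction and the aligned
    overlapping-detector observation."""
    ds = sr_pred[::scale, ::scale]       # crude decimation as the sensor model
    aligned = warp_nn(lr_obs, flow)
    return float(np.abs(ds - aligned).mean())
```

In a real training loop this loss would be backpropagated through the SR network; here a perfectly upsampled prediction with zero flow yields zero loss.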

Tuesday 24 June 16:15 - 17:45 (Hall G1)

Presentation: Toward Real-World Hyperspectral Image Super-Resolution

Authors: Paweł Kowaleczko, Maciej Ziaja, Daniel Kostrzewa, Michal Kawulok
Affiliations: KP Labs, Silesian University of Technology, Warsaw University of Technology
Hyperspectral images (HSIs) are a valuable source of information that has been found useful in a variety of remote sensing applications, including Earth surface classification, precision agriculture, environmental monitoring, and more. However, the high spectral resolution of HSIs is achieved at the cost of a decreased spatial resolution that is insufficient in many practical scenarios. Therefore, the problem of super-resolving HSIs, aimed at increasing the spatial resolution of the spectral bands, is an actively explored field of remote sensing. This process can be performed either by relying solely on a hyperspectral cube, or by exploiting an auxiliary source of high-resolution (HR) information, as is done in pansharpening. In both cases, the state-of-the-art techniques are based on deep learning, and their reconstruction quality heavily depends on the available training data. An important limitation of super-resolution (SR) of HSIs is the use of simulated data for training: low-resolution (LR) spectral bands are obtained by treating an original HSI (later considered as an HR reference) with Wald’s protocol, which degrades the individual channels and decreases their spatial resolution. Although this process allows for generating amounts of data sufficient for training deep models, the reported results are often overoptimistic and cannot be reproduced for original (i.e., not downsampled) HSIs due to the domain gap between simulated and real-world datasets. This problem is also inherent in SR of single-channel or multispectral images, and an increasing number of such methods have already been trained with real-world datasets comprising LR and HR images acquired by sensors of different resolutions. 
While there are a few such benchmarks (e.g., the PROBA-V dataset published by the European Space Agency, or the WorldStrat and MuS2 benchmarks that match Sentinel-2 images with HR references acquired with SPOT and WorldView-2 data), creating real-world datasets composed of LR and HR HSIs would be much more challenging and costly. In the research reported here, we focus on developing real-world HSI SR methods. At first, our efforts are concerned with task-oriented validation, in which we evaluate the super-resolved HSIs in specific use cases. These include real-life applications that exploit various features of HSIs, thereby verifying whether SR allows for information gain in the spatial domain and whether the spectral properties are preserved. Furthermore, we demonstrate how the existing real-world datasets can be exploited for training deep networks that super-resolve HSIs – we use them alongside the simulated hyperspectral data, and we employ them to improve the simulation itself. Finally, we show that multi-image SR techniques trained from real-world datasets can be applied to panchromatic images in order to enhance the high-frequency details of the pansharpened spectral bands. In our study, we exploit HSIs acquired within the PRISMA mission and report both quantitative and qualitative results that overall confirm the effectiveness of the proposed approaches.
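In its most minimal form, Wald's protocol referenced above amounts to a low-pass filter followed by decimation. The box kernel used in this sketch is purely illustrative; real pipelines use a sensor-matched (e.g. MTF-based) kernel:

```python
import numpy as np

def walds_protocol(band, factor=4):
    """Simulate an LR band from an HR one: box-filter, then decimate.
    The original band then serves as the HR reference for training."""
    h, w = band.shape
    h, w = h - h % factor, w - w % factor      # crop to a multiple of factor
    b = band[:h, :w].astype(float)
    # averaging over factor x factor blocks == box low-pass + decimation
    return b.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
```

The domain gap discussed in the abstract arises exactly because this simulated degradation rarely matches the true acquisition process of the sensor.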

Tuesday 24 June 16:15 - 17:45 (Hall G1)

Presentation: Machine learning for population displacement assessment in northern Afghanistan

Authors: Maximilien Houël
Affiliations: SISTEMA GmbH
In crisis contexts, such as Afghanistan from 2021 on, monitoring population migration is necessary to understand local (national, sub-national, cross-border) pressure. Such contexts make on-site assessment difficult, but Earth Observation (EO) offers capabilities to follow the evolution of a territory over time, enabling near-real-time monitoring. Within the Copernicus programme, the Sentinel-2 family provides continuous optical imagery with high spatial and temporal resolution (10 m, at most every 5 days). Such data cannot monitor population directly, but they provide information on geographic objects that can be used as a proxy. Indeed, Sentinel-2 allows identifying formal and informal settlements, which can serve as indicators of population settling in or leaving an area. The proposed work focuses on border cities between Afghanistan, Tajikistan and Uzbekistan, namely Mazar, Kholm, Konduz and Khwahan in Afghanistan, Balkh and Khorog in Tajikistan, and Termiz in Uzbekistan. The analysis was performed for the year 2022, with 2020 and 2021 as references for changes. The methodology foresees three main steps: - Sentinel-2 provides optical imagery at 10 m; to improve the image analysis, a resolution enhancement is applied through a Super-Resolution (SR) model. The model is trained with Sentinel-2 visible bands as input and a mixed dataset of PlanetScope and WorldView-2 imagery as reference. The architecture corresponds to the state-of-the-art Enhanced Deep Super-Resolution (EDSR) network, known to preserve the overall structure of the input data as much as possible. This step brings Sentinel-2 to a new spatial resolution of 3.3 m with a consistent spectral signature, used as input to the subsequent processing steps. - On top of the super-resolution, a UNet with ResNet blocks has been developed to perform a segmentation task focused especially on buildings. 
The reference corresponds to a mix of several open building-layer datasets, such as OpenStreetMap, the Microsoft building dataset and Google Open Buildings. From the super-resolved Sentinel-2 images, buildings are detected automatically. This model provides building masks for all dates over the area of interest, from which potential changes over time can be identified. - Finally, an object-based detection algorithm is applied to the building layers to extract the changes: buildings newly appearing or removed over time. The data analysis workflow identified new settlements at the border of the Tajikistan cities that are directly connected with Afghan cities, supporting the hypothesis of population movement from one country to the other during the analysed years. In the city of Balkh in Tajikistan, three main areas of change were identified: two in the east and south of the city, showing the installation of shelters over agricultural fields, and one closer to the city centre, with a decrease of urban vegetation in favour of new settlements; moreover, an increase of greenhouses indicates increased agricultural activity to sustain a growing population. The city of Mazar in Afghanistan shows increased urbanization in both the north and south of the city following road arrangements; moreover, a block organization of urbanization can be spotted, filled in through the years. Termiz in Uzbekistan shows a new neighbourhood in the north-west, with new settlements and new concrete roads linking it with the city centre. The other investigated areas (Kholm, Konduz, Khwahan and Khorog) did not show major changes over time, even though they are located close to the borders and on main road links. The developed methodology provides a generic and automatic pipeline that increases the flexibility and speed of the analysis. 
The workflow can then be applied to any area of interest, providing material for reports and maps for decision-making purposes.
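The final change-extraction step can be sketched at pixel level. The real pipeline is object-based, but the core comparison between two dates' building masks is the same (the function and variable names here are illustrative):

```python
import numpy as np

def building_changes(mask_t0, mask_t1):
    """Compare two binary building masks from different dates.

    Returns (new, removed) built-up pixel counts. The actual method
    groups these pixels into objects before reporting changes.
    """
    new = mask_t1 & ~mask_t0        # built-up in t1 but not in t0
    removed = mask_t0 & ~mask_t1    # built-up in t0 but gone in t1
    return int(new.sum()), int(removed.sum())
```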

Tuesday 24 June 16:15 - 17:45 (Hall G1)

Presentation: A data fusion method for Sentinel 2 super-resolution via diffusion models learned using harmonized NAIP images

Authors: Muhammad Sarmad, Michael C. Kampffmeyer, Arnt-Børre Salberg
Affiliations: Norwegian Computing Center, UiT The Arctic University of Norway
The escalating demand for high-resolution Earth Observation (EO) data for various applications has significantly influenced advancements in image processing techniques. This study proposes a workflow to super-resolve the 12 spectral bands of Sentinel-2 Level-2A imagery to a ground sampling distance of 2.5 m. The method leverages a hybrid approach, integrating advanced diffusion models with image fusion techniques. A critical component of the proposed methodology is the super-resolution of the Sentinel-2 RGB bands to generate a super-resolved Sentinel-2 RGB image, which subsequently serves in the image fusion pipeline that super-resolves the remaining spectral bands. The super-resolution algorithm is based on a diffusion model and is trained using the extensive, freely available National Agriculture Imagery Program (NAIP) dataset of aerial images. To make the super-resolution algorithm, trained on NAIP images, applicable to Sentinel-2 imagery, image harmonization and degradation were necessary to compensate for the inherent differences between NAIP and Sentinel-2 imagery. To address this challenge, we utilised a sophisticated degradation and harmonisation model that accurately simulates Sentinel-2 images from NAIP data, ensuring the harmonised NAIP images closely mimic the characteristics of Sentinel-2 observations after resolution reduction. To investigate whether learning the diffusion model using a large dataset of airborne images like NAIP provides better results than learning the model using a smaller satellite-based dataset like WorldStrat of high-resolution SPOT images, we performed a comparative analysis. The results demonstrate that models trained with harmonised and correctly simulated datasets like NAIP significantly outperform not only those trained directly on SPOT images but also other existing super-resolution models. 
This finding reveals that learning with more data can be beneficial if the data are properly harmonised and degraded to match the Sentinel-2 images. We performed a comprehensive evaluation using the recently established open-SR test methodology to validate the proposed model across multiple super-resolution metrics. This testing framework rigorously evaluates the super-resolution model on metrics beyond the traditional PSNR, SSIM, and LPIPS: the open-SR test evaluates the model on metrics that measure its consistency, synthesis, and correctness. The proposed super-resolution model outperformed several current state-of-the-art models based on the comprehensive open-SR test framework. In addition, visual comparison further established the superior performance of our model in both urban and rural scenarios. An important component of the proposed model is the super-resolution of all 12 Sentinel-2 Level-2A bands, contrary to previous work, which has mainly focused on RGB band super-resolution. The proposed fusion pipeline successfully utilises the super-resolved image to obtain an enhanced 12-band Sentinel-2 image, similarly to pansharpening techniques. We show qualitative and quantitative results on all 12 bands that demonstrate the seamless performance of the fusion method in super-resolution. This study not only showcases the potential of combining AI-driven super-resolution models with image fusion techniques for enhancing EO data resolution but also addresses the critical challenges posed by the diversity in data sources and the necessity for accurate generative models in training neural networks for super-resolution tasks.
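A toy version of the harmonisation-and-degradation idea can be written as a block average (degradation) followed by first-order statistics matching (harmonisation). The model described in the abstract is far more sophisticated, accounting for MTF, spectral response and noise; this sketch only illustrates the principle:

```python
import numpy as np

def harmonize_and_degrade(naip_band, s2_mean, s2_std, factor=4):
    """Block-average an HR aerial band down to Sentinel-2 scale, then
    impose target Sentinel-2 first-order statistics (mean and std)."""
    h, w = naip_band.shape
    h, w = h - h % factor, w - w % factor              # crop to a multiple of factor
    lr = naip_band[:h, :w].astype(float).reshape(
        h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    z = (lr - lr.mean()) / (lr.std() + 1e-8)           # standardise
    return z * s2_std + s2_mean                        # match target statistics
```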

Tuesday 24 June 16:15 - 17:45 (Hall G1)

Presentation: Challenges in Sentinel-2 Single Image Super-Resolution for quantitative remote-sensing

Authors: Julien Michel, Ekaterina Kalinicheva, Jordi Inglada
Affiliations: CESBIO (Université de Toulouse, CNES, CNRS, INRAE, IRD, UT3)
Deep-learning based Single Image Super-Resolution (SISR) of Sentinel-2 images has received a lot of attention over the past decade, with target resolutions ranging from 5 m to 1 m. Training and evaluating such deep-learning models relies either on simulated datasets, where Sentinel-2 images are simulated from High Resolution images from another sensor at target resolution, or on so-called cross-sensor datasets, leveraging near-simultaneous acquisitions of Sentinel-2 images and images from another sensor at target resolution. Examples of such cross-sensor datasets include the Sen2Venµs dataset [1] and the WorldStrat dataset [2]. With both simulated and cross-sensor datasets, however, inconsistencies between Sentinel-2 and the high-resolution sensor, also referred to as a domain gap, can impair proper training and evaluation. For simulated datasets, this gap mostly occurs at inference time: the model trained with simulated Sentinel-2 data may react poorly to real Sentinel-2 data due to a misfit or incomplete simulation process. With cross-sensor datasets, this gap occurs during the training stage: unwanted geometric and radiometric distortions that are inherent to the cross-sensor setting will be learned by the model during training, resulting in geometric distortion and loss of radiometric precision at inference time. Moreover, evaluating model performance using cross-sensor datasets is also affected by the domain gap, as the usual Image Quality metrics may be affected by radiometric and geometric distortion. In the frame of the Horizon Europe EVOLAND project, which has a dedicated work package on the super-resolution of Sentinel-2 images, we have made several findings and contributions in order to solve the domain gap issues caused by cross-sensor datasets in SISR. Our main contributions are as follows. 
1) We demonstrated that most Image Quality (IQ) metrics usually used for SISR evaluation are sensitive to radiometric and geometric distortions. For instance, Peak Signal to Noise Ratio (PSNR), one of the most widely used metrics, can no longer properly rank images with different levels of blur with respect to a reference image if there is more than 1 high-resolution pixel of registration error. Such metrics cannot be trusted for the evaluation of cross-sensor SISR. 2) We proposed a new set of spatial frequency domain metrics in order to measure the spatial resolution improvement. Those metrics are insensitive to radiometric and geometric distortions. 3) We proposed an auxiliary optical flow UNet that can be used to control geometric distortion during training, but also to measure the amount of learnt geometric distortion during evaluation. 4) We propose a training and evaluation framework for cross-sensor SISR that can be used to prevent geometric and radiometric distortions from leaking into the model during training and from impairing proper evaluation. 5) Through the use of a vanilla ESRGAN [4] on both the Sen2Venµs and WorldStrat datasets, we demonstrated that, unless a proper training strategy such as the one proposed above is used, the geometric and radiometric distortions of cross-sensor datasets are indeed learnt by the models, which will distort the input Sentinel-2 images at inference time. These contributions are summarized in a journal paper [3] currently under review. Additionally, we also developed a model that super-resolves 10 Sentinel-2 spectral bands, including Red-Edge and SWIR bands, to 5 m, using a simulated dataset derived from Sen2Venµs, which can be compared to cross-sensor models thanks to the proposed metrics. 
While it performs only a modest super-resolution factor, this model is, to the best of our knowledge, the only one to jointly process 10 Sentinel-2 bands, and it shines in its radiometric faithfulness with respect to the input Sentinel-2 images. In order to facilitate the use of this model, we have published open-source inference code [5] that allows applying the model to full Sentinel-2 products. In this talk, we will present an overview of these findings, focusing on the lessons learned during the development of the SISR models in EVOLAND. In particular, we will focus on the challenges posed by the domain gap in cross-sensor datasets and how they can be overcome for more faithful SISR models, as well as for more confidence and reliability in the comparison of SISR models in future research. End users and developers of downstream applications will also learn more about the quality of SISR images and about our ready-to-use, publicly available model. [1] Michel, J., Vinasco-Salinas, J., Inglada, J., & Hagolle, O. (2023). Correction: Michel et al. SEN2VENµS, a Dataset for the Training of Sentinel-2 Super-Resolution Algorithms. Data 2022, 7, 96. Data, 8(3), 51. https://doi.org/10.3390/data8030051 [2] Cornebise, J., Oršolić, I., & Kalaitzis, F. (2022). Open high-resolution satellite imagery: The WorldStrat dataset – with application to super-resolution. Advances in Neural Information Processing Systems, 35, 25979-25991. [3] Michel, J., Kalinicheva, E., & Inglada, J. (2024). Revisiting remote sensing cross-sensor Single Image Super-Resolution: the overlooked impact of geometric and radiometric distortion. ⟨hal-04723225⟩ (Submitted to IEEE TGRS) [4] Wang, X., Yu, K., Wu, S., Gu, J., Liu, Y., Dong, C., ... & Change Loy, C. (2018). ESRGAN: Enhanced super-resolution generative adversarial networks. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops. [5] https://github.com/Evoland-Land-Monitoring-Evolution/sentinel2_superresolution
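The registration-sensitivity problem of PSNR and the appeal of frequency-domain metrics can be illustrated with a toy experiment: a pure translation ruins PSNR, but it only changes the phase (not the magnitude) of the image's DFT, so a magnitude-based distance is untouched. This toy distance is illustrative and is not the actual EVOLAND metric:

```python
import numpy as np

def psnr(a, b, peak=1.0):
    """Peak Signal to Noise Ratio in dB."""
    mse = float(np.mean((a - b) ** 2))
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def spectral_distance(a, b):
    """Distance between Fourier magnitude spectra. A pure (circular)
    translation changes only the DFT phase, so this distance ignores
    the registration errors that break PSNR."""
    ma = np.abs(np.fft.fft2(a))
    mb = np.abs(np.fft.fft2(b))
    return float(np.mean((ma - mb) ** 2))
```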

Tuesday 24 June 16:15 - 17:45 (Hall G1)

Presentation: Trustworthy Super-Resolution of Sentinel-2 Products Using Latent Diffusion and Their Applicability to Building Delineation and Flood Detection

Authors: Simon Donike, Cesar Aybar, Enrique Portalés-Julià, Samuel Hollendonner, Luis Gómez-Chova, Dr. Freddie Kalaitzis
Affiliations: University Of Valencia, University of Oxford, TU Vienna
The accessibility of high temporal-resolution Sentinel-2 (S2) multispectral imagery contrasts starkly with the scarcity of high-resolution satellite data, which is often commercially restrictive. Bridging this gap through super-resolution techniques offers a transformative potential for various remote sensing applications, from environmental monitoring to urban planning and disaster management. This research introduces a novel approach employing latent diffusion models (LDMs) to enhance the spatial resolution of S2 imagery by a factor of four, achieving 2.5m resolution from the nominal 10m of the RGB-NIR bands. Included in the final product are pixel-wise confidence metrics, giving users the ability to judge the SR accuracy for their specific downstream tasks. In addition, an extension of this project uses the introduced high-frequency content of the RGB-NIR bands to enhance the 20m bands of S2. Our method adapts latent diffusion techniques to the unique challenges of multispectral remote sensing data, which necessitates maintaining high spectral fidelity while introducing realistic textural details. Our approach exploits the generative capabilities of LDMs guided by an encoding and conditioning mechanism specifically designed for remote sensing imagery. This mechanism ensures spectral consistency by utilizing low-resolution images to condition the diffusion process, thereby aligning generated high-resolution details closely with the ground-truth data. The core of our model, LDSR-S2, is designed to process the additional complexity of multispectral data, including the visible and near-infrared bands, essential for accurate remote sensing analysis. To circumvent the computational demands of diffusion models, which traditionally limit their applicability, we implement the diffusion process in a compressed latent space. This adaptation not only drastically reduces inference times but also allows handling large-scale datasets effectively. 
A distinctive feature of our approach is the integration of uncertainty estimation in the super-resolution process. The stochastic nature of diffusion models allows us to sample the distribution of likely generations, leading to a higher sampling diversity in uncertain regions and therefore a lower certainty score. By generating pixel-level uncertainty maps, our model provides a quantifiable measure of confidence in the super-resolved images, which is critical for applications where decision-making depends on the reliability of the data. Empirical results demonstrate that our model achieves superior performance in both spectral and spatial fidelity compared to existing state-of-the-art methods. The LDSR-S2 not only outperforms in terms of visual quality but also in the robustness of the details added, as evidenced by comprehensive testing across varied landscapes and conditions of S2 data. To further validate the practical utility of the LDSR-S2 model, we explored its application in a building delineation task. We trained different segmentation models using the SEN2NAIP dataset and the Microsoft Buildings Dataset. Each model was trained on low-resolution (LR), high-resolution (HR), and super-resolved (SR) imagery to enable a fair comparison. As expected, the model trained on HR imagery exhibited the best performance due to the higher detail and clarity, which facilitates feature recognition and segmentation. Conversely, the LR model performed the least effectively, struggling with feature extraction due to the lower spatial resolution. The SR model demonstrated significantly better performance than the LR model, although inferior to the HR model. This improvement underscores the value of the super-resolved images, as the introduced high-frequency details evidently aid the segmentation model in learning and identifying building features more effectively than when using the original LR images. 
Not only is the general detection of buildings improved; small buildings in particular are detected in the SR imagery that are not detectable in the LR imagery, with the detection rate of buildings smaller than 4 pixels improved by over 10%. This result highlights that super-resolution can substantially enhance the performance of downstream remote sensing tasks by providing richer information and enabling more accurate analyses. To further validate the results, we apply the model to a natural disaster use case. In October 2024, a significant flooding event occurred in Valencia, captured by an S2 pass two days after the flood. The urgency of the situation necessitated rapid and accurate flood mapping to facilitate emergency response and damage assessment. Traditionally, such efforts would rely on very high-resolution (VHR) satellite acquisitions or aerial imagery, which are not only costly but also suffer from longer revisit times or a limited swath. Using our LDSR-S2 model, we were able to immediately super-resolve the available S2 imagery, effectively reducing the waiting time for high-resolution data, and apply flood mapping models on the SR product. The super-resolved imagery enabled more precise detection and delineation of flood extents, enhancing the accuracy of the flood detection models used. This use case exemplifies how super-resolution can play a critical role in time-sensitive environmental monitoring and disaster response, providing high-quality data swiftly when it is most needed.
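The uncertainty map described above follows from the stochasticity of the diffusion sampler and can be sketched as a Monte-Carlo loop: run the sampler several times on the same input and take the pixel-wise spread of the generations as a confidence proxy. Here `sampler(lr, rng)` is a hypothetical stand-in for one LDSR-S2 reverse-diffusion pass:

```python
import numpy as np

def sr_uncertainty(sampler, lr_image, n_samples=8, seed=0):
    """Draw several stochastic super-resolutions of the same input and
    return the pixel-wise mean and standard deviation of the samples.
    High std = diverse generations = low confidence in that pixel."""
    rng = np.random.default_rng(seed)
    samples = np.stack([sampler(lr_image, rng) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)
```

A fully deterministic sampler would yield a zero uncertainty map; a stochastic one concentrates high std in regions where the generations disagree.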
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Hall N1/N2)

Session: D.05.05 CDSE User Review Meeting - Becoming Part of the Copernicus Data Space Ecosystem: Opportunities, Collaboration, and Community Guidelines

This session offers a comprehensive guide for individuals, researchers, and businesses across both public and private sectors seeking to engage with the Copernicus Data Space Ecosystem. We’ll outline the opportunities for collaboration, the resources and tools available, and the ecosystem’s key participation rules and best practices. Additionally, this session will feature pitches from onboarded and prospective Ecosystem members, where attendees will learn how to leverage open-access Copernicus data, efficiently co-develop their applications and services, and build partnerships that contribute to this dynamic, user-driven ecosystem. Join us to discover how to offer your datasets, services and knowledge while adhering to ecosystem standards, as we grow an impactful Copernicus community together.

Presentations and speakers:


Joining the Ecosystem: A Comprehensive Overview


  • Jurry de la Mar and Uwe Marquard - T-Systems

Presentation by one of the Ecosystem Members


  • Sander Niemeijer – S&T

Interactive panel session


Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 1.14)

Session: F.04.32 Toward an Aquatic Carbon Roadmap as a key integrated contribution to the GST

Following the 2015 Paris Agreement, the Committee on Earth Observation Satellites (CEOS) released a Carbon Strategy to guide the coordination of satellite data efforts supporting the Global StockTake (GST) process.
In this context, significant effort has been undertaken in the past years to understand how Earth Observation data can best support the GST implementation, notably through the writing of a Greenhouse Gas (GHG) Roadmap in 2020 focusing on the provision of atmospheric GHG datasets to the GST process. The Agriculture, Forestry and Other Land Uses (AFOLU) Roadmap followed in 2021. Considering the key role of the Aquatic realm (open and coastal oceans, inland waters) in the global Carbon cycle, ESA, NASA and JAXA are now coordinating the writing of an Aquatic Carbon Roadmap whose objective is to provide a framework with a long-term vision (~ 15+ years) to support space agencies in coordinating and defining the science, observation and policy needs to improve our understanding of the role and changes of carbon in aquatic environments.
This insight session will spotlight the developing Aquatic Carbon Roadmap and bring together contributors from the other CEOS roadmaps to highlight synergies and interconnections across the three efforts towards an enhanced understanding of the Earth as a System within the framework of the global stocktake. It will offer an opportunity to meet, exchange ideas, put the roadmaps in context of other efforts, and advance the efforts of the Aquatic Carbon Roadmap.

Presentations and speakers:


Introduction and CEOS context


  • Marie-Helene Rio - ESA

Global StockTake


  • Ben Poulter - NASA
  • Rosa Roman - JRC

The Greenhouse Gas Roadmap


  • Yasjka Meijer - ESA

The AFOLU roadmap


  • Clement Albergel - ESA

The Aquatic Carbon Roadmap


  • Jamie Shutler - U. of Exeter

Panel discussion


  • Moderator: Laura Lorenzoni - NASA
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 0.96/0.97)

Session: A.02.02 Terrestrial and Freshwater Biodiversity - PART 3

Preserving the integrity and health of natural ecosystems, and the biodiversity they host is crucial not only for the vital services they provide to sustain human well-being, but also because natural ecosystems with a high degree of integrity and diversity tend to exhibit elevated levels of productivity and resilience. The importance of safeguarding biodiversity is increasingly recognised in many Multilateral Environmental Agreements (MEAs) which all place great emphasis on the sustainable management, restoration and protection of natural ecosystems.

The pivotal role of ecosystems in maintaining ecological balance and supporting human well-being is a unifying theme in MEAs. Noting that, despite ongoing efforts, biodiversity is deteriorating worldwide and that this decline is projected to continue under business-as-usual scenarios, the Parties to the Convention on Biological Diversity (CBD) adopted the Kunming-Montreal Global Biodiversity Framework (GBF) at the 15th Conference of the Parties (COP15) in December 2022. The GBF represents the most ambitious and transformative agenda to halt biodiversity loss by 2030 and allow for the recovery of natural ecosystems, ensuring that by 2050 all the world’s ecosystems are restored, resilient, and adequately protected. In Europe, the EU Biodiversity Strategy for 2030 aims to put Europe’s biodiversity on the path to recovery by 2030 by addressing the main drivers of biodiversity loss.

The emergence of government-funded satellite missions with open and free data policies and long-term continuity of observations, such as the Sentinel missions of the European Copernicus Program and the US Landsat programme, offers an unprecedented ensemble of satellite observations which, together with very high resolution sensors from commercial vendors, in-situ monitoring systems and fieldwork, enables the development of satellite-based biodiversity monitoring systems. The combined use of different sensors opens pathways for a more effective and comprehensive use of Earth Observations in the functional and structural characterisation of ecosystems and their components (including species and genetic diversity).

In this series of biodiversity sessions, we will present and discuss the recent scientific advances in the development of EO applications for the monitoring of the status of and changes to terrestrial and freshwater ecosystems, and their relevance for biodiversity monitoring, and ecosystem restoration and conservation. The development of RS-enabled Essential Biodiversity Variables (EBVs) for standardised global and European biodiversity assessment will also be addressed.

A separate LPS25 session on "Marine Ecosystems" is also organised under the Theme “1. Earth Science Frontiers - 08 Ocean, Including Marine Biodiversity”.

Topics of interest mainly include (but are not limited to):
  • Characterisation of the change patterns in terrestrial and freshwater biodiversity.
  • Integration of field and/or modeled data with remote sensing to better characterize, detect changes to, and/or predict future biodiversity in dynamic and disturbed environments on land and in the water.
  • Use of Earth Observation for the characterisation of ecosystem functional and structural diversity, including the retrieval of ecosystem functional traits (e.g., physiological traits describing the biochemical properties of vegetation) and morphological traits related to structural diversity.
  • Sensing ecosystem function at diel scale (e.g. using geostationary satellites and exploiting multiple individual overpasses in a day from low Earth orbiters and/or paired instruments, complemented by subdaily ground-based observations).
  • Assessment of the impacts of the main drivers of change (i.e., land use change, pollution, climate change, invasive alien species and exploitation of natural resources) on terrestrial and freshwater ecosystems and the biodiversity they host.
  • Understanding of climate-biodiversity interactions, including the impact of climate change on biodiversity and the capacity of species to adapt.
  • Understanding of the evolutionary changes of biodiversity and better predictive capabilities on biodiversity trajectories.
  • Understanding of the ecological processes of ecosystem degradation and restoration.
  • Multi-sensor approaches to biodiversity monitoring (e.g. multi-sensor retrievals of ecosystem structural and functional traits).
  • Validation of biodiversity-relevant EO products (with uncertainty estimation).
  • Algorithm development for RS-enabled Essential Biodiversity Variables (EBVs) on terrestrial and freshwater ecosystems.
  • Linking EO with crowdsourcing information for biodiversity monitoring.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 0.96/0.97)

Presentation: Integrating biodiversity cubes into Earth Observation

Authors: Quentin Groom, Lissa Breugelmans, Rocio Beatriz Cortes Lobos, Michele Di Musciano, Maarten Trekels, Duccio Rocchini
Affiliations: Meise Botanic Garden, Alma Mater Studiorum - University of Bologna, Department of Biological, Geological and Environmental Sciences, Department of Life, Health & Environmental Science, University of L'Aquila
Effective biodiversity management and policymaking require timely, accurate, and comprehensive data on the status, trends, and threats to biodiversity. This data must be delivered in actionable formats, incorporating measures of uncertainty and projections under various scenarios. Despite global policy initiatives such as the Kunming-Montreal Global Biodiversity Framework and IPBES assessments underscoring the urgent need for improved biodiversity monitoring, significant challenges remain in integrating biodiversity data into the broader environmental observation landscape. Biodiversity data originate from diverse sources, including citizen scientists, researchers, conservation organisations, and automated technologies such as sensors, eDNA, and satellite tracking. However, these datasets often lack standardisation, hindering interoperability with remote sensing and environmental data layers. The Essential Biodiversity Variables (EBV) framework offers a structured approach to transforming raw occurrence data into robust, policy-relevant indicators. We focus particularly on occupancy, that is, the presence or absence of a taxon in a grid cell over a particular timeframe. Occupancy is included within the species populations EBV class. Although it is only weakly related to the population size of the taxon, it provides valuable information about its distribution. This distribution is strongly linked to the spatial patterns of the biotic and abiotic environment. Furthermore, occupancy data is probably the most abundant and comprehensive form of data we have on biodiversity, covering many decades and most of the terrestrial and coastal environment. Another advantage of occupancy data is that it can easily be standardised, aggregated, and harmonised with environmental variables, enabling deeper insights and improved monitoring capabilities.
This presentation explores advancements in integrating biodiversity and environmental observation data through the use of automated workflows and biodiversity occupancy cubes. By leveraging these tools, data inconsistencies can be identified and addressed, facilitating reproducible and scalable analysis aligned with FAIR principles. One of our aims is to enable collaborative, cost-effective processing, supporting the rapid transformation of primary data into usable knowledge. This is particularly relevant for rapid alert systems on biodiversity, as well as for delivering cost-effective solutions for biodiversity monitoring and policy reporting. The B-Cubed project, funded under Horizon Europe, exemplifies these principles by fostering interoperability between in situ biodiversity observations, remote sensing and other environmental datasets. Through the development of open-source workflows and tools, B-Cubed aims to democratise biodiversity data products, reducing analytical burdens and supporting global biodiversity assessments. By integrating biodiversity data into the broader environmental observation landscape, this approach facilitates informed policymaking, enabling swift responses to pressing challenges such as climate change, biological invasions, and biodiversity-related disease outbreaks. Among its objectives, the project also focuses on future biodiversity modeling. To this end, we developed the Suitability Cube, a structured, multi-dimensional array that integrates environmental data from diverse sources—such as the Copernicus Program and WorldClim—and organizes it across key ecological dimensions, including species occurrences, spatial coordinates, temporal scales, and suitability scores. This format simplifies the modeling of species distributions under current and future global change scenarios, providing crucial insights to guide conservation strategies.
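The occupancy-cube idea described above — one presence/absence value per taxon, grid cell, and timeframe — can be sketched in a few lines of pandas. The taxon names, grid-cell codes, and records below are hypothetical, not B-Cubed's actual schema:

```python
import pandas as pd

# Hypothetical occurrence records: taxon, grid cell identifier, year.
records = pd.DataFrame({
    "taxon": ["A. alba", "A. alba", "B. nigra", "B. nigra"],
    "cell":  ["10kmE401N321", "10kmE402N321", "10kmE401N321", "10kmE401N321"],
    "year":  [2020, 2021, 2020, 2020],
})

# Aggregate raw occurrences into an occupancy cube:
# one boolean (present at least once?) per (taxon, cell, year).
cube = (records.assign(present=True)
               .groupby(["taxon", "cell", "year"])["present"]
               .any())

print(cube.loc[("A. alba", "10kmE401N321", 2020)])  # True
```

Collapsing duplicate records with `any()` per (taxon, cell, year) is the kind of standardisation step that makes heterogeneous occurrence data aggregable and joinable with gridded environmental layers.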
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 0.96/0.97)

Presentation: An Earth Observation- and Insect-based Framework for Biodiversity Intactness Reporting in Africa

Authors: Tobias Landmann, Mrs Faith Ashiono, Mr Vincent Magomere, Dr Komi Mensah Agboka
Affiliations: International Centre of Insect Physiology and Ecology (ICIPE)
We pioneer the integration of multi-sensor Earth Observation (EO) data with curated insect occurrence datasets from citizen science, GenBank, and in-house databases to monitor insect-based biodiversity intactness and ecosystem vulnerability across Africa. Insects, being the most abundant taxa, are excellent biodiversity indicators due to their sensitivity to global change drivers such as unsustainable farming practices, urbanization, and logging. Moreover, they occupy diverse micro-habitats and are present across all climate zones. Leveraging high-resolution EO and drone data, we can effectively develop fine-scale spatial indicators to map insect micro-habitats and habitat suitability over time and space. These EO-based insect diversity indicators were found to be highly suitable for assessing overall ecosystem biodiversity status (Landmann et al., 2023). The UN Convention on Biological Diversity (CBD) emphasizes the need for scalable, unbiased biodiversity indicators that integrate drivers of biodiversity loss, planetary boundaries, and ecosystem service assessments. Similarly, the Kunming-Montreal Global Biodiversity Framework calls for tools to link biodiversity loss with ecosystem integrity. Despite these global efforts, wide-scale data on invertebrate biodiversity loss remains unavailable for Africa. To address this gap, we collated comprehensive datasets on Lepidoptera (butterflies and moths; n = 18,300), Odonata (dragonflies; n = 12,300), and Coleoptera (beetles; n = 15,332). Predictor variables included spectral indices from 10–20 m Sentinel-2 imagery, 25 m canopy height data from GEDI (Global Ecosystem Dynamics Investigation), and 4 km climate data from TerraClimate. Using a regression boosting model, we predicted insect diversity (iD) patterns for each order and across all orders (scaled from 0 to 1). The iD predictions were compared with potential pre-human impact diversity patterns from biome distribution models (Hengl et al., 2018).
Herein, the potential (or prehuman) insect diversity (p) values for major habitat types were estimated as follows: Tropical and coastal forests (p = 1.0), wetlands (p = 0.9), savanna (p = 0.8), shrublands (p = 0.7), grasslands (p = 0.6), and deserts (p = 0.5). Diversity model accuracies exceeded 0.86, and the resultant insect-based intactness maps (current insect diversity divided by prehuman insect diversity) correlated significantly (p < 0.05) with global forest intactness products. For example, mean biodiversity intactness in Namibia was 75%, indicating a 25% decline in native insect abundance compared to the pre-human period. In Senegal, intactness was lower, at 37%. This new insect-based biodiversity intactness product offers a valuable tool for national biodiversity conservation programs and ecosystem restoration initiatives. It can support biodiversity status reporting, prioritize restoration efforts, and inform actions to maintain ecosystem services such as pollination. Efforts are underway to facilitate policy uptake of the results using policy endowment and biodiversity focal points in individual African countries. References: Hengl T, Walsh MG, Sanderman J, Wheeler I, Harrison SP, Prentice IC. 2018. Global mapping of potential natural vegetation: an assessment of machine learning algorithms for estimating land potential. PeerJ 6:e5457 https://doi.org/10.7717/peerj.5457 Landmann, T., Schmitt, M., Ekim, B., Villinger, J., Ashiono, F., Habel, J. C., & Tonnang, H. E. (2023). Insect diversity is a good indicator of biodiversity status in Africa. Communications Earth & Environment, 4(1), 234.
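The intactness calculation described above (modelled current diversity divided by potential pre-human diversity, using the listed p values) is simple ratio arithmetic. A sketch under those p values; the habitat labels and example iD values are illustrative assumptions:

```python
import numpy as np

# Potential (pre-human) insect diversity per habitat type, as given in the abstract.
POTENTIAL = {"tropical_forest": 1.0, "wetland": 0.9, "savanna": 0.8,
             "shrubland": 0.7, "grassland": 0.6, "desert": 0.5}

def intactness(current_iD: np.ndarray, habitat: np.ndarray) -> np.ndarray:
    """Biodiversity intactness = current modelled diversity / potential diversity,
    clipped to [0, 1] (current diversity cannot be 'more intact' than potential)."""
    potential = np.vectorize(POTENTIAL.get)(habitat)
    return np.clip(current_iD / potential, 0.0, 1.0)

# Example: a savanna cell with modelled iD 0.6 keeps 0.6/0.8 = 75% intactness.
scores = intactness(np.array([0.6, 0.27]), np.array(["savanna", "shrubland"]))
```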
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 0.96/0.97)

Presentation: Multisensor Approach for Quantifying Floral Resources in Hedgerows at Regional Scale

Authors: Julien Radoux, Léna Jeannerod, Maxime Buron, Pr Anne-Laure Jacquemart, Pr Yannick Agnan, Pr Pierre Defourny
Affiliations: Université catholique de Louvain - Earth and Life Institute
The decline of pollinators is of major concern for the resilience of several ecosystems and for sustaining food production of major crops. A large number of plant species indeed depend on these pollinators to complete their life cycle. Several traits of wild bees make them particularly efficient pollinators. Among the different factors affecting the fitness of wild bee colonies, the availability of pollen and nectar at the different stages of growth of the colonies plays a major role. In this study, the amount of pollen and nectar coming from flowering hedges is estimated by combining field observation, airborne Lidar, airborne RGB images and spaceborne optical images. Samples of pollen and nectar are collected in the field and analyzed in the lab to determine their nutritional quality. The quantity of these floral resources is measured per flower, and the number of flowers per cubic meter of hedges is estimated. In order to predict the pollen and nectar availability at the scale of the landscape, it is then necessary to use remote sensing data. First, flowering ligneous vegetation is classified within a deep learning framework on very high resolution (25 cm) RGB orthophotos. RetinaNet, MMDetection and SingleShotDetection are compared to detect flowering hedgerow species on yearly mosaics of Wallonia (approximately 16900 km²). The best method is selected based on its area under the ROC curve with a calibration dataset obtained by photointerpretation. The validation is performed on an independent dataset composed of 20 sites with field surveys and additional photointerpretation at random locations. The orthophoto mosaics used as input are, however, composed of different flights acquired at different dates from the end of winter to the middle of summer. Because the flowering period of ligneous vegetation is relatively short, the optimal dates for the species of interest are selected across 5 consecutive years of acquisition.
Second, Lidar data (50 cm resolution) is used to compute the volume of the flowering hedges and delineate them more precisely. This information is combined with the results of the field survey to determine the available resources per square meter on the ground. These values are smoothed with a moving circular average of 500 m radius, which corresponds to the average foraging distance of wild bees reported in the literature, in order to highlight the nectar and pollen resources from flowering hedgerows across the whole of Wallonia. As mentioned above, the short period during which flowering occurs hinders the detection of flowering hedges. Therefore, we also used Sentinel-2 images for the subpixel detection of the flowering period. Based on the contributive proportion of the hedges inside the pixels (based on the point spread function) and assuming that surrounding pixels of the same land cover are homogeneous, it becomes possible to highlight when hedges are flowering or when they are “green”. The high temporal resolution of Sentinel-2 can then be used to estimate the duration of the resource availability. Unfortunately, this is only possible in cloud-free years, so we had to assume that there is no change in the hedgerows over a period of 5 years. The cumulated uncertainty is assessed from protein content to the spatio-temporal extent of the hedges, highlighting the diverse sources of improvement. Nevertheless, estimates at landscape level demonstrate the major role of indigenous hedge species in sustaining wild bee colonies.
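The moving circular average used above can be sketched as averaging each pixel over a disc-shaped window. This is a generic NumPy illustration, not the authors' code; `radius_px` would be the 500 m foraging radius divided by the raster's cell size:

```python
import numpy as np

def circular_mean(raster: np.ndarray, radius_px: int) -> np.ndarray:
    """Mean of each pixel's circular neighbourhood of radius `radius_px`.
    Edges are handled by NaN-padding, so border windows simply shrink."""
    h, w = raster.shape
    yy, xx = np.ogrid[-radius_px:radius_px + 1, -radius_px:radius_px + 1]
    mask = yy**2 + xx**2 <= radius_px**2          # disc-shaped footprint
    padded = np.pad(raster.astype(float), radius_px, constant_values=np.nan)
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 2 * radius_px + 1, j:j + 2 * radius_px + 1]
            out[i, j] = np.nanmean(win[mask])     # ignore the NaN padding
    return out

smoothed = circular_mean(np.ones((6, 6)), 2)      # constant input stays constant
```

For a 50 cm resource raster, a 500 m radius means `radius_px = 1000`, so a real implementation would replace this brute-force loop with an FFT-based or otherwise optimised convolution.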
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 0.96/0.97)

Presentation: Capabilities and Limitations of Sentinel-2 for Monitoring Invasive Plants: Ragweed (Ambrosia artemisiifolia) and False Indigo Bush (Amorpha fruticosa) Case Study

Authors: Ivan Tekić, Mr Branimir Radun, Ms Nela Jantol, Ms Ivona Žiža, Mr Ivan Tomljenović, Mr Vladimir Kušan
Affiliations: Oikon Ltd - Institute for Applied Ecology
The application of satellite-based remote sensing for detecting and monitoring invasive alien species (IAS) in Croatia remains largely underutilized. Current methods rely heavily on labor-intensive and costly field surveys, which can be inefficient and challenging over large areas. This study evaluates the capabilities and limitations of Sentinel-2 imagery for identifying, monitoring, and quantifying plant IAS, focusing on two problematic species—common ragweed (Ambrosia artemisiifolia) and false indigo bush (Amorpha fruticosa). Amorpha fruticosa, a perennial shrub that invades open flood-prone habitats, forms dense monocultures that suppress the growth of native vegetation and pose a significant biodiversity threat. These dense stands make it a suitable candidate for detection using satellite imagery, as they often cover large areas. However, the optimal flowering phase, which provides the strongest spectral signature for differentiation, is frequently obscured by cloud cover during early summer. In addition, the flood-prone nature of its habitat and forestry activities introduce environmental variability that must be accounted for when performing detection through time series. To overcome these challenges, late-summer Sentinel-2 imagery was utilized when conditions were more stable, allowing for clear data collection. The detection model focused on separating young and mature stands of A. fruticosa from co-occurring tree species such as oak (Quercus robur) and ash (Fraxinus angustifolia), which often intermingle with the shrub in forest clearings. Red-edge and near-infrared (NIR) indices, sensitive to chlorophyll content, enabled high differentiation between A. fruticosa and the surrounding vegetation. Dense monocultures were readily identified, and the model also performed well in more complex environments where intermixing with grass and young oak or ash trees occurred. The model achieved over 90% accuracy in distinguishing young and mature stands of A. 
fruticosa, emphasizing Sentinel-2’s capability to detect chlorophyll-rich vegetation effectively. Ambrosia artemisiifolia, an annual plant and the leading cause of allergic rhinitis in Croatia, primarily invades agricultural areas, fallow lands, and field edges. Unlike A. fruticosa, it does not form large, contiguous patches, often growing in narrow strips along field margins or roadsides. Sentinel-2’s spatial resolution of 10 meters poses a significant limitation for detecting these narrow, linear growth patterns. A time-series approach was employed to address the limitations of single-date imagery, leveraging Sentinel-2 images from August to November to capture the phenological changes of A. artemisiifolia. During August, the plant retains high water content, enabling differentiation from maturing crops. By October and November, its withered stems were distinguished from surrounding healthy vegetation. The model incorporated indices such as the Normalized Difference Infrared Index (NDII), Normalized Burn Ratio (NBR), and Red Edge Simple Index (REDSI), which utilized red-edge, NIR, and shortwave infrared (SWIR) bands to track changes in water and chlorophyll content. This approach successfully identified larger ragweed clusters within agricultural fields, achieving over 90% accuracy in heavily invaded areas. However, the model struggled to detect ragweed in urban environments, along narrow field margins, and in sparsely covered areas. This study demonstrates Sentinel-2’s potential for detecting both annual and perennial invasive species. While indices derived from red-edge, NIR, and SWIR bands show strong potential for distinguishing A. fruticosa and A. artemisiifolia from other vegetation, challenges such as cloud cover, spatial resolution, and species intermixing highlight the limitations of this approach. 
Future work should focus on expanding the temporal range of analysis, incorporating additional ground truth data, and refining models to improve performance in complex and mixed environments. The findings of this research provide a foundation for developing reliable, near-real-time services to map spatial extent, monitor trends, and inform effective management strategies for invasive species.
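Two of the indices named in this abstract, NDII and NBR, follow the standard normalized-difference form (a − b)/(a + b); REDSI has a more involved red-edge formulation and is omitted here. A sketch using Sentinel-2 band conventions (B8 = NIR, B11 = SWIR1, B12 = SWIR2), with invented example reflectances:

```python
import numpy as np

def normalized_difference(a, b):
    """Generic normalized-difference index, guarding against zero denominators."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.where(a + b == 0, 0.0, (a - b) / (a + b))

# Example surface reflectances for a single pixel (illustrative values only).
b8, b11, b12 = 0.35, 0.20, 0.10

ndii = normalized_difference(b8, b11)  # NDII = (NIR - SWIR1) / (NIR + SWIR1)
nbr = normalized_difference(b8, b12)   # NBR  = (NIR - SWIR2) / (NIR + SWIR2)
```

Both indices fall as leaf water content drops, which is why tracking them from August to November helps separate withering ragweed from still-healthy surrounding vegetation.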
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 0.96/0.97)

Presentation: Novel Applications of Wildlife Population Estimation Methods to Satellite Imagery

Authors: Rebecca Wilks, Stuart King, Professor Ruth King, Dr Niall McCann, Dr Michael Chase, Dr David Williams, Dr Murray Collins
Affiliations: University Of Edinburgh, National Park Rescue, Elephants Without Borders, University of Leeds, Space Intelligence
Wildlife population estimation methods are a key branch of statistical ecology, enabling statistically rigorous estimates of population counts (abundance)[1]. These methods have a long history of being applied to data commonly collected in a conservation context such as camera trap images, acoustic surveys and transect flight surveys. We investigate two use cases involving novel ideas developed from abundance methods in statistical ecology, applied to satellite imagery of wildlife. Firstly, we present a framework for abundance estimation via satellite surveys of large wildlife in large-scale heterogeneous landscapes. Traditionally, wildlife surveys are undertaken using a time-consuming aerial process underpinned by distance sampling techniques, and therefore satellites which easily image huge areas are attractive for consideration, since they may represent a cost/effort saving. This is further fuelled by the recent demonstration by Duporge et al. [2] of the use of CNN-based object detection to automate detection of endangered African Savannah Elephants in very high-resolution (VHR) 30 cm satellite imagery (Pléiades Neo by Airbus, WorldView by Maxar). However, such satellite detections alone are not sufficient to provide robust abundance estimates. By design, wildlife detections from a point-in-time satellite image differ from detections from an aircraft moving through the landscape, and this change of observation method must be accounted for in the abundance estimation framework. Although satellites have already been used to detect populations of several species including Emperor Penguins [3], Polar Bears [4], and Wildebeest [5], these focus on groups usually found out in open terrain, meaning counts can be treated as a total count.
We therefore provide the first theoretical framework for an end-to-end satellite abundance survey of large wildlife over large heterogeneous areas, which accounts for survey design (stratification), imperfect automated object detection, and partial obstruction of wildlife to the satellite (availability). Secondly, we investigate whether Capture-Recapture (CR) abundance methods can be used to obtain confidence bound counts for object detection in satellite imagery. Object detection algorithms have been applied to detect various features within satellite imagery such as cars[6] and ships[7], yet whilst object detectors are powerful, they are also prone to false negatives (missed objects) and false positives (wrong objects). They do not provide rigorous confidence bounds on these quantities, and so raw detection counts are traditionally corrected using ad-hoc methods, such as using precision and recall rates calculated on a test set. CR is an extremely common method within statistical ecology for estimating total population sizes. It consists of capturing and marking a sample of individuals from a population, then recapturing a new sample at a second observation time, and noting which marked individuals were re-captured. Traditionally, this requires physically marking individuals in-the-field (e.g. leg rings for birds [8]), however recently individuals for some species have been identified in imagery purely by their distinctive markings, for example Manta Rays [9]. CR methods then enable a confidence bound estimation of total counts, rigorously accounting for false negatives (missed animals) which are an inevitability when surveying wild animal populations. We take the CR principles of multiple observation occasions, and apply this in a novel way to object detection of generic objects in a single image. 
Using different object detection algorithms as individual observers, we draw links with ensemble modeling and investigate whether an extended CR methodology can be applied to generate confidence bounds for objects in object detection. References [1] Ruth King and Rachel McCrea. “Chapter 2 - Capture–Recapture Methods and Models: Estimating Population Size”. In: Handbook of Statistics. Ed. by Arni S. R. Srinivasa Rao and C. R. Rao. Vol. 40. Integrated Population Biology and Modeling, Part B. Elsevier, Jan. 1, 2019, pp. 33–83. doi: 10.1016/bs.host.2018.09.006. [2] Isla Duporge et al. “Using very-high-resolution satellite imagery and deep learning to detect and count African elephants in heterogeneous landscapes”. In: Remote Sensing in Ecology and Conservation 7.3 (Sept. 1, 2021). Publisher: John Wiley & Sons, Ltd, pp. 369–381. issn: 2056-3485. doi: 10.1002/rse2.195. [3] Peter T. Fretwell et al. “An Emperor Penguin Population Estimate: The First Global, Synoptic Survey of a Species from Space”. In: PLOS ONE 7.4 (Apr. 13, 2012). Publisher: Public Library of Science, e33751. issn: 1932-6203. doi: 10.1371/journal.pone.0033751. [4] Seth Stapleton et al. “Polar Bears from Space: Assessing Satellite Imagery as a Tool to Track Arctic Wildlife”. In: PLoS One 9.7 (July 2014). Num Pages: e101513 Place: San Francisco, United States Publisher: Public Library of Science Section: Research Article, e101513. doi: 10.1371/journal.pone.0101513. [5] Zijing Wu et al. “Deep learning enables satellite-based monitoring of large populations of terrestrial mammals across heterogeneous landscape”. In: Nature Communications 14.1 (May 27, 2023). Number: 1 Publisher: Nature Publishing Group, p. 3072. issn: 2041-1723. doi: 10.1038/s41467-023-38901-y. [6] Sébastien Drouyer. “VehSat: a Large-Scale Dataset for Vehicle Detection in Satellite Images”. In: IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium. 
IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium. ISSN: 2153-7003. Sept. 2020, pp. 268–271. doi: 10.1109/IGARSS39084.2020.9323289. [7] Z. Hong et al., "Multi-Scale Ship Detection From SAR and Optical Imagery Via A More Accurate YOLOv3," in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 6083-6101, 2021, doi: 10.1109/JSTARS.2021.3087555. [8] Cleminson, A. & Nebel, S. (2012). “Bird Banding”. Nature Education Knowledge 3(8):1. [9] Edy Setyawan et al. “Population estimates of photo-identified individuals using a modified POPAN model reveal that Raja Ampat’s reef manta rays are thriving”. In: Frontiers in Marine Science 9 (Nov. 15, 2022). Publisher: Frontiers. issn: 2296-7745. doi: 10.3389/fmars.2022.1014791.
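The two-occasion capture-recapture idea discussed in this abstract can be illustrated with Chapman's bias-corrected version of the Lincoln-Petersen estimator — a standard CR formula; treating two object detectors as the two "capture" occasions is this abstract's novel framing, and the numbers below are invented:

```python
def chapman_estimate(n1: int, n2: int, m: int) -> float:
    """Chapman's bias-corrected Lincoln-Petersen estimator of total count:
    n1 = objects found by detector 1, n2 = objects found by detector 2,
    m  = objects found by both (the 'recaptures').
    N_hat = (n1 + 1)(n2 + 1) / (m + 1) - 1."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Two detectors find 80 and 70 objects in the same image, 56 in common:
estimate = chapman_estimate(80, 70, 56)  # ≈ 99.9, i.e. roughly 100 objects
```

The fewer objects the two detectors share relative to their individual totals, the larger the implied number of missed objects — which is exactly how the method converts disagreement between detectors into a correction for false negatives.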
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 0.96/0.97)

Presentation: Uncertainties in Remote Sensing of Biodiversity: Definitions, Sources and Methods

Authors: Christian Rossi, Andreas Hueni, Tiziana L. Koch, Maria J. Santos
Affiliations: University of Zurich, Swiss National Park
Recent advances in remote sensing of biodiversity and biodiversity-related products have significantly enhanced our capacity to monitor and understand biodiversity. Typical remote sensing products directly related to biodiversity are spectral features and plant traits, and their diversity in space, i.e., spectral diversity and functional diversity. Hence, remote sensing of biodiversity involves measuring biophysical quantities from signals recorded by a sensor in response to radiation reflected from the Earth’s surface. As with any other measurement, the biodiversity quantities measured via remote sensing are inherently uncertain. From the digital numbers recorded by the detector, through the processing to obtain surface reflectance products, to the final biodiversity output, various sources of uncertainty can arise. For example, uncertainties related to the sensor and preprocessing of remote sensing data can account for as much as 10% in the near- and shortwave infrared regions, where there is less solar radiation and thus inherently lower signal-to-noise ratios. After applying atmospheric corrections, spectral regions highly sensitive to water vapor can display uncertainties of up to 20%, which can increase through the process that leads to the derivation of biodiversity products. Failing to account for such uncertainties may lead to over- or underestimates of diversity, with downstream repercussions on management strategies and policy making. Nevertheless, uncertainties are rarely quantified in remotely sensed biodiversity products, limiting our understanding of biodiversity processes and their detection. Sparse quantification of uncertainties is further exacerbated by the confusion arising from the inconsistent and improper use of uncertainty terms. Here, we clarify the concept of uncertainty by defining what it is and what it is not, outline its typologies, and introduce metrological principles in the remote sensing of biodiversity.
We highlight sources of uncertainty and provide examples of uncertainty estimation and propagation in remotely sensed biodiversity products. Finally, we discuss the critical need for product uncertainty requirements and reliable reference measurements. In particular, uncertainties are needed to compare and consolidate different quantities being measured and support the evaluation of product conformity. Providing uncertainties is essential for effectively and consistently communicating the strengths and limitations of remote sensing products.
Add to Google Calendar

Tuesday 24 June 16:15 - 17:45 (Room 0.11/0.12)

Session: A.01.10 Copernicus Sentinel-5P 7.5 Years in Orbit: Mission Status

Copernicus Sentinel-5P celebrated its seventh launch anniversary on 13 October 2024. The aim of this session is to inform the community about the excellent in-orbit performance of this mission, its expected lifetime, details about data acquisition, processing, operational product QA monitoring and data dissemination, and several scientific highlights achieved so far.

Session Schedule:


In Orbit Functional Performance and Lifetime Evaluation of the Sentinel-5P mission


  • K. Symonds - ESA

Sentinel-5P Mission Operations - a Success Story


  • D. Mesples - ESA

Global Atmospheric Composition Changes Observed by TROPOMI on Sentinel-5 Precursor


  • P. Veefkind - KNMI

TROPOMI Sentinel-5P SWIR Highlights, in Relation to Policy and Action


  • I. Aben - SRON

New Era of Air Quality Monitoring over Europe: Combining Daily Sentinel-5 Precursor and Hourly Sentinel-4 Observations


  • D. Loyola - DLR

Advances in Sentinel-5 Precursor Air Quality Data Products and their Validation


  • M. van Roozendael - BIRA/IASB
Add to Google Calendar

Tuesday 24 June 16:30 - 16:50 (EO Arena)

Demo: A.01.16 DEMO - How to add your own forward model in the GRASP version 2.0.0 retrieval framework

GRASP (Generalized Retrieval of Atmosphere and Surface Properties; Dubovik et al., 2021) is a flexible tool designed to retrieve aerosol, gas and surface properties from a wide variety of sensors and combinations of them. It is a proven tool, applied to many different instrument combinations, for example: active and passive (lidar and sun photometer), spectrometers and photometers (Pandora and AERONET), and multi-angular polarimeters and hyperspectral sensors (CO2M/MAP and CO2M/CO2I, S5/UVNS and 3MI).
In the framework of the OPERA-S5 project, GRASP version 2.0.0 has been developed to transform the original code into a fully modular architecture in which every forward-model component can easily be replaced. GRASP version 2.0.0 allows the user to include in the GRASP code new radiative transfer schemes, new surface models, AI-based approaches or any other innovative modelling code. The interfaces and tools around GRASP version 2.0.0 have been designed to offer a user-friendly experience that helps scientists adapt and extend GRASP to their specific needs and new ideas.
During the tutorial session, users will get familiar with the possibilities of GRASP version 2.0.0 by following a step-by-step guide in which all participants will implement a new forward model in GRASP, including how to access the code, its inputs and outputs, and the internal interfaces. To make the session as agile as possible, the activity will be carried out on the DIVA platform (https://cloud.grasp-sas.com/). This is a Jupyter-notebook-based virtual environment, accessible from the browser, with all configuration and tools pre-installed, which users will use as the baseline for the developments.

Speakers:


  • Masahiro Momoi
  • Marcos Herreras-Giralda

Add to Google Calendar

Tuesday 24 June 16:52 - 17:12 (EO Arena)

Demo: D.03.33 DEMO - RACE Dashboard Demonstration

The RACE Dashboard is a joint initiative of ESA and EC DG-DEFIS to illustrate new indicators on economy, society and the environment, based on Earth Observation.

It is accessible at race.esa.int.

This demonstration will showcase how the RACE Dashboard integrates industrially provided indicators. The focus will be on demonstrating the novelty and innovation of the indicators, the mechanisms by which they are provided to the RACE Dashboard, and the various business models supported by the Network of Resources.
Selected examples will illustrate the high diversity of services and capabilities in European industry, including environmental monitoring, health and pollution, natural disaster management, agriculture, and many more.

The demonstration will also include elements of gamification and storytelling.
Add to Google Calendar

Tuesday 24 June 17:00 - 17:45 (ESA Agora)

Session: F.02.19 Austrian Space Cooperation Day - Earth Observation

The Austrian space community and international testimonials take a kaleidoscopic look at products and services “made in Austria”, highlighting existing cooperation and inviting future cooperation within international partner networks. With a view to the ESA Ministerial Conference in 2025, the great importance of ESA programmes for maintaining and improving Austria's excellence in space will be explained using technological and commercial success stories. In the FFG/AUSTROSPACE exhibition, Earth observation space hardware and software products manufactured in Austria are presented (next to the Agora area and ESA booth in the Main Entrance Hall).

Chairs:


  • Christian Briese - EODC & AUSTROSPACE
Add to Google Calendar

Tuesday 24 June 17:15 - 17:35 (EO Arena)

Demo: D.03.28 DEMO - Lexcube viewer: Interactive Data Cube Visualization – using Lexcube as standalone or in a Jupyter notebook

Lexcube is an open-source tool designed for interactive visualization of 3D data cubes, either as a stand-alone application or within Jupyter notebooks. It enables Earth system scientists to explore large, high-dimensional datasets efficiently. The interactive version integrates Lexcube seamlessly into Python-based workflows. By leveraging chunked data access, caching, and LZ4 compression, Lexcube ensures real-time interaction even with large-scale datasets.

A key component of the tool is its interactive 3D visualization capabilities, allowing users to explore, manipulate, and extract insights from data cubes. Participants will learn to navigate core functionalities, including dynamic selection of spatial and temporal subsets, customizable colour maps, and exporting visualizations and sub-cubes for further analysis. Unlike traditional 2D visualization tools, Lexcube enables intuitive inspection of complex, multidimensional data for model evaluation, anomaly detection, and scientific discovery. By attending this session, participants will gain hands-on experience with Lexcube and Lexcube for Jupyter, learning how to apply it to their research while exploring its latest features and developments.

Speaker:


  • Maximilian Söchting - Leipzig University
Add to Google Calendar

Tuesday 24 June 17:37 - 17:57 (EO Arena)

Demo: D.04.26 DEMO - Accessing Copernicus Contributing Missions, Copernicus Services and other complementary data using CDSE APIs: OData, STAC, S3, OGC, openEO


Copernicus Data Space Ecosystem offers a wide portfolio of data sets complementary to the “core” Sentinel products. Their characteristics may differ from those of the Sentinel data sets, and some of them may not be available through all of the CDSE APIs. The aim of this demonstration session is to facilitate usage of the complementary datasets in the CDSE platform by explaining the main differences between them and Sentinel data, based on selected data access scenarios. Code snippets in the CDSE JupyterLab will be provided to allow CDSE users to utilize them in their own applications.
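As a minimal illustration of the kind of data-access snippet covered in such sessions, the sketch below builds an OData query URL for the CDSE catalogue using only the standard library. The collection name and date window are invented example values, and the exact filter attributes should be checked against the CDSE API documentation:

```python
from urllib.parse import urlencode

# Public OData endpoint of the Copernicus Data Space Ecosystem catalogue.
ODATA_URL = "https://catalogue.dataspace.copernicus.eu/odata/v1/Products"

def build_odata_query(collection: str, start: str, end: str, top: int = 10) -> str:
    """Assemble an OData query URL filtering by collection and sensing window.

    The attribute names below follow the common CDSE OData pattern; verify
    them against the official documentation for each data set.
    """
    flt = (
        f"Collection/Name eq '{collection}' "
        f"and ContentDate/Start gt {start} "
        f"and ContentDate/Start lt {end}"
    )
    # urlencode takes care of escaping spaces, quotes and slashes.
    return f"{ODATA_URL}?{urlencode({'$filter': flt, '$top': top})}"

url = build_odata_query(
    "SENTINEL-2", "2024-05-01T00:00:00.000Z", "2024-05-03T00:00:00.000Z"
)
print(url)
```

The resulting URL can then be fetched with any HTTP client (e.g. `requests`) to receive a JSON product listing.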

Speaker:


  • Jan Musiał - CloudFerro
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: F.01.03 - POSTER - Trends in Earth Observation Education and Capacity Building: Embracing Emerging Technologies and Open Innovations

Education activities in recent years have undergone a significant transformation related to the global digitalization of education and training. Traditional teaching methods, like face-to-face trainings provided to small groups of students, are being complemented or even replaced by massive open on-line courses (MOOCs) with hundreds of participants following the course at their own pace. At the same time, the Earth observation sector continues to grow at a high rate; in Europe, the European Association of Remote Sensing Companies (EARSC) reported in 2023 that the sector grew by 7.5% in the past 5 years.
This session will cover new trends in modern education in the Space and EO domains as well as methods, use cases, and opportunities to cultivate Earth observation literacy in diverse sectors, such as agriculture, urban planning, public health, and more. It will focus on new methods and tools used in EO education and capacity building, such as: EO data processing in the cloud, processing platforms and virtual labs, dashboards, new and innovative technologies, challenges, hackathons, and showcase examples which make successful use of EO data. Participants will also have the opportunity to share and discuss methods for effective workforce development beyond typical training or education systems.
Based on the experience of Space Agencies, international organisations, tertiary lecturers, school teachers, universities and companies working in the domain of space education, this session will be an opportunity to exchange ideas and lessons learnt, discuss future opportunities and challenges that digital transformation of education has brought, consolidate recommendations for future education and capacity building activities, and explore opportunities to further collaborate, build EO literacy in new users outside of the Earth and space science sector and expand the impact of EO across sectors.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Advanced Environmental Assessment: Integrating Satellite and IoT Data

Authors: Valeria Pia Vevoto, Massimiliano Ferrante, Francesco Mauro, Silvia Liberata
Affiliations: University of Sannio, European Space Agency (ESA)
This study investigates the integration of electronics, Internet of Things (IoT) sensors, Earth Observation data, software development, and machine learning (ML) algorithms to enhance environmental monitoring. Regarding the IoT sensors, an IoT device called AQP (Air Quality Platform) has been developed. The AQP is an assembly kit (hardware and software) capable of sensing local ambient parameters and sending data to an ESA central server. It was designed and built internally at the ESA ESRIN EO Laboratory for the ESA Living Planet Symposium 2019 School Laboratory, for educational purposes. Currently, the AQP project consists of about 200 AQP platforms spread across Europe (https://aqp.eo.esa.int/map/). The primary objective of this activity is to assess the quality of AQP data and explore correlations among AQP, ARPA, and Sentinel-5P measurements. An electronic interface was developed alongside software implemented in Python and C to facilitate data transmission to the ESA web server. We evaluate the visualization and management of in-situ data collected from the ARPA Lazio website and the AQP, as well as remote data from Sentinel-5P. Additionally, we apply ML techniques, specifically the CatBoost algorithm, to analyse the correlation between nitrogen dioxide levels detected by ARPA and Sentinel-5P in the Rome area during 2024. The results underscore the potential of combining these methodologies for improved accuracy in environmental assessments. Human and technological progress has responded to this problem with concrete actions, and should not overlook the challenges that climate change poses. As a result, this study aims to present the analysis of remote and in-situ data both through graphs and through the use of ML techniques, also proposing future goals for the improvement of both data acquisition techniques and tools, so that environmental pollution can be monitored over the years.
After a brief description of the Copernicus space missions and the Sentinel-5P satellite, and of the Italian national environmental protection system ARPA, the focus is on describing the architecture and purpose of the ESA Air Quality Platform (AQP). The core of the work is not only showing the data collected by the ground stations (ARPA Lazio and AQP) and the Sentinel-5P satellite, but also the correlation between them. To enable a more accurate estimation of in-situ values from these parameters, the CatBoost ML algorithm was used. Finally, conclusions and proposals for future activities are presented. The data acquisition process has been divided into two parts: the first involves the collection of in-situ data (from AQP IoT sensors and ARPA Lazio), while the second focuses on the acquisition of remote sensing data (from ESA Earth Observation data, specifically Sentinel-5P). The in-situ data was obtained from the official websites of ARPA Lazio and the ESA web server for the AQP. ARPA Lazio is responsible for the institutional task of monitoring the environmental situation across the region, with 54 stations distributed throughout. Each station is assigned an identification number and can monitor a variety of pollutants, depending on its location. To facilitate comparison, data from the stations located in the Rome agglomeration were selected and compared with the in-situ data from AQP devices in the same area. For each pollutant, data was downloaded in .txt format covering the period from 1999 to 2023. Using a Python script, the data was transformed into a DataFrame, and a merge operation was performed to combine it into a single database. The merged data was then converted into .csv format. This process was repeated for pollutants common to both AQP and Sentinel-5P data (NO2, SO2, CO, O3), as well as for PM10, a harmful pollutant present in high concentrations in the atmosphere.
After processing the .csv files for each pollutant, the correlation between the various ARPA stations was calculated, revealing a strong correlation. To explore the relationship between in-situ data and remote Sentinel-5P data for the ARPA stations, a virtual reference station was created, termed the "reference station." Regarding the AQP project, real-time and historical data can be downloaded from the official ESA website. By selecting the desired platform, users can choose the start and end dates and download the data in CSV format, which includes the station number, location, and pollutant values. After establishing the AQP reference station, data for the period from April to May 2024 was downloaded, focusing specifically on the newly installed sensors: CH4, O3, and HCHO. The data is provided every 60 seconds, with daily, monthly, and annual averages available for the selected parameters. As concerns EO data, the EO Browser is the tool used to download data from Copernicus satellites, specifically Sentinel-5P data in our case. The EO Browser allows users to instantly visualize satellite data or download it based on their preferred configuration. For Sentinel-5P data, users can navigate to the area of interest, select pollutants from a list (AER AI, CH4, CO, HCHO, NO2, O3, SO2), and choose the desired time range. After inspecting the data in the browser, the user can download statistical information in a CSV file. By analyzing the data from both in-situ stations (AQP and ARPA) and remote sensing (Sentinel-5P), correlations between the datasets can be determined. This activity led to the successful acquisition of data, resulting in important technical conclusions. The AQP project has proven to be a valuable tool for citizen science applications. Notably, there was a strong correlation—greater than 0.90—between the ARPA and AQP data for digital temperature, humidity, pressure, and particulate sensors.
A good correlation (approximately 0.7) was also observed for the analog sensors, particularly for carbon monoxide, ozone and nitrogen dioxide. In terms of time sampling, the AQP, which acquires data at one-minute intervals, performs better than ARPA, which collects data once a day. It is important to emphasize that while the AQP can record all statistical values within a day (maximum, minimum, average, standard deviation, and trends), ARPA Lazio provides only a single data point, limiting the ability to understand the dynamics of events. However, one challenge encountered was the difficulty in establishing a valid correlation with methane, as the Sentinel-5P satellite did not acquire a significant amount of data during the March-April 2024 period. Despite this, the research has laid the groundwork for future work that could improve upon the current data in just a few months. Future efforts could include installing new sensors on the reference AQP, calibrating existing sensors by interpolating current conversion curves with those from professional instruments, optimizing the ARPA reference, and using Sentinel-5P Level 3 products for improved correlation. The work completed thus far not only represents a concrete achievement but also serves as a solid foundation for the future. It has highlighted the importance of understanding the air quality we breathe and marks the beginning of a journey full of opportunities.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Earth Observation in the Framework of COSPAR Capacity Building

Authors: Jérôme Benveniste, Dr Carlos Gabriel
Affiliations: COSPAR
Scientific data, preserved in public archives and made freely accessible to researchers around the world, serve as a critical foundation for the capacity-building initiatives organized by the Committee on Space Research (COSPAR). Since its inception, COSPAR has actively sought to democratize space science by ensuring that developing countries have the opportunity to participate in and benefit from global scientific advancements. The COSPAR Capacity Building (CB) initiative plays a central role in this mission, aiming to enhance the scientific and technical capabilities of emerging countries by providing training, access to resources, and fostering international collaborations. One of the most significant ways COSPAR achieves this goal is through its organization of regional workshops across a wide range of space science disciplines. These workshops are designed to equip postgraduate students, young researchers, and emerging scientists with the necessary tools, skills, and knowledge to conduct high-quality scientific research, despite limited access to resources. COSPAR’s CB workshops, which typically last two weeks, provide hands-on training in space science, with an emphasis on using publicly available data and open-source tools. The goal is to ensure that participants, even with minimal resources, can conduct advanced research using basic computer equipment and internet access. By teaching participants how to analyze and interpret space science data, COSPAR enables them to continue their work independently, fostering sustainable research and development in their home countries. One of the focuses of COSPAR’s Capacity Building initiative is Earth Observation (EO) from Space. EO data plays a crucial role in addressing global challenges such as environmental monitoring, climate change, and disaster management. COSPAR’s workshops train participants in processing and analyzing EO data to track changes in the Earth’s surface, atmosphere, and oceans.
This enables scientists in developing countries to apply EO data to local issues. In addition, participants gain insight into the broader scientific, technological, and policy implications of using EO data for global decision-making. The COSPAR CB workshops have been held in various emerging countries, contributing to the creation of a global network of scientists skilled in EO data analysis. The focus on collaboration ensures that the knowledge gained extends beyond individual projects, contributing to global efforts to address research and development challenges. In addition to the workshops, COSPAR offers a Capacity Building Fellowship Program. This program supports former workshop participants by providing additional opportunities for research, collaboration, and professional development. Fellows are encouraged to continue their work and collaborate with international experts, ensuring that the skills and knowledge gained during the workshops have a lasting impact. The fellowship program also strengthens the global network of young space scientists, fostering ongoing collaboration and exchange of ideas. COSPAR’s recent Small Satellite Program further strengthens its capacity-building efforts. This initiative helps universities and research institutions in developing countries establish satellite laboratories, enabling them to design, build, and launch small satellites. These satellites provide a cost-effective means for engaging with space science, offering hands-on experience in satellite technology and data collection. The program encourages self-reliance and innovation, as well as international collaboration, helping developing countries build local expertise in satellite-based research. Looking ahead, COSPAR plans to expand its Capacity Building initiatives, with a continued focus on Earth Observation. 
Future workshops and summer schools will explore new applications of EO data in areas such as oceanography, the cryosphere, atmospheric sciences, land hydrology and the water cycle, agriculture, urban planning, and extreme events. COSPAR’s Capacity Building initiative has been instrumental in strengthening space science in developing countries. Through its workshops, fellowships, and programs like the Small Satellite Initiative, COSPAR empowers young scientists to contribute to global space science, addressing critical environmental and sustainability issues while building lasting research capacity. The COSPAR Capacity Building initiative at large will be detailed with specific examples in Earth Observation and prospects for future co-sponsored Workshops and Summer Schools.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Progress made and future steps of the HYPERedu learning initiative

Authors: Theodora Angelopoulou, Arlena Brosinsky, Akpona Okujeni, Saskia Foerster, Katrin Koch, Vera Krieger, Sabine Chabrillat
Affiliations: GFZ, German Research Centre for Geosciences, Helmholtz Centre, German Environment Agency (UBA), German Weather Service (DWD), DLR German Aerospace Center, Space Agency, Leibniz University Hannover, Institute of soil science
The increasing availability of imaging spectroscopy data from sources such as EnMAP, PRISMA, DESIS, EMIT, and PACE has ignited widespread interest in hyperspectral data analysis across various fields. However, there is a notable shortage of accessible training courses and educational resources. To address this gap, HYPERedu was established in 2019 as part of the EnMAP science program, offering online learning for hyperspectral remote sensing. HYPERedu provides comprehensive, free learning materials tailored for students, researchers, and professionals in academia, industry, and public institutions, from Master's level upwards. These resources include annotated slide collections and hands-on tutorials using the EnMAP-Box software, available in PDF and video formats. Continuously expanding to meet diverse learning needs, these materials are increasingly integrated into university curricula, professional training programs, and self-directed learning paths. HYPERedu has developed a series of Massive Open Online Courses (MOOCs). The first MOOC, "Beyond the Visible: Introduction to Hyperspectral Remote Sensing", launched in November 2021, covers fundamental principles of imaging spectroscopy, sensor technologies, data acquisition techniques, and software tools; since 2024, the course has also been available in German. Designed for flexible, self-paced learning, the course requires 5–8 hours to complete, with participants earning a certificate and a diploma supplement upon completion. Subsequent MOOCs have focused on agricultural applications (2022), EnMAP data access and preprocessing techniques (2023), and soil applications (2024). Upcoming MOOCs will explore topics such as forestry, geology, and inland and coastal waters. All HYPERedu resources are hosted on the EO College platform, a hub for Earth Observation education, and are freely accessible under a CC-BY license.
The MOOCs are available both as interactive online courses and downloadable offline documents (PDF format), allowing participants to engage with the material even without a stable internet connection.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Geospatial Intelligence for Sustainable Futures: Smart Data and AI Applications in Geographic Education

Authors: Lars Tum, Torben Dedring, Jun.Prof. Dr. Andreas Rienow
Affiliations: Ruhr University Bochum
In an era of rapid urbanisation and digital transformation, harnessing the power of geospatial data and artificial intelligence (AI) is essential to addressing global challenges such as sustainable urban planning, climate resilience, and environmental monitoring. This presentation highlights an integrated educational initiative designed to equip MSc and PhD students with essential skills in geospatial data analysis, AI-driven geographic research, and modern research data management practices. The program delves into the analysis of urban transformations through geospatial data sources such as volunteered geographic information (VGI), social media geographic information (SMGI), and Earth observation (EO) data. It explores how AI techniques, including machine learning and deep learning, can be applied to address critical geographic questions. Specific case studies include the use of Sentinel-1 data for flood detection, the application of OSMnx for road network analysis, and the implementation of convolutional neural networks (CNNs) for ship detection in radar imagery. Additionally, the program covers predictive modeling techniques for urban growth and environmental changes, such as water level predictions and housing market dynamics, showcasing AI's role in enhancing geospatial analysis for sustainability and resilience. At the core of the learning process are Python-powered Jupyter Notebooks and educational videos delivered in a Massive Open Online Course (MOOC) format. These resources provide learners with structured, interactive content, guiding them through theoretical concepts and practical applications. The videos contextualise complex topics with real-world examples, while the Jupyter Notebooks facilitate hands-on experimentation with datasets, algorithms, and neural network models. Together, these tools ensure learners can independently implement and adapt the methodologies in their own research and projects. 
These and many more educational resources are part of the NFDI4Earth project, a research initiative advancing digital transformation in Earth System Science (ESS). By following NFDI4Earth’s principles, the courses impart a simple, efficient, open, and FAIR (Findable, Accessible, Interoperable, and Reusable) approach to its learners, making ESS innovation-friendly and user-driven. Through a combination of theoretical instruction and practical exercises, participants master the processing, analysis, and visualisation of heterogeneous geospatial datasets, implement machine learning algorithms such as Random Forest and Support Vector Machines, and build simple neural networks for geographic applications. This initiative not only bridges the gap between traditional geospatial analysis and cutting-edge AI methods but also prepares students for independent, interdisciplinary research. By integrating tools, formats, and FAIR practices, this program contributes to the advancement of geospatial education and the responsible application of digital innovation in ESS.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Master's in Spatial Information Applications: Insights after 100 Graduates Across South America and Italy

Authors: Lic. Camilo Barra, PhD Elisabet Benites, Lic. Rodrigo Chani, Eng Luis Cruz, Lic Guadalupe Escalante, Lic Bernardita Rivera, Lic Manuel Zeballos, Lic Christian Escobares, Lic Julieta Motter, Lic. Silvana Palavecino, Msc Gaston Gonzales Kriegel, Eng. Veronica Schuller, PhD Santiago Seppi, Ximena Porcasi, PhD. Anabella Ferral, PhD Marcelo Scavuzzo, Phd Fernanda Garcia Ferreyra
Affiliations: Conae_ Instituto Gulich
The Master’s in Spatial Information Applications (MAIE), jointly organized by the Gulich Institute and the Faculty of Mathematics, Astronomy, Physics, and Computer Science (FAMAF), is an evolution of the Master’s in Space Applications for Early Warning and Emergency Response (MAEARTE), offered since 2009. This program has graduated over 100 professionals who actively contribute to research centers and organizations worldwide. Spanning two years, MAIE combines advanced coursework with research tutorships, immersing students in globally relevant projects. Its interdisciplinary approach encompasses areas such as agricultural and forestry resource management, meteorology and oceanography, environmental emergencies and monitoring, cartography, geological studies, and human health. Graduates are distinguished by their robust technical training, adaptability, and innovative capabilities—essential traits in the rapidly evolving field of space technologies. Located at the Gulich Institute within CONAE’s Teófilo Tabanera Space Center (Falda del Cañete, Córdoba, Argentina), MAIE provides a unique educational experience. Students collaborate with professionals from CONAE, ASI, and other national and international entities, and many have the opportunity to undertake training and research stays in Italy, supported by the Italian Space Agency (ASI) and the Italian Government. MAIE not only offers exceptional academic training but also fosters a diverse and collaborative community, with students from over 10 countries. This international and interdisciplinary focus, enriched by hands-on experience and strong ties to institutions such as ASI, positions MAIE as a leading program for training specialists in space technologies across Latin America and beyond. We celebrate MAIE's profound impact, not only in academic excellence but also in building a professional network committed to addressing the challenges of managing and applying space technologies for societal benefit.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Digital Geomedia in Vocational Education and Training: Blended learning concepts to promote sustainable development through modern geotechnologies

Authors: M.A. Tobias Gehrig, Dr. Maike Petersen, Prof. Dr. Alexander Siegmund
Affiliations: Institute for Geography and Geocommunication – Research Group for Earth Observation (rgeo), Heidelberg University of Education, Germany, Heidelberg Center for the Environment (HCE) & Institute of Geography, Heidelberg University, Germany
Current global challenges such as climate change, land transformation, and loss of biodiversity force educational systems to reinvent their approaches (Habibullah, Din, Tan, and Zahid, 2020). Digitalization and Education for Sustainable Development (ESD) are two of the main trends currently gaining momentum (Ahel and Lingenau, 2020). While the need for digitalization has been acknowledged by most fields of education (from primary through tertiary education), ESD is rarely incorporated into the curricula of vocational training (Schmidt and Tang, 2020). However, as almost 500,000 people complete vocational training in Germany every year, they account for a substantial portion of the workforce (Federal Ministry of Education and Research, 2023). Digital geomedia, such as earth observation (EO), geographic information systems (GIS), and mobile geotools, offer an ideal opportunity to integrate Education for Sustainable Development and digitalization. Despite their considerable professional relevance, however, digital geomedia are not yet widely employed in vocational education and training. Such tools offer trainees a robust connection to their immediate surroundings and have significant potential for professional and academic preparation (Ministry of Education, Youth and Sport Baden-Wuerttemberg, 2024). For example, digital geomedia, predominantly EO tools like Google Earth, can be employed not only in everyday contexts but also in professional settings, such as state spatial planning or market analysis for companies. Integrating EO skills into vocational training is crucial for managing and preserving cultural landscapes, such as traditional meadow orchards, which are vital for biodiversity and local ecosystems. 
By equipping trainees with EO skills, they can effectively use technologies like EO, including unmanned aerial systems (UAS) to monitor environmental changes, manage land use sustainably, and contribute to the conservation of these valuable landscapes. Innovative cooperation between science, the educational system, and training companies is essential to provide these skills and promote sustainable development within various sectors. Thus, the project DiGeo:BBNE focuses on embedding the use of digital geomedia to support sustainable practices in vocational training through blended learning. To achieve this, it developed and implemented hybrid teaching-learning settings that combine location- and time-independent e-learning programmes with practice-oriented learning on site. Modules were specifically designed to introduce trainees from various fields, such as landscape management, regional product marketing, and care professions, to digital geomedia. These include interactive e-learning modules on EO, GIS, and mobile geotools, conveying the basics of these technologies through differentiated examples. Additionally, various in-person courses on location analysis, business start-ups, and educational geocaches have been developed and conducted to combine learning with hands-on experience in the field. These courses have been tailored to meet the needs of vocational trainees and provide them with practical skills directly applicable to their future careers. A notable example is a course with trainees from a local automobile manufacturer in Neckarsulm. This course introduced UAS flight and planning for analyzing traditional meadow orchards in Neckarsulm. Trainees learned the basics of EO and UAS technology in a workshop, then collected and analyzed UAV imagery from a traditional meadow orchard. This hands-on approach not only provided practical skills but also highlighted the importance of EO technologies in preserving valuable cultural landscapes. 
The courses are evaluated with a particular focus on deep structures of teaching, such as cognitive activation, which involves engaging trainees in higher-order thinking processes. Additionally, the evaluations examine student motivation, assessing how the courses inspire and sustain their interest and enthusiasm for learning. The presentation aims to introduce the developed modules targeting vocational training in different sectors as a best-practice example. It will discuss challenges faced when approaching partners from vocational education and strategies to address these, such as time constraints, geography not being part of curricula, and scepticism toward geographical topics. By incorporating these skills into vocational training, we can better prepare trainees for the demands of the modern work environment. The presentation will also present initial results from the course evaluations. These results will help embed the use of digital geomedia within the vocational education system to promote sustainable economic action. Ultimately, this will become another pillar to equip Germany’s workforce with the skills needed to face current and future challenges.
References:
- Ahel, O., & Lingenau, K. (2020). Opportunities and Challenges of Digitalization to Improve Access to Education for Sustainable Development in Higher Education. In W. L. Filho, A. Lange Salvia, R. W. Pretorius, B. L. Londero, E. Manolas, F. Alves, . . . A. Do Paco, Universities as Living Labs for Sustainable Development: Supporting the Implementation of the Sustainable Development Goals (pp. 341-356). Berlin: Springer.
- Federal Ministry of Education and Research. (2023). Report on Vocational Education and Training 2023. Bonn: BMBF.
- Habibullah, M. S., Din, B. H., Tan, S.-H., & Zahid, H. (2020). Impact of climate change on biodiversity loss: global evidence. Environmental Science and Pollution Research, pp. 1073-1086.
- Ministerium für Kultus, Jugend und Sport Baden-Württemberg. (2024). Allgemeine Informationen zur Beruflichen Bildung. Retrieved from https://km.baden-wuerttemberg.de/de/schule/berufliche-bildung/allgemeine-informationen-zur-beruflichen-bildung
- Schmidt, J. T., & Tang, M. (2020). Digitalization in Education: Challenges, Trends and Transformative Potential. In M. Harwardt, P. F.-J. Niermann, A. M. Schmutte, & A. Steuernagel, Führen und Managen in der digitalen Transformation: Trends, Best Practices und Herausforderungen (pp. 287-312). Berlin: SpringerGabler.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: GDA Knowledge Hub: A Platform To Support Global EO Capacity Building in International Development

Authors: Ravi Kapur
Affiliations: Imperative Space
The ESA Global Development Assistance (GDA) programme is focussed on accelerating uptake of Earth Observation within the International Development arena, and on ‘mainstreaming’ its use at all levels of the operational value chain in development funding and assistance. However, to achieve this through effective knowledge exchange and training, several obstacles must be overcome, including:

• Lack of widespread awareness in the development and International Financial Institution (IFI) communities about the fundamental capabilities available from EO.
• Lack of common language and unified taxonomies between the development and EO arenas.
• The need to enable easy access to EO training resources for integration into existing IFI-based capacity building.
• The need to access ‘certified’ EO expertise for specific development use case contexts.

To address these and other related challenges, Imperative Space has led the development of the GDA ‘Knowledge Hub’. This is a new platform that leverages clear UX design, carefully structured taxonomies, bespoke training support tools and cutting-edge LLM technology to enable rapid access to relevant information and training resources. This session will provide an overview of the GDA Knowledge Hub platform, its key features and available training resources, and how it sets a new template for providing sector-specific EO knowledge and capacity building support. Key features of the platform to be demonstrated in the session will include:

Main GDA Knowledge Hub Library: This is an extensive and interactive repository of European EO Service Capabilities, Use Cases, Case Studies and associated Training Resources. It is intended to support a wide range of user needs and interests at every level of the international development value chain, and to support IFIs in their own EO capacity building and skills development activities in client states. It also includes an archive of GDA-linked webinars and publications. 
Training Module Assembler (TMA): The ‘TMA’ is an innovative bespoke tool created for IFI-based trainers to gather, assemble, re-order and export training resources available within the Knowledge Hub. This simplifies the process of gathering valuable information for use in training activities in their own context. Additionally, users have the option to create personalised collections, allowing them to categorise and organise their saved resources by themes, projects, or specific areas of interest. There is an option to download any training content from the training modules directly to a device for offline use or project integration.

Consultation Rooms: The ‘Consultation Rooms’ feature offers an interactive space for users to engage with expert knowledge and personalised assistance. It provides access to a multi-tiered support system, initially via an advanced NLP-based chatbot, but progressing to the option to book one-on-one training sessions with EO experts online.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: GEO ART – EARTH FROM SPACE: Earth Observation Data of Kruger National Park From 30 Years Captured on Canvas

Authors: Christiane Schmullius, Susen Reuter, Jussi Baade, Izak Smit
Affiliations: University Jena, Photography and Art, South African National Parks
In 1994, Kruger National Park was one of the scientific supersites of a radar remote sensing experiment onboard the Space Shuttle Endeavour, SIR-C/X-SAR. Fabulous, unprecedented images were taken and later explored in cooperation projects between SANParks scientists and international Earth observation scientists. The cooperation continued towards, e.g., woody cover mapping and surface model calculations with the satellite generations that followed, in the SANParks projects SARvanna (2008-2011), ARS-Africa-EO (2012-2017) and COLD-EMS (2018-2023). These remote sensing scenes from Copernicus Sentinel-1 and Sentinel-2 as well as JAXA's PALSAR radar images illustrate the magnificent beauty of savanna surfaces. The rich colours of these false-colour images stimulated the project “GEO ART – EARTH FROM SPACE”, an extraordinary meeting between art and science: https://www.susenreuter.com/themen/geo-art/. The German visual artist Susen Reuter (member of the Federal Association of Fine Arts) transformed the digital scenes into several large canvas paintings. The series of artworks is entitled "Landscapes in motion" and is constantly being expanded to include new motifs. Reuter uses several techniques, such as scumbling and pouring, as well as different materials for her paintings, such as gouache, acrylic and pigments. The GEO ART paintings have been shown in several exhibitions in Germany, e.g. at the Friedrich Schiller University Jena and at the Nicolaus Copernicus Planetarium Nuremberg. The aim of the project is for art to act as a door opener to science and thus to enable the public to experience science in a completely different way. This talk will introduce the satellite images and explain the transition to the paintings and their appeal to public viewers. In total, 12 paintings are being presented (possibly also in an exhibition on-site at the LPS25 conference facilities!). 
The data sources and image processing techniques are reflected upon with respect to their transformation onto canvas. Vice versa, the most obvious features in the acrylic paintings are re-connected to their digital origins in the Earth observation satellites and to the meaning of the remote sensing products.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: GATHERS project – multi-tool educational and networking experience

Authors: Maya Ilieva, Freek van Leijen, Ramon Hanssen, Norbert Pfeifer, Mattia Crespi, Iwona Kudłacik, Jan Kapłon, Grzegorz Jóźków, Witold Rohm
Affiliations: UPWr, TU Delft, TU Wien, Sapienza University of Rome
The GATHERS project ran between 2019 and 2024 and was funded by the European Commission Horizon 2020 TWINNING programme (http://www.gathers.eu/). The main scientific goal of GATHERS was the development of a methodology for the integration of geodetic and imaging techniques for monitoring and modelling the Earth’s surface deformations and seismic risk. The three techniques on which the methodology was founded were Interferometric Synthetic Aperture Radar (InSAR), Light Detection and Ranging (LiDAR) and Global Navigation Satellite System seismology (GNSS-seismology). At the same time, within the framework of the project we aimed to gather and train a strong group of young scientists with interests in the field of geodesy and remote sensing. The GATHERS project was an initiative between several European universities and involved as partners the Delft University of Technology (TU Delft, Netherlands), Technische Universität Wien (TU Wien, Austria), Sapienza University of Rome (Sapienza, Italy) and Wroclaw University of Environmental and Life Sciences (UPWr, Poland), the latter also serving as Project Coordinator. The training plan developed by the project partners included various approaches for realising the project’s goals; the main activity was short- to medium-term training of experienced researchers (ERs), PhD and MSc students. The realisation of this mission was strongly influenced by the outbreak of the COVID-19 crisis, which coincided with the start of the project. Nevertheless, the good cooperation between the project partners enabled the completion of 21 trainings – 8 ER, 7 PhD and 6 MSc – performed in a hybrid mode. Most of the UPWr trained staff, as well as the mentors and lecturers from TU Delft, TU Wien and Sapienza, took part in the preparation of a series of Summer/Winter Schools, Hackathons, and virtual and in-person Roadshows as part of the Knowledge Integration task. 
The COVID-19 pandemic posed multiple challenges for the lecturers in these initiatives. The GATHERS partnership developed several strategies to overcome the restrictions on human mobility and gatherings imposed by the spread of the virus in 2020-2022. The analysis of possible solutions, including virtual schooling, revealed strong fatigue with online studying during the global lockdown, against the background of a shift to an almost fully virtual life in every sphere. Social isolation triggered additional anxiety and insecurity and lowered effectiveness; the need for socialising and teamwork was back on the table. As one of the main goals of the project was to create a geoscience alumni community, the post-pandemic period of the project (2022-2024) was devoted to designing a progressive strategy for delivering the planned events. The most prominent example of this strategy's implementation was the organisation of the final so-called Super Event, which comprised an Advanced Winter School (building on two previous basic Summer Schools), a workshop, a B2B meeting and a hackathon. The Super Event was an in-person event hosted by Sapienza. 
Important elements for the successful and smoother realisation of the Super Event were:
1. evaluation of the feedback gathered during the previous basic editions of the GATHERS Schools and Hackathon;
2. virtual pre-school and pre-hackathon technical meetings to establish the required programming skills and tools;
3. shared data spaces and platforms for the practical exercise data and tools;
4. working groups formed according to the skill levels and expertise of the participants;
5. communication (a shared messaging environment) during the events;
6. many in-person initiatives acting as ice-breakers and reinforcing teamwork and collaboration;
7. fair and open communication and feedback gathering during and after the events;
8. and, last but not least, responsibilities for individual course modules shared across institutions, which deepened the cooperation between the project partners.
The staff trainings, fulfilled in hybrid mode, resulted in 13 scientific publications, 3 defended PhD theses and 1 MSc thesis, 2 newly started PhDs, and increased scientific and educational capacity in each partner's unit, including the formation of a new generation of mentors and educators. On the other hand, the Knowledge Integration Activities – schools, hackathons, roadshows – involved 150 BSc, MSc and PhD students and young researchers, mainly from Europe. 24 of them took part in one Basic and the Advanced School, building up their knowledge and skills; 25 took part in a School (1 or 2) and a hackathon (1 or 2), demonstrating a determined ability to apply the theory in practice. Nine of these remarkable young scientists continued their collaboration by working on publishing their results. 
The GATHERS project is a valuable showcase of applying thorough analysis, supported by cooperative brainstorming among the main players, to provide flexible educational tools adapted to the environment, needs and expectations of the target groups.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Echoes in Space – A Narrative Introduction to Radar Remote Sensing With 14 Exercise Blocks

Authors: Christiane Schmullius, Clemence Dubois, Robert Eckardt, Christian Thiel
Affiliations: University Jena, Deutsches Zentrum für Luft- und Raumfahrt
Essence of the Manuscript: Adventures of a female scientist's life explain milestones of a fascinating Earth observation methodology.
Genre: Creative Non-Fiction Textbook.
Topic: Introduction to the basics and applications of radar remote sensing.
Content: Since 2015, a new fleet of European radar satellites (Sentinel-1A/B/C and following) has enabled operational applications of this Earth observation technique for environmental science. This brings a physically complex technology into everyday use in research and management. This textbook uses fascinating experiences to explain the basics of this Earth observation technique and provides an easy introduction to the almost unimaginable possibilities of advanced processing techniques and to the professional field.
Planned book size: 500 pages with many illustrations (campaign photos, tabular overviews, exciting results graphics) and numerous QR codes for further browsing (web pages, animations, videos).
Dates: Book publication in September 2025 in German and during 2026 in English.
Audience: Earth observation has an expanding user community – not only in all environment-related sciences, but also in environmental planning and management. To date, radar remote sensing has deterred many users because the technology seems exclusive. This combined non-fiction book and textbook aims to overcome this perceived “inaccessibility”. The target audience therefore ranges from BSc and MSc students and PhD candidates to private and public sector stakeholders. It is intended to serve as a textbook for university lectures and seminars (including a semester-long compendium of 14 exercises), as well as to give interested laypersons an introduction to the possible applications.
Market Analysis: There is no comparable book on the market – neither in German nor in English. 
The idea of a narrative description of exciting technological milestones in which the author herself was involved is especially novel: challenging airplane campaigns, three Space Shuttle missions, the time-critical transport of a receiving station to Mongolia, the innovative "orbital dance" of satellites in space, and the development of innovative environmental monitoring with data from the new European Copernicus satellite fleet in Germany and through international cooperations in China, Mexico, Siberia and South Africa.
Marketing: In addition to the publisher’s advertising, trade fairs and conferences, the authors' own international networks and committees will be used, especially in the remote sensing education sector (the international learning platform eo-college.org and the UN consortium EOTEC DevNet).
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Teaching and Learning Remote Sensing with SNAP and Sentinel-2 Data – A case study from Anhalt University of Applied Sciences

Authors: Sophie Prokoph, Marvin Gabler, Josef Palmer, Arne Scheuschner, Prof. Dr. Marion Pause
Affiliations: Anhalt University of Applied Sciences
Hands-on remote sensing data analysis is a key part of the undergraduate studies in surveying and geoinformatics at Anhalt University of Applied Sciences in Germany. At present, the majority of our students study within a dual academic program and are supported by federal authorities and engineering companies in Germany. The "dual" students are therefore an excellent interface for distributing and enhancing knowledge and expertise about available Copernicus EO data in various fields of application. To increase motivation and the quality of learning remote sensing in an academic environment, we developed a lecture concept which allows students to analyse their home region (or any area of individual interest) and provide material for scientific communication simultaneously. In the undergraduate lectures the focus is on optical remote sensing. The students gain knowledge about different data sources (sensors, platforms), data parameters, land cover classification and spectral indices. This content is deepened in the exercises using Sentinel-2 data and the open-access software SNAP. How the concept works: At the beginning, there are two face-to-face introductory exercises to get familiar with SNAP and discuss different RS data products. This is followed by six exercises with Sentinel-2 data (free choice of study area) on the topics: understanding multispectral data, classification and spectral indices. Finally, two exercises are provided to show special areas of application for RS (e.g., learning to create a building mask, analysing the influence of irrigation and tillage). The results and insights of the individual student exercises are collected within a presentation and are finally evaluated by the lecturer.
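The spectral-index exercises described above can also be reproduced outside SNAP. The sketch below is a minimal, self-contained illustration of the core NDVI calculation the students perform; the reflectance values are invented toy numbers standing in for Sentinel-2 bands B04 (red) and B08 (NIR):

```python
import numpy as np

def ndvi(red, nir):
    """Normalised Difference Vegetation Index: (NIR - red) / (NIR + red)."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    denom = nir + red
    # Guard against division by zero over no-data pixels.
    return np.where(denom == 0.0, 0.0, (nir - red) / np.where(denom == 0.0, 1.0, denom))

# Toy reflectances standing in for Sentinel-2 B04 (red) and B08 (NIR).
red = np.array([[0.05, 0.30], [0.10, 0.00]])
nir = np.array([[0.45, 0.35], [0.10, 0.00]])
print(ndvi(red, nir))  # dense vegetation approaches +1, bare surfaces stay near 0
```

In the actual exercises the two bands would of course come from a Sentinel-2 scene exported from SNAP rather than from hand-typed arrays.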
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: The INTEGRAL Project: Synergies Between European and Asian Academia for Building Geo-Technologies Capacity Towards Resilient Agricultural Adaptation to Climate Change in Lao PDR

Authors: Marinos Kavouras, Eleni Tomai, Margarita Kokla, Vassilios Tsihrintzis, Christina Karakizi, Athanasia Darra, Maria Bezerianou, Ali Mansourian, Jean-Nicolas Poussart, Avakat Phasouysaingam, Phetsavanh Somsivilay, Nouphone Manivanh, Khandala Khamphila, Sisomphone Southavong
Affiliations: National Technical University of Athens, Lund University, National University of Laos, Savannakhet University, Souphanouvong University, Champasack University
Lao PDR’s economic growth is based primarily on natural resources, while the vast majority of the country’s population depends on smallholder agriculture for employment and income. However, agriculture in Lao PDR is increasingly affected by natural hazards such as droughts, floods, and erratic, intense rainfall, all of which have been exacerbated in recent years by the effects of climate change. Shifting agricultural practices toward climate-change resilience while ensuring food security is a vital priority for the country. Intelligent geo-technologies, which combine Earth Observation, Geographic Information Systems, Artificial Intelligence and large-scale mapping, can offer a broad suite of tools and applications to support climate-resilient agricultural practices. However, the use of such crucial knowledge and tools in Lao PDR Higher Education Institutions (HEIs) is very limited. The main objective of the EU-funded project “Intelligent Geotechnologies for Resilient Agricultural Adaptation to Climate Change in Lao PDR - INTEGRAL” is to build the capacity of four Higher Education Institutions in Lao PDR in the field of resilient agriculture, supporting the country’s effective adaptation to climate change, by exploiting a broad toolkit of geospatial technologies and by introducing and putting into practice hybrid teaching techniques that combine physical/in-campus and distance learning, while promoting project-, problem- and investigation-based learning. The project consortium consists of two European HEIs – the National Technical University of Athens in Greece and Lund University in Sweden – and four HEIs in Lao PDR: the National University of Laos, Souphanouvong University, Savannakhet University, and Champasack University. The INTEGRAL project started in February 2023 with a duration of 36 months. The main achievements of the project so far can be summarized as follows: i. 
The development of four innovative modular courses, equal to 7.5 ECTS each, that will be integrated into the curricula of the four participating Lao PDR HEIs; ii. The equipping of the four Lao HEIs, resulting in four brand-new geo-technology laboratories; the acquired equipment comprises a common core for all HEIs, as well as additional equipment tailored to each university’s specific needs; iii. The implementation of the “Building Training Capacity” (BTC) agenda, through which several consortium-level, as well as local Lao PDR, training events and activities have been carried out to enhance the knowledge and skills of Lao PDR HE staff. BTC activities have employed the new geo-technology laboratories and e-learning equipment acquired by the universities in Lao PDR and are closely linked with the developed course material. The number of trained academic and administrative staff has already surpassed the planned target. During the first half of the project, the commitment of the Lao PDR beneficiaries to sustaining the project’s impact has been strongly demonstrated. The participating Lao PDR universities are eager to integrate the capacity built and the innovative courses developed by the project into their broader effort to update and enhance the relevance and appeal of their curricula and teaching methods. The Lao PDR HE system has acquired new competences, skills, infrastructure and resources to better equip graduates in interdisciplinary scientific fields, and has developed or improved its potential for online and remote training, thereby extending its scope to larger parts of the population. In the future, we envision that the impact will increase transversally once the four developed courses are taught in Lao PDR HEI classrooms, reaching wider audiences and being tested in vivo. ------- Funded by the European Union (Ref: 101082841 — INTEGRAL — ERASMUS-EDU-2022-CBHE). 
Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Education and Culture Executive Agency (EACEA). Neither the European Union nor EACEA can be held responsible for them.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A Federated Learning Environment for Earth Observation Students: A Success Story from Austria

Authors: Martin Schobben, Luka Jovic, Nikolas Pikall, Joseph Wagner, Clay Taylor Harrison, Davide Festa, Felix David Reuß, Sebastian Hahn, Gottfried Mandlburger, Christoph Reimer, Christian Briese, Matthias Schramm, Wolfgang Wagner
Affiliations: Department of Geodesy and Geoinformation, Technische Universität Wien, Earth Observation Data Centre for Water Resources Monitoring GmbH
Establishing an effective learning environment for Earth Observation (EO) students is a challenging task due to the rapidly growing volume of remotely sensed, climate, and other Earth observation data, along with the evolving demands from the tech industry. Today’s EO students are increasingly becoming a blend of traditional Earth system scientists and "big data scientists", with expertise spanning computer architectures, programming paradigms, statistics, and machine learning for predictive modeling. As a result, it is essential to equip educators with the proper tools for instruction, including training materials, access to data, and the necessary computing infrastructure to support scalable and reproducible research. In Austria, research and teaching institutes have recently started collaborating to integrate their data, computing resources, and domain-specific expertise into a federated system and service through the Cloud4Geo project, which is funded by the Austrian Federal Ministry of Education, Science, and Research. In this presentation, we will share our journey towards establishing a federated learning environment and the insights gained in creating teaching materials that demonstrate how to leverage its capabilities. A key aspect of this learning environment is the use of intuitive and scalable software that strikes a balance between meeting current requirements and maintaining long-term stability, ensuring reproducibility. To achieve this, we follow the Python programming philosophy as outlined by the Pangeo community. In addition, we need to ensure that the environment is accessible and inclusive for all students, and can meet the demands of an introductory BSc-level course on Python programming as well as an MSc research project focused on machine learning with high-resolution SAR data. 
We accomplished this by combining the TU Wien JupyterHub with a Dask cluster at the Earth Observation Data Centre for Water Resources Monitoring (EODC), deployed close to the data. A shared metadata schema, based on the SpatioTemporal Asset Catalog (STAC) specifications, enables easy discovery of all federated datasets, creating a single entry point for data spread across the consortium members. This virtually “unlimited” access to data is crucial for dynamic and up-to-date teaching materials, as it helps spark the curiosity of students by opening up a world full of data. Furthermore, the teaching materials we develop showcase the capabilities of the federated system, drawing on the combined resources of the consortium. These materials feature domain-relevant examples, such as the recent floods in central Europe, and incorporate scalable programming techniques that are important for modern EO students. These tutorials are compiled into a Jupyter Book, the “EO Datascience Cookbook”, published by the Project Pythia Foundation, which allows students to execute notebooks in our federated learning environment with a single click. Beyond serving as teaching material, the Jupyter Book also acts as a promotional tool to increase interest in EO datasets and their applications. We are already seeing the benefits of our federated learning environment: 1) it enhances engagement through seamless, data-driven storytelling, 2) it removes barriers related to computing resources, 3) it boosts performance by breaking complex tasks into manageable units, and 4) it fosters the development of an analytical mindset, preparing students for their future careers. We hope that this roadmap can serve as a model for other universities, helping to preserve academic sovereignty and reduce reliance on tech giants, such as Google Earth Engine. Federated learning environments are essential in training the next generation of data-driven explorers of the Earth system.
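The STAC-based discovery described above can be illustrated with a deliberately self-contained sketch. The catalogue items below are invented, but the `bbox` and `properties.datetime` fields follow the STAC item specification, and the filter mirrors the spatio-temporal item search that a real client (e.g. pystac-client) would run against the federated catalogue:

```python
from datetime import datetime, timezone

# Invented STAC-like items; field names follow the STAC item spec.
CATALOG = [
    {"id": "s1-vienna-2024-09-15", "bbox": [16.2, 48.1, 16.6, 48.3],
     "properties": {"datetime": "2024-09-15T05:30:00Z"}},
    {"id": "s1-vienna-2024-09-18", "bbox": [16.2, 48.1, 16.6, 48.3],
     "properties": {"datetime": "2024-09-18T05:30:00Z"}},
    {"id": "s1-graz-2024-09-15", "bbox": [15.3, 46.9, 15.6, 47.2],
     "properties": {"datetime": "2024-09-15T05:30:00Z"}},
]

def bbox_intersects(a, b):
    """Overlap test for [min_lon, min_lat, max_lon, max_lat] boxes."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def search(catalog, bbox, start, end):
    """Return ids of items whose footprint intersects bbox and whose
    acquisition time falls in [start, end] -- the core of a STAC search."""
    hits = []
    for item in catalog:
        t = datetime.fromisoformat(item["properties"]["datetime"].replace("Z", "+00:00"))
        if bbox_intersects(item["bbox"], bbox) and start <= t <= end:
            hits.append(item["id"])
    return hits

# Everything over Vienna between 14 and 16 September 2024:
print(search(CATALOG, bbox=[16.0, 48.0, 16.7, 48.4],
             start=datetime(2024, 9, 14, tzinfo=timezone.utc),
             end=datetime(2024, 9, 16, tzinfo=timezone.utc)))
# → ['s1-vienna-2024-09-15']
```

In the federated environment this single entry point means a student's notebook queries one catalogue regardless of which consortium member physically hosts the data.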

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Enabling High Resolution Air Quality Forecasts using Advanced Machine Learning Algorithms for Improved Decisions through SERVIR Capacity Building Activities in Southeast Asia

Authors: Ashutosh Limaye, Alqamah Sayeed, Daniel Irwin, Peeranan Towashiraporn, Aekkapol
Affiliations: NASA
SERVIR strives to build the capacity of partners to use satellite data to address critical challenges in food security, water security, weather and climate resilience, ecosystem and carbon management, and air quality and health. A partnership of NASA, USAID, and leading technical organizations in Asia, Africa, and Latin America, SERVIR develops innovative solutions that specifically address user needs. Several efforts across the NASA Earth Action portfolio, and beyond, successfully bridge the gap between science and decision making. SERVIR has a unique way of linking the latest and most appropriate science to the decision-making of users and regional technical and scientific collaborators through services that build sustained capacity in SERVIR regions. In this presentation, we will focus on one example: SERVIR’s air quality explorer tool in Southeast Asia. The web-based tool applies data from NASA’s Moderate Resolution Imaging Spectroradiometer (MODIS), Visible Infrared Imaging Radiometer Suite (VIIRS), and Goddard Earth Observing System (GEOS) Air Quality Forecasts to track and predict air quality in the Southeast Asian region, which includes Vietnam, Cambodia, Thailand, Myanmar, and Lao PDR. Starting in Thailand, SERVIR worked with partners at the Thailand Pollution Control Department (PCD) to co-develop actionable air quality forecasts grounded in the in-situ observations collected by the PCD. From the beginning, the purpose of this effort was to enable users such as the PCD to become proficient in generating air quality forecasts on their own. Users also expressed the need to combine ground observations, satellite data, and forecasts in a computationally efficient manner. 
Neural-network machine learning and deep learning algorithms enabled us to stitch together a system that brings these different elements together to estimate air quality forecasts in a computationally efficient manner. The modeling system has now found a firm user base and has been used to develop specific, targeted application systems based on the air quality forecasts. For example, in northern Thailand, the forecast data were used to provide guidance on agricultural burns, enabling the continuation of traditional agricultural practices while ensuring that the resulting smoke will not further exacerbate air quality for downwind residents. In this presentation, we will discuss our experience of collaborating with users to strengthen their capacity to develop a system that fits their needs.
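As a stand-in for the machine-learning correction step described above, the sketch below grounds raw model PM2.5 values against co-located station observations with an ordinary least-squares fit. This is not the SERVIR system, only the general bias-correction idea, and all concentration values are invented:

```python
# Minimal sketch (not the SERVIR system): correct raw model PM2.5 forecasts
# against co-located station observations with a least-squares linear fit,
# standing in for the neural-network correction described in the abstract.
# All concentrations below are invented for illustration (ug/m3).
model_pm25 = [40.0, 55.0, 70.0, 85.0, 100.0]   # raw model forecast values
station_pm25 = [35.0, 47.0, 59.0, 71.0, 83.0]  # co-located in-situ observations

n = len(model_pm25)
mean_x = sum(model_pm25) / n
mean_y = sum(station_pm25) / n
# Ordinary least squares: fit obs ~ slope * model + intercept.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(model_pm25, station_pm25))
         / sum((x - mean_x) ** 2 for x in model_pm25))
intercept = mean_y - slope * mean_x

def correct(raw):
    """Apply the learned linear correction to a new raw forecast value."""
    return slope * raw + intercept

print(round(slope, 3), round(intercept, 3), round(correct(60.0), 1))
```

A neural network generalises this idea to nonlinear relationships and many predictors (meteorology, aerosol optical depth, etc.), but the grounding principle — learn a mapping from model output to observed values, then apply it to new forecasts — is the same.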

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Breaking down time-series analyses, UAV, and hyperspectral data for schools

Authors: Johannes Keller, Christian Plass, Dr. Maike Petersen, Prof. Dr. Alexander Siegmund
Affiliations: Institute for Geography & Geocommunication – Research Group for Earth Observation (rgeo), Heidelberg University of Education, Institute for Geography & Geocommunication – Research Group for Earth Observation (rgeo), Heidelberg University of Education and Heidelberg Center for the Environment (HCE) & Institute of Geography, Heidelberg University
Modern approaches to Earth Observation (EO), including time-series analyses, UAV, and hyperspectral data, hold significant potential for enhancing our understanding of the Earth’s system in the context of the Sustainable Development Goals (SDGs). Time-series analyses are instrumental in assessing the impact of climate change on the environment (Winkler et al., 2021), while drone data aids farmers in adopting resource-efficient cultivation methods with precision farming (Harsh et al., 2021). Furthermore, hyperspectral data from new satellites like EnMAP can assist in identifying minerals necessary for the sustainable energy transition (Asadzadeh et al., 2024). These applications create numerous educational opportunities by linking Geography, STEAM education (Science, Technology, Engineering, Arts, Mathematics), and education for sustainable development (ESD). However, the implementation of these EO applications in education is often hindered by time constraints, a lack of expertise among teachers, and the absence of suitable teaching examples (Dannwolf et al., 2020). A key solution to address these limitations is the development of user-friendly web applications for analysing EO data. For instance, there is a pressing need for a web-based tool that enables students to utilize the over 200 bands of EnMAP for analysis without causing confusion or overwhelm. Additionally, these applications should provide easily accessible EO data, include clear explanations for both teachers and students, and be integrated into ready-to-use educational materials. E-learning plays a crucial role in this context, as it alleviates the burden on teachers and facilitates personalized learning for students (Dannwolf et al., 2020). The project "EOscale3" at the Institute for Geography and Geocommunication - rgeo at Heidelberg University of Education aims to integrate satellite image time series, UAV, and EnMAP into educational settings. 
To achieve this, the user-friendly EO analysis web application BLIF has been expanded to offer a broader range of EO data and innovative tools for analysis. Throughout this process, various challenges have been encountered in integrating different data sources into a cohesive web application and providing suitable analytical tools for students. Additionally, adaptive e-learning modules and a virtual classroom have been developed, where students learn to apply EO data to solve real-world problems using the newly created application. The aim of this presentation is to demonstrate how time-series analyses, UAV, and hyperspectral data can be effectively integrated into classrooms. It will detail the development of appropriate tools and the challenges addressed throughout this process. Finally, the presentation will showcase the e-learning modules and the virtual classroom created within the project, designed to assist educators in effectively incorporating these new tools.
References:
Asadzadeh, S., Koellner, N. & Chabrillat, S. (2024). Detecting rare earth elements using EnMAP hyperspectral satellite data: a case study from Mountain Pass, California. Scientific Reports, 14(1), 20766. https://doi.org/10.1038/s41598-024-71395-2
Dannwolf, L., Matusch, T., Keller, J., Redlich, R. & Siegmund, A. (2020). Bringing Earth Observation to Classrooms—The Importance of Out-of-School Learning Places and E-Learning. Remote Sensing, 12(19), 3117. https://doi.org/10.3390/rs12193117
Harsh, S., Singh, D. & Pathak, S. (2021). Efficient and Cost-effective Drone – NDVI System for Precision Farming. International Journal of New Practices in Management and Engineering, 10(04), 14–19. https://doi.org/10.17762/ijnpme.v10i04.126
Winkler, K., Fuchs, R., Rounsevell, M. & Herold, M. (2021). Global land use changes are four times greater than previously estimated. Nature Communications, 12(1), 2501. https://doi.org/10.1038/s41467-021-22702-2

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Closing the Geospatial Data Literacy Gap in Digital Farming: Lessons Learned

Authors: Dr. Julia Wagemann, Sabrina H. Szeto, Julian Blau
Affiliations: thriveGEO GmbH, BASF Digital Farming GmbH
Agriculture is one of the key industries where earth observation (EO) data can bring valuable insights. One area in which we see growing use of geospatial and EO data is in digital farming, which makes it easier for farmers to plan their operations and make data-driven decisions. At the same time, a shortage of workers with data literacy skills has led to challenges with hiring and the need for upskilling in many businesses that use EO data (European Data Market Study 2021-2023 and EARSC Industry Survey 2024). How can we address this challenge? This presentation showcases lessons learned from a training collaboration between BASF Digital Farming, a digital farming business, and thriveGEO, an EdTech startup that develops solutions to close the geospatial and Earth observation data literacy skills gap. In particular, we will provide real-world insights into the types of skills needed for developing EO-integrated products, what a scalable solution for upskilling can look like and how organisations can measure the return on investment in geospatial data literacy training.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Fostering Earth Observation Literacy: Lessons from SERVIR’s Curriculum Development Initiative

Authors: Kelsey Herndon, Micky Maganini, Dr. Tom Loran, Dr. Rob Griffin, Eric Anderson, Dr. Freek van der Meer, Dan Irwin, Dr. Roshanak Darvishzadeh, Claudia Paris, Roelof Rietbroek, Dr. Margarita Huesca Martinez, Michael Schlund
Affiliations: The University of Alabama in Huntsville, NASA Marshall Space Flight Center, University of Twente
Open-access Earth Observation (EO) data represent a transformative resource for decision-making across a wide range of sectors, including NGOs, environmental organizations, and government agencies. Initiatives such as NASA's and ESA's Open Science programs have dramatically reduced barriers to access these data, creating opportunities to address complex challenges in areas like climate resilience, natural resource management, and disaster mitigation. Despite this progress, significant obstacles remain for non-experts in operationalizing EO data to inform actionable decision-making. These challenges, including limited technical expertise, insufficient tools, and a lack of tailored training, often prevent users from fully leveraging the rich temporal depth, global geographic coverage, and diverse environmental insights offered by remote sensing datasets. SERVIR, a joint initiative between NASA and the United States Agency for International Development (USAID), is designed to bridge these gaps by partnering with leading geospatial institutions worldwide to support the integration of EO data into decision-making processes. SERVIR employs a holistic approach that extends beyond traditional training and workshops, combining cutting-edge technology, local capacity building, and strategic partnerships to enhance EO literacy. One such partnership is with the University of Twente’s Faculty of Geo-Information Science and Earth Observation (ITC), a leading educational institution aimed at building technical expertise to use EO data to improve environmental decision making. Since 2018, SERVIR and ITC have worked together to integrate operational tools and services developed by SERVIR into ITC’s graduate curriculum. In addition, SERVIR has developed asynchronous, virtual modules as part of ITC’s Geoversity platform – an educational platform targeted at professionals to increase their operational capacity to use EO data and tools in their workflows. 
Key SERVIR tools integrated into ITC’s curriculum include ClimateSERV, a tool for accessing climate data and visualizations to analyze trends and inform climate adaptation strategies; Collect Earth Online (CEO), a collaborative tool for land-use monitoring and environmental assessments; HYDRAFloods, a hydrological modeling tool supporting flood risk assessment and water resource management; and the Radar, Mining, and Monitoring Tool (RAMI), which tracks mining activities and their environmental impacts. Each tool was selected based on its potential to address specific regional and sectoral challenges, enabling ITC students and professionals to tackle real-world issues using EO data. To ensure effective integration of these tools, SERVIR employs a structured Curriculum Development Initiative Framework comprising six key phases: Assessment, Outreach, Development, Review, Implementation, and Evaluation. This phased approach ensures that the curriculum is both relevant and impactful, fostering sustainable capacity-building. In addition to in-person courses, SERVIR has developed asynchronous, virtual modules for ClimateSERV, CEO, and HYDRAFloods as part of ITC’s Geoversity platform. This platform provides professionals with flexible learning opportunities, enabling them to incorporate EO data into their workflows regardless of geographic or time constraints. These modules are designed with practical applications in mind, emphasizing hands-on exercises and real-world case studies. Initial outcomes from this initiative are promising with participant surveys indicating increased confidence in applying EO tools to their areas of interest. Furthermore, the initiative has highlighted the importance of tailoring content to diverse user groups, recognizing that the needs of professionals in government agencies may differ significantly from those in NGOs or academia. 
Key lessons learned include the value of blending technical instruction with contextual examples to bridge the gap between theory and application, the importance of iterative feedback from participants to refine training materials, and the need for ongoing support and mentorship to ensure sustained use of EO tools after initial exposure. SERVIR’s experience underscores the critical role of strategic partnerships and innovative educational models in advancing EO literacy. By integrating EO tools into both traditional academic programs and professional training platforms, SERVIR is fostering a new generation of decision-makers equipped to harness the full potential of EO data. This approach not only addresses immediate capacity gaps but also lays the groundwork for sustained, scalable impact across diverse sectors and geographies. In this presentation, we will delve deeper into the Curriculum Development Initiative Framework, share detailed outcomes from participant evaluations, and discuss the broader implications of our work for cultivating EO literacy globally. By sharing these insights, we aim to inspire other organizations to adopt similar approaches, contributing to a more EO-literate workforce capable of addressing today’s pressing environmental and societal challenges.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: The ESA Stakeholder Engagement Facility

Authors: Phillip Harwood, Michelle Hermes, Diego Carcedo, Francesca Elisa Leonelli
Affiliations: Evenflow, EARSC, SERCO, ESA-ESRIN
The European Space Agency (ESA) funds a broad portfolio of user-focused application projects across a wide range of topics, including food security, ecosystem monitoring and SDG indicators. Each of these projects typically works together with a set of early adopters to ensure that the project’s outputs are tailored to real user needs and respond to real policy requirements. However, ESA’s experience has shown that there are several areas where stakeholder interaction could be improved:
- Engaging with stakeholders in the context of a single project misses opportunities to interest them in a wider range of multi-project solutions that could meet their needs.
- Many projects produce outputs which are of interest to a wider group of stakeholders than those originally involved in the project.
- When projects end, interactions with their stakeholders are not systematically maintained. Stakeholders often find that services they are interested in are not maintained, or that support is no longer available.
To help address these issues ESA has created the Stakeholder Engagement Facility (SEF), with the aim of maintaining and expanding engagement with a diverse range of user communities. The SEF is run under contract from ESA by a consortium consisting of Evenflow (Belgium), SERCO (Italy and Czechia) and EARSC (Belgium). The SEF initially focuses on four priority themes: Food Systems; Ecosystems and Biodiversity; Carbon, Energy and the Green Transition; and Sustainable Development Goals. The SEF was kicked off in November 2023 and started full operations in May 2024. Since then it has performed a series of user-focused activities, aiming not to duplicate existing ESA efforts but instead to bring in new stakeholders. Rather than following a predefined set of activities, the aim is to work with the community to identify their needs and blocking points, then to define an action plan to address these. 
Each community has different needs: in some cases the blocking point is simply lack of awareness of the tools available; for others there may be a need for training and capacity building; while in other cases the tools are understood but there is a need to establish trust in the outputs. In many cases the blocking points are not technical but relate to management or legal issues. The SEF adapts its activities according to the state of the community, and over the last year has performed activities including:
- Providing presentations and demonstration desks at events of the target communities, working on the principle of going to the users rather than expecting the users to come to the EO community.
- Organising a series of webinars presenting EO-based services to stakeholders in the fields of ecosystem conservation and city management.
- Providing bespoke training to users to get them started in the use of EO-based tools.
- Undertaking other tasks identified by the community as being needed, such as compiling inventories of EO-based services.
The SEF works closely together with another project (APEx: Application Propagation Environment), which ensures the continued availability of the data, tools and services developed by different projects. Taken together, the two projects ensure that users of ESA-funded projects should no longer experience a sharp transition at the end of a project lifetime, allowing time to build up the case for operational use of the services developed. In addition, ESA has also identified a need for an improved mapping of policies and stakeholders, to help ensure that ESA’s funds are effectively directed towards meeting key policy needs. The SEF provides a tool in which the relations between key policies and stakeholders are mapped, also covering thematic areas and project outputs, allowing an improved understanding of the overall landscape that EO-based tools are addressing. 
Over time this should allow the SEF to better target its activities towards those stakeholders that are most relevant for the desired policy outcomes.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: D.01.04 - POSTER - Using Earth Observation to develop Digital Twin Components for the Earth System

Climate change represents one of the most urgent challenges facing society. Its impacts on the Earth system and society, including rising sea levels, increasing ocean acidification, and more frequent and intense extreme events such as floods, heat waves and droughts, are expected not only to affect different economic sectors and natural ecosystems, but also to endanger human lives and property, especially for the most vulnerable populations.

The latest advances in Earth Observation science and R&D activities are opening the door to a new generation of EO data products, novel applications and scientific breakthroughs, which can offer an advanced and holistic view of the Earth system, its processes, and its interactions with human activities and ecosystems. In particular, those EO developments together with new advances in sectorial modelling, computing capabilities, Artificial Intelligence (AI) and digital technologies offer excellent building blocks to realise EO-based Digital Twin Components (EO DTCs) of the Earth system. These digital twins shall offer high-precision digital replicas of Earth system components, boosting our capacity to understand the past and monitor the present state of the planet, assess changes, and simulate the potential evolution under different (what-if) scenarios at scales compatible with decision making.

This session will feature the latest developments from ESA’s EO-based DTCs, highlighting:
- Development of advanced EO products
- Integration of EO products from a range of sensors
- Innovative use of AI and ML
- Advanced data assimilation
- Development of tools to address the needs of users and stakeholders
- Design of system architecture
- Creation of data analysis and visualization tools

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A digital twin of Svalbard’s cryosphere (SvalbardDT)

Authors: William Harcourt, Georgios Leontidis, Dr Eirik Malnes, Dr Robert Ricker, Ward Van Pelt, Veijo Pohjola, Adrian Luckman, Noel Gourmelen, Livia Jakob, Ashley Morris, Morag
Affiliations: University of Aberdeen, NORCE Norwegian Research Centre, Uppsala University, Swansea University, University of Edinburgh, EarthWave, Svalbard Integrated Arctic Earth Observing System (SIOS)
The Svalbard archipelago, which sits at the boundary between the warm midlatitudes and the cold polar region, is warming six times faster than the global average. This is driving mass loss from glaciers, diminishing sea ice extent, and reducing seasonal snow cover, significantly altering the interconnected systems within Svalbard’s cryosphere. Furthermore, Svalbard is considered a super site of in situ observations in a pan-Arctic context owing to the permanent infrastructure and long history of international scientific collaboration on the archipelago. Combined with the recent exponential increase in satellite data, now is the time to exploit this dense set of observations to build improved digital representations of the Arctic cryosphere. In this contribution, we will report on the development of a Digital Twin Component (DTC) of Svalbard’s cryosphere (SvalbardDT) as part of the Destination Earth (DestinE) initiative. The development of a DTC that can map the current state of the cryosphere and analyse the physical processes interconnecting the different sub-systems has profound implications for marine and terrestrial decision-making, as well as for our understanding of the fundamental physical processes that govern Svalbard’s cryosphere. The DTC will be optimised using Svalbard’s extensive observational record, which will enable us to undertake a thorough validation of the DTC outputs. We will construct a new DTC of the ice and snow of Svalbard’s cryosphere in the 21st century through an automated data management system that ingests, harmonises, and analyses data products ready for delivery to the Digital Twin Earth Service Platform (DESP). Earth Observation (EO) data products describing glacier dynamics, snow cover, and sea ice variability, combined with atmospheric reanalysis data, will be ingested into our DTC. These data products are multi-modal, i.e. they are collected at different resolutions, scales, and spatial/temporal coverages. 
Therefore, the DTC will utilise a deep learning approach to ingest the relevant data products and harmonise them into a 4D data cube whose dimensions are the data set variable, the x- and y-dimensions, and time. Our aim is to generate weekly data cubes describing 16 parameters. With spatially modelled data cubes, we will next initiate a feedback loop to train the DTC using multi-modal learning. After building the DTC infrastructure and AI models, we will focus on two case studies. Firstly, we will study the impacts of extreme weather events on Svalbard’s cryosphere, such as rain on snow and ice, by analysing ‘emergent behaviour’ from the AI models that may elucidate new understanding of these physical processes. The second use case will focus on developing the DTC as a tool for optimising marine and terrestrial navigation across Svalbard and its associated waters in response to changing snow and ice conditions; this will involve engagement with local stakeholders. We will present the architectural design of our DTC, the data products used, and the first results of the AI models used to harmonise these multi-modal data sets.
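The harmonisation into weekly 4D cubes can be illustrated with a toy example: regrid a coarse field onto a common grid by nearest neighbour and stack variables along a (time, variable, y, x) layout. All grids and values below are invented; the actual SvalbardDT pipeline uses deep learning to harmonise 16 parameters:

```python
# Illustrative sketch of the harmonisation step: regrid a coarse 2x2 field
# onto a common 4x4 grid by nearest neighbour, then stack fields into a
# (time, variable, y, x) cube as nested lists. Values here are invented.
def regrid_nearest(field, out_h, out_w):
    """Nearest-neighbour upsampling of a small 2D list-of-lists grid."""
    in_h, in_w = len(field), len(field[0])
    return [[field[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)] for r in range(out_h)]

# A coarse 2x2 snow-cover-fraction field (hypothetical values).
snow_coarse = [[0.2, 0.8],
               [0.5, 1.0]]
snow = regrid_nearest(snow_coarse, 4, 4)

# A 4x4 field already on the common grid (hypothetical sea-ice concentration).
sea_ice = [[0.0] * 4 for _ in range(4)]

# One weekly time step of the cube, dimensions (time, variable, y, x).
cube = [
    [snow, sea_ice],  # variables for week 1
]
print(len(cube), len(cube[0]), len(cube[0][0]), len(cube[0][0][0]))  # 1 2 4 4
```

In practice such cubes are held as labelled multi-dimensional arrays (e.g. with xarray), and the regridding step is where the multi-modal inputs at different native resolutions are brought onto the cube's common grid.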

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: The IRIDE Cyber Italy project: an enabling PaaS for Digital Twin Applications

Authors: Stefano Scancella, Fabio Lo Zito, Stefano Marra, Davide Foschi, Fabio Govoni, Simone Mantovani
Affiliations: Serco Italia S.p.A., CGI Italia S.r.l., MEEO S.r.l.
The IRIDE Cyber Italy project represents a significant national step forward in developing and implementing Digital Twins (DT) of the Earth, leveraging Earth Observation data and cloud technologies and services to build a scalable and interoperable reference Framework enabling the use of DTs in diverse thematic domains. As part of the Italian EO space program funded by the European Union's National Recovery and Resilience Plan (PNRR) and managed by the European Space Agency (ESA) in collaboration with the Italian Space Agency (ASI), the project demonstrates Italy’s commitment to advancing EO applications and fostering digital innovation. The Cyber Italy Framework aims to provide an enabling Platform as a Service (PaaS) solution to exploit Digital Twin capabilities, with practical applications in fields such as risk management, environmental monitoring, and urban planning. A Digital Twin, as a digital replica of Earth, integrates data-driven models to simulate natural and human processes, thereby allowing advanced analyses, predictive capabilities, and insights into the interactions between Earth's systems and human activities. SERCO leads the consortium, composed of e-GEOS, CGI, and MEEO. Phase 1 of the project, completed in 2024 after 12 months, focused on prototyping a hydro-meteorological Digital Twin, showcasing the power of a DT framework and its application to flood simulation and management. Phase 2, ongoing and lasting an additional 12 months, evolves the framework prototype into a pre-operational system by:
• enhancing the Framework’s scalability, elasticity and interoperability;
• setting up a DevOps environment over a cloud-based infrastructure;
• demonstrating the usability of the Framework by integrating an additional DT (Air Quality Digital Twin), developed by a third party. 
The final Phase 3, lasting 10 months and ending in 2026, will focus on the full operationalization of the Framework as a platform for the integration of any additional DTs, to expand thematic coverage. The project adopts a cloud-native, container-based architecture, leveraging the continuous integration, delivery and deployment (CI/CD) approach to ensure efficient updates and system adaptability. The infrastructure, based on OVHcloud technologies, is designed to support both horizontal and vertical scalability and elasticity, allowing it to handle increasing data volumes and concurrent user sessions seamlessly through Kubernetes-based orchestration. The Digital Twin framework is powered by Insula, the CGI Earth Observation (EO) Platform-as-a-Service, which has been successfully implemented in various ESA projects, including DestinE DESP. Insula provides a comprehensive suite of APIs designed to support hosted Digital Twins (DTs) with functionalities such as data discovery, data access, processing orchestration, and data publishing. Beyond these foundational capabilities, Insula also enables the seamless integration of custom processors, allowing users to extend the platform's analytical capabilities to meet specific project requirements. Complementing its robust APIs, Insula offers an advanced user interface tailored for complex big data analytics. This UI leverages a scalable and cloud-native backend, empowering users to perform intricate analyses efficiently and at scale, making Insula a key technology for operationalizing Digital Twin frameworks. Interoperability is a key concept of the Cyber Italy Framework, facilitated by the integration into the Framework of the ADAM platform developed by MEEO, which adopts both Harmonised Data Access (HDA) and Virtual Data Cube (VDC) approaches, ensuring consistent and fully customizable handling of input data, supporting the integration of distributed data sources and diverse DTs while enhancing long-term flexibility. 
ADAM is widely adopted as a key technology within relevant European Commission initiatives (WEkEO, DestinE Service Core Platform, …) and ESA projects (ASCEND, HIGHWAY, GDA APP, …) to generate and deliver Analysis Ready Cloud Optimised (ARCO) products to support multi-domain and temporal analyses. One of the key features of the Cyber Italy Framework is the ability to define and implement "what-if" scenarios, which provide stakeholders with critical tools to simulate conditions, predict outcomes, and make data-driven decisions. These scenarios are instrumental in addressing challenges like hydro-meteorological events, offering precise predictions of flood risks or air quality, such as estimates of emissions or traffic pollution, enabling more effective planning and response strategies. The goal of the IRIDE Cyber Italy project is to create a robust and versatile digital ecosystem that integrates cutting-edge EO technologies and demonstrates the potential of Digital Twins in supporting a sustainable Earth system and environmental management. By leveraging cloud-native architectures and emphasizing standardization and scalability, the IRIDE Cyber Italy project is creating a versatile platform for DTs. This project represents a crucial step towards a comprehensive framework capable of supporting a wide range of Digital Twins. Future applications could extend the use of Digital Twins to a wide range of sectors, such as urban planning, agriculture, and natural resource management, contributing to the global vision of using EO technologies to advance Earth system understanding and management.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Sentinel-3 OLCI observation-based digital twin component for aquatic carbon in the land-sea continuum

Authors: Martin Hieronymi
Affiliations: Helmholtz-Zentrum Hereon
Water constituents exhibit diverse optical properties across ocean, coastal, and inland waters, which alter the remote-sensing reflectance obtained via satellites. Optical water type (OWT) classifications used in satellite data processing aim to mitigate this optical complexity by identifying fitting "ocean" color algorithms tailored to each water type. This facilitates comprehension of biogeochemical cycles ranging from local to global scales. We present a novel neural network- and OWT-based processing chain for Sentinel-3 OLCI data of the aquatic environment. Using a data set of daily-aggregated Sentinel-3A & 3B OLCI data of the entire North Sea and Baltic Sea region, with adjacent land areas, for the period June to September 2023, we introduce the retrieved optical properties and their relationships with concentrations of water constituents, for example dissolved and particulate organic carbon. Moreover, we show the great potential of a novel OWT analysis tool for differentiating phytoplankton diversity, understanding aquatic carbon dynamics, and assessing the uncertainties of satellite products. The OWT analysis can be used directly to draw conclusions about the trophic state of lakes (albeit based on colour and not on a concentration range) or about potentially harmful algal blooms, e.g. intense cyanobacteria blooms in the Baltic Sea. This would provide a handle for possible warnings for bathing waters, drinking water treatment, or the fishing and aquaculture industry. Together with additional information, e.g. on water depth, the OWT analysis also serves to develop meaningful new flags for areas where the model assumptions and algorithms are not valid. In the future, it will also be important to demonstrate the performance of satellite retrievals through water type-specific validation. 
The presented data set with OWT analysis can serve as a blueprint for a holistic view of the aquatic environment and some pools of carbon and is a step towards an observation-based digital twin.
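As a toy illustration of the OWT idea described above, the following sketch assigns a reflectance spectrum to its most similar water-type class by spectral angle. The two "centroids" are invented placeholders: the actual OLCI processing chain, its class set, and its neural-network retrievals are not reproduced here.

```python
import numpy as np

def spectral_angle(a, b):
    """Angle (radians) between two spectra; smaller means more similar shape."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def classify_owt(rrs, centroids):
    """Return the label of the OWT centroid closest in spectral angle."""
    return min(centroids, key=lambda k: spectral_angle(rrs, centroids[k]))

# Invented 4-band class means, not the classes used in the processing chain.
centroids = {
    "clear":  np.array([0.010, 0.008, 0.004, 0.001]),  # blue-dominated water
    "turbid": np.array([0.004, 0.008, 0.012, 0.009]),  # red/NIR-elevated water
}
label = classify_owt(np.array([0.009, 0.007, 0.004, 0.001]), centroids)
```

A real implementation would use the full OLCI band set and fuzzy class memberships rather than a hard nearest-centroid assignment.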
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Forest Digital Twin – From TLS data to 3D tree representation for Radiative Transfer Modelling

Authors: Tomáš Hanousek, Jan Novotný, Barbora Navrátilová, Růžena Janoutová
Affiliations: Global Change Research Institute CAS, Department of Geography, Faculty of Science, Masaryk University
The concept of a forest digital twin represents a transformative approach to understanding and managing forest ecosystems by bridging the gap between field data and predictive models. Terrestrial Laser Scanning (TLS) provides high-resolution, three-dimensional (3D) data on forest structure, enabling detailed and precise reconstruction of individual trees. Converting these data into digital 3D tree representations enables the simulation of ecosystem processes with Radiative Transfer Models (RTMs) within various parts of the forest and the calibration of the influence of tree structure on model outputs. These simulations play an important role in studying forest metrics, radiative transfer, and energy exchange within forests. The key challenge is to develop a streamlined workflow from TLS data acquisition to the production of accurate and usable 3D tree representations for RTM applications. We introduce a comprehensive workflow for the reconstruction of 3D tree representations from TLS data, based on already-tested practices combined with new approaches.
The workflow was tested on 9 plots in the Těšínské Beskydy region, Czech Republic, where TLS data were acquired with a Riegl VZ-400 scanner, and employs the following steps:

• LAI estimation: from the acquired TLS data, we estimated the Leaf Area Index (LAI) using the VoxLAD model and extracted its value for individual trees.
• Tree separation: we used a 3D graph-based method to isolate individual trees, optimised for Central European forests.
• Foliage and wood segmentation: we labelled the TLS data using semantic segmentation with a deep-learning algorithm.
• Branch structure: we used Treegraph for deciduous trees and our own algorithm for coniferous trees.
• Leaf distribution: our own algorithm uses the LAI value to place the appropriate number of leaves in space.

The reconstructed 3D tree representations are ready for use in scalable RTMs, such as the Discrete Anisotropic Radiative Transfer (DART) model. The workflow allows users to define their own LAI values and select different leaf models to simulate different conditions. In addition, the leaf models provided in the workflow are optimised to balance accuracy and computational efficiency, ensuring that they can be used in computationally demanding scenarios without compromising the reliability of the simulation results; they can, however, be replaced with more detailed leaf models to further improve accuracy. By integrating advanced segmentation, structural modelling, and leaf spatial-distribution techniques, the proposed workflow combines innovative methods to accurately reconstruct individual trees in the forest and to develop a forest digital twin for RTM applications.
This approach represents a significant advance in ecological modelling and forest management, providing a reliable basis for studying canopy dynamics and forest metrics in complex forest systems.
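The staged structure of the workflow can be sketched schematically as a pipeline; every stage function below is a placeholder standing in for the real tools (VoxLAD, the graph-based separation, the segmentation network, Treegraph, the leaf-placement algorithm), and the values are invented.

```python
def run_workflow(point_cloud, stages):
    """Apply each named processing stage in order, accumulating results."""
    state = {"points": point_cloud}
    for name, stage in stages:
        state[name] = stage(state)
    return state

# Hypothetical stand-ins for the five steps described in the abstract.
stages = [
    ("lai",      lambda s: 3.2),                        # per-tree LAI (VoxLAD)
    ("trees",    lambda s: ["tree_1", "tree_2"]),       # graph-based separation
    ("labels",   lambda s: {"leaf": 0.6, "wood": 0.4}), # semantic segmentation
    ("branches", lambda s: "qsm"),                      # Treegraph / custom algorithm
    ("leaves",   lambda s: int(1000 * s["lai"])),       # leaf count scaled by LAI
]
result = run_workflow([], stages)
```

The point of the sketch is only the data flow: later stages (here, leaf placement) consume outputs of earlier ones (the LAI estimate), which is what makes a streamlined end-to-end workflow non-trivial.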
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A DTC Urban - SURE Smart Urban Resilience Enhancement.

Authors: Jan Geletič, Manuela Ferri, Aniello De Luca, GABRIELE MURCHIO, Alessia Tricomi, Daniele Oxoli, Prof Maria Antonia Brovelli, Patrick Matgen, Marco Chini
Affiliations: e-GEOS, Politecnico di Milano, LIST - Luxembourg Institute of Science and Technology
SURE has been proposed in the framework of the ESA-DTE-B-02 EARLY DTCS DEVELOPMENT ACTIONS. It is managed by e-GEOS with a consortium including Politecnico di Milano, the Luxembourg Institute of Science and Technology and Stefano Boeri Architetti. The project focuses on two use cases:

• Modelling the effects of Urban Heat Islands (UHIs) in a test area of Milan, addressing both city-scale and neighbourhood-scale impacts.
• Simulating flood effects in a suburban area of Luxembourg (Urban Floods).

Concerning the UHI use case, city-scale simulations aim to demonstrate the capability to provide insights into macro-changes in Land Surface Temperature (LST) triggered by extensive interventions (e.g., green-area expansions, urban forestation). LCZ (Local Climate Zone) and UHI maps will be generated from Sentinel-2 data with different Urban Canopy Parameters and from MODIS/Landsat-8/9 thermal data, respectively. A simulation workflow will be set up based on urban vegetation/built-up characteristics to replicate the LST response. The neighbourhood scale will instead be addressed through urban microclimate modelling, leveraging the PALM model system (https://palm.muk.uni-hannover.de). The Urban Flood use case focuses on creating a flood risk assessment framework using EO data (e.g., Sentinel-1 SAR), AI algorithms, and climate models to predict flood hazards in Luxembourg's Alzette and Sure river floodplains. It integrates precipitation, temperature, soil moisture, and river discharge data to produce flood hazard and risk maps. An ensemble approach addresses uncertainties, with results displayed on a dashboard to evaluate impacts on infrastructure. Both use cases will be articulated in a set of "what-if" scenarios, according to the user requirements collected during the first steps of the project.
For each scenario a dedicated Digital Twin Component environment will be realized, giving final users a well-organized and complete framework in which the modelling results can be visualized and combined to derive further information. The project addresses the needs of different user categories, such as private entities (banks, insurances, professionals, urban planners) and public administrations involved in the management of the territory. Stakeholders already involved in the project include:

• for the UHI use case: Città Metropolitana of Milan, and Studio Boeri Architetti (partner and stakeholder);
• for the Flood use case: Spuerkees Bank.

For the UHI use case we have also registered the active interest of the Municipality of Milan, and the consortium is getting in touch with Luxembourg public administrations involved in water management and civil protection in order to widen stakeholder participation. SURE will make wide use of satellite datasets, jointly with all other useful available datasets, in all phases of the project, from the training and validation of the models to the realization of the DT environment. What is realized during the project will be compliant with the specifications of the DestinE Core Service Platform (DESP) for future integration.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Digital Twin Earth for Climate Change Adaptation: Downscaling for Human Activities

Authors: Prof. Mihai Datcu, Research Assoc. Prof. Vaduva Corina, Dr. Ing. Alina Radutu, Prof. Eden Mamut, Prof. Ionescu Constantin, Prof. Liviu
Affiliations: POLITEHNICA Bucharest
The presentation introduces the results of the recently established national Competence Center for Climate Change Adaptation in Romania. Climate models describe changes at scales of thousands of kilometers and for periods of centuries; adaptation measures, however, must be applied at the scale of human activities, from a few meters to kilometers and for periods of days to months. It is in the scope of the project to contribute to the EC Adaptation to Climate Change Agenda with specific implementation measures using coupled models across domains and spatiotemporal scales. The Competence Center will promote in its agenda the opportunities offered by the availability of Big EO Data, with a broad variety of sensing modalities, global coverage, and more than 40 years of observations. The project is in line with the "Destination Earth" initiative (DestinE), which promotes the use of Digital Twins (DTs), and with the EC-ESA Joint Initiative on Space for Climate Action, which develops new AI4EO paradigms. The Competence Center activities are supported by an actionable digital medium: a system of federated DTs. The DTs implement virtual, dynamic models of the world, continuously updated, enabling simulations while providing more specific, localized and interactive information on climate change and how to deal with its impacts. These DTs will also be a tool to interact widely with local administrations, and directly with people, raising awareness and amplifying the use of existing climate and EO data and knowledge services for the elaboration of local and specific adaptation measures. That is a step towards a citizen-driven approach with an increased societal focus.
The Competence Center is currently implementing 5 DTs in 5 synergetic projects: "Artificial Intelligence in Earth Observation for Understanding and Predicting Climate Change" (AI4DTE), "Active Measures for Restoring Sweet-Water Lakes and Coastal Areas affected by Eutrophication addressing the Enhancement of Resilience to Climate Change and Biodiversity" (Act4D-Eutrophication), "Exploitation of Satellite Earth Observation data for Natural Capital Accounting and Biodiversity Management" (EO4NATURE), "The Research centEr for climAteChange due to naTuraldIsasters and extreme weather eVEnts" (REACTIVE), and "Assessing climate change impact on the vector-borne diseases in the One-Health context" (VeBDisease). Together, these form a federated DT system. A first DT Earth (DTE) will maximize the information extracted from EO data; its methodology is focused on hybrid physics-informed AI methods, time series, prediction, causality discovery, and a "what-if" engine to monitor, forecast or simulate climate change effects. A second DT will model the eutrophication of freshwater lakes and the waters of the western Black Sea coast. A third DT produces adaptation knowledge for actions aiming to protect plant biodiversity and ecosystems. A fourth DT will monitor coupled atmosphere-hydrosphere-lithosphere processes; it will provide for the first time an integrated view of how climate-change-stimulated phenomena can be monitored using seismic sensor networks. A fifth DT will model, anticipate and help fight vector-borne emerging animal and zoonotic infectious diseases, implementing an integrated One-Health approach that considers the links between human health, animal health, and environmental health. The use case to be presented covers the region of Dobrogea, encompassing the Black Sea coast from Suitghiol lake to the Danube Delta and the Babadag forest, a very diverse region where the complementarity of the DTs is demonstrated.
The initial data comprise multi-annual Satellite Image Time Series from Sentinel-1 and Sentinel-2, GEDI measurements, continuous infrasound and seismic records, GNSS data, in-situ water quality measurements, in-situ biodiversity parameters, in-situ information on mosquito species and pathogens, wind maps, and meteorological data. The analysis is based on the fusion and joint analysis of the spatio-temporal patterns of environmental parameters estimated from the collected data and on predictions from DNN models: water biological and chemical parameters, canopy height, spectral indices, land cover classes, wind speed prediction at low altitude, sea currents and wind speed, and the detection and characterisation of extreme weather effects. The coupled DT system will support the scope of the Competence Center to promote a geographically diverse approach, involving various regions and communities, following a systemic approach converging several cross-modality themes and areas of innovation, implemented as an inclusive methodology to bring together public administrations, the private sector, civil society, and finally the citizens themselves.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Fields of The World and fiboa: Towards interoperable worldwide agricultural field boundaries through standardization and machine-learning

Authors: Matthias Mohr, Michelle Roby, Ivor Bosloper, Hannah Kerner, Prof. Dr. Nathan Jacobs, Caleb Robinson
Affiliations: Taylor Geospatial Engine, Arizona State University, Washington University in St. Louis, Microsoft
In this talk, we present two closely related initiatives that aim to facilitate datasets for worldwide standardized agricultural field boundaries: the fiboa data specification and the Fields of The World (FTW) benchmark dataset and models. Both initiatives work in the open and all data and tools are released under open licenses. Fiboa and FTW emerged from the Taylor Geospatial Engine’s Innovation Bridge Program’s Field Boundary Initiative [1]. This initiative seeks to enable practical applications of artificial intelligence and computer vision for Earth observation imagery, aiming to improve our understanding of global food security. By fostering collaboration among academia, industry, NGOs, and governmental organizations, fiboa and FTW strive to create shared global field boundary datasets that contribute to a more sustainable and equitable agricultural sector. Field Boundaries for Agriculture (fiboa) [2] is an initiative aimed at standardizing and enhancing the interoperability of agricultural field boundary data on a global scale. By providing a unified data schema, fiboa facilitates the seamless exchange and integration of field boundary information across various platforms and stakeholders. At its core, fiboa offers an openly developed specification for representing field boundary data using GeoJSON and GeoParquet formats. This specification has the flexibility to incorporate optional 'extensions' that specify additional attributes. This design allows for the inclusion of diverse and detailed information pertinent to specific use cases. In addition, fiboa encompasses a comprehensive ecosystem that includes tools for data conversion and validation, tutorials, and a community-driven approach to developing extensions. This allows a community around a specific subject to standardize datasets. By using datasets with the same extensions, the tools can validate attribute names, coding lists, and other conventions. 
The fiboa initiative goes beyond providing specifications and tooling by developing over 40 converters for both open and commercial datasets [3]. These converters enable interoperability between diverse data sources by transforming them into the fiboa format. This significant effort ensures that users can integrate and utilize data more efficiently across different systems and platforms. All open datasets processed through this initiative are made freely accessible via Source Cooperative [4], an open data distribution platform. Fields of The World (FTW) [5] is a comprehensive benchmark dataset designed to advance machine learning models for segmenting agricultural field boundaries. Spanning 24 countries across Europe, Africa, Asia, and South America, FTW offers 70,462 samples, each comprising instance and semantic segmentation masks paired with multi-date, multi-spectral Sentinel-2 satellite images. Its extensive coverage and diversity make it a valuable resource for developing and evaluating machine learning algorithms in agricultural monitoring and assessment. FTW also provides a pretrained machine learning model for performing field boundary segmentation. This model is trained on the diverse FTW dataset, enabling it to generalize effectively across different geographic regions, crop types, and environmental conditions. Additionally, ftw-tools - a set of open-source tools accompanying the benchmark - simplifies working with the FTW dataset by providing functions for download, model training, inference, and other experimental or explorative tasks. Fiboa (Field Boundaries for Agriculture) and Fields of The World (FTW) complement each other in advancing agricultural technology. fiboa provides a standardized schema for field boundary data. FTW, with its benchmark dataset and pretrained machine learning model, generates field boundary data from satellite imagery to fill global data gaps. 
FTW’s source polygons used to create the benchmark dataset and the output ML-generated field boundaries are fiboa-compliant. Together, the two projects form a powerful ecosystem: fiboa ensures data consistency and usability, while FTW supplies the tools and insights to produce and refine this data. This synergy supports precision farming, land use analysis, land management, and food security efforts, driving innovation and sustainability in agriculture worldwide. The vision is to develop a continuously evolving global field boundary dataset by combining the open field boundaries converted into the fiboa format with the output datasets generated by FTW.

References:
[1] https://tgengine.org
[2] https://fiboa.org
[3] https://fiboa.org/map
[4] https://source.coop
[5] https://fieldsofthe.world
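To make the "unified data schema" idea concrete, here is a minimal, hypothetical field boundary expressed as a GeoJSON Feature in the spirit of fiboa. The property names are illustrative assumptions and should be checked against the published fiboa specification and its extensions.

```python
# A single field boundary as a GeoJSON Feature (illustrative, not the
# normative fiboa schema; property names here are assumptions).
feature = {
    "type": "Feature",
    "geometry": {
        "type": "Polygon",
        "coordinates": [[[5.00, 52.00], [5.01, 52.00], [5.01, 52.01],
                         [5.00, 52.01], [5.00, 52.00]]],
    },
    "properties": {
        "id": "field-0001",        # stable identifier for the field
        "area": 0.76,              # area in hectares (illustrative value)
        "source": "ml-generated",  # e.g. an FTW model output vs. cadastral data
    },
}

def has_closed_ring(feat):
    """GeoJSON polygons require the outer ring to start and end at the same point."""
    ring = feat["geometry"]["coordinates"][0]
    return ring[0] == ring[-1]
```

The converter and validation tools in the fiboa ecosystem perform far richer checks (attribute names, coding lists, GeoParquet encoding); this only illustrates the basic data shape being standardized.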
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Introduction to the Early Digital Twin Component EO4ER ("Earth Observation for Energy Risks")

Authors: Ingo Schoolmann
Affiliations: OHB Digital Services GmbH
In line with the goals of the ESA Digital Twin Earth Program, the EO4ER project ("Earth Observation for Energy Risks") contributes to the Early Digital Twin Component (DTC) objectives through the implementation of a DTC prototype for the energy sector. As solar energy is one of the leading renewable energy sources while also being affected by the temperature increase caused by climate change, the project focuses on two use cases for solar energy systems, resulting in an interactive digital reconstruction and simulation addressing:

• the impact of variable power production by photovoltaic systems on the active operation of low-voltage networks, anticipated through hour-scale solar power production forecasting based on the novel EO capabilities of the MTG satellite series; and
• operational risks for large PV plants due to extreme situations, together with long-term solar potential assessment based on climate projection datasets and what-if scenarios.

The targeted main stakeholders are individual prosumers as well as infrastructure, grid and distribution system operators. For the validation of the developed methods, a field test area in Germany covering more than 80 PV systems is considered. The scope of the EO4ER project is in line with the increased emphasis on renewable energy and zero emissions anticipated by the European Green Deal, as well as with national plans for clean energy transitions. The prime OHB Digital Services GmbH, together with Reuniwatt SAS and Technische Hochschule Ulm (THU), forms the EO4ER consortium.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Mirroring natural and anthropogenic phenomena with CyberItaly

Authors: Enrico Cadau, Mr Pascal Gilles, Rita Chirico, Luigi Conte, Luciano Rempicci, Irene Luongo, Dario Bottazzi, Marco Pischedda, Luca Benenati, Antonio Monteleone, Ivan Federico, Rosalia Maglietta, Martina Montinaro, Giovanni Coppini
Affiliations: ESA-ESRIN, Capgemini, Nais, CMCC, ATG-Europe for ESA
The increasing pollution, environmental fragility, climate change, and associated extreme weather events such as flash floods and persistent droughts are putting significant strain on our communities. To build a more sustainable future, common public policies and solutions are needed [1]. The integration of Digital Twin (DT) technology, big data analytics, AI, and Earth observation systems offers a promising approach to address these challenges. By leveraging these technologies, we can analyse complex phenomena, assess the impacts of various actions, and provide policymakers and civil protection agencies with crucial insights for timely and informed decision-making. The notion of the DT emerged in the early 2000s and is gaining popularity in different branches of engineering [2]. In Earth science domains, a DT [3] is defined as an information system that exposes users to a digital replication of the state and temporal evolution of the Earth system, constrained by available observations and the laws of physics. DTs can be viewed as a new class of simulations based on expert knowledge and on data continuously gathered from the physical system to accurately represent the phenomena of interest at different scales of time and space [4]. DTs allow users to explore hypothetical simulation scenarios and engage in complex "what-if" analyses to improve our understanding and prediction and, consequently, to extend our ability to reduce natural and anthropogenic risks [5]. DTs require the availability of high-quality data to capture the complexity of reality. Early DT projects like ESA's Destination Earth [5] and NASA's Earth System DT [6] primarily relied on satellite EO data. The CyberItaly project is part of the IRIDE programme initiated under the framework of Italy's National Recovery and Resilience Plan (PNRR).
It introduces an innovative approach to data collection that integrates traditional satellite-based observations with ground-level sensor data collected from multiple sources, including regional and municipal institutions. This approach opens the possibility of creating more comprehensive digital representations of complex systems. Available DT implementations typically involve the simulation of complex chemical-physical and (multi-)physics processes. These simulations require significant time and computational resources to complete, and advanced data assimilation techniques must be adopted to integrate heterogeneous observational data with physical system simulations and to generate comprehensive state estimations. CyberItaly follows a different approach and fosters the adoption of surrogate models based on machine learning (ML) techniques. The ML models are trained using synthetic data generated through complex simulations and make it possible to represent common system scenarios with acceptable fidelity. Running the ML models requires limited time and resources, and ML-based simulations enable users to gain comprehensive insights into system complexity. This enables the exploration of different public policies to increase the safety and sustainability of the environment. After thoroughly exploring various scenarios through the surrogate model, users can validate their findings using traditional high-fidelity simulations. In this paper we present these ideas and introduce case studies in the fields of air quality management and coastal protection.

• Air Quality DT: this DT forecasts air pollution from traffic in metropolitan areas. It gathers data from various sources such as traffic, road maps, weather, and elevation maps. The system creates an emission model for each street and uses a kernel-based surrogate model to estimate pollutant diffusion.
It also allows the analysis of hypothetical scenarios to assess the impact of traffic restriction policies, vehicle fleet evolution and buildings on pollutant dispersion. So far, we have applied this DT to the urban areas of Bologna and Genova, which benefit from pollutant dispersion maps computed at 20-meter resolution or better, with both hindcast and forecast capabilities implemented.
• Coastal Protection DT: this DT forecasts erosion, flooding, sediment transport and water quality. The system leverages cutting-edge multi-resolution EO data together with in-situ observations and models from the Copernicus Marine Service (CMS) and EMODnet. Advanced modeling approaches, including wave, ocean circulation, sediment transport and coastal inundation models, simulate complex environmental processes, while an AI-based emulator enables the rapid generation of "what-if" scenarios. These features empower stakeholders to evaluate the impacts of coastal restoration and Nature-Based Solutions (NBS), infrastructure modifications and climate adaptation measures in near-real time, ensuring timely and effective decision-making. The DT is currently applied to two pilot areas in Italy, the Rimini coastline and the Manfredonia zone.

References
[1] IPCC, "Climate Change 2023: Synthesis Report. Contribution of Working Groups I, II and III to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change," H. Lee and J. Romero (eds.), IPCC Technical Report, Geneva, Switzerland, 2023.
[2] M. Grieves and J. Vickers, "Digital twin: Mitigating unpredictable, undesirable emergent behaviour in complex systems," in Transdisciplinary Perspectives on Complex Systems: New Findings and Approaches, F.-J. Kahlen, S. Flumerfelt, and A. Alves, Eds., Springer International, Aug. 2016, pp. 85–113.
[3] P. Bauer, B. Stevens, and W. Hazeleger, "A digital twin of Earth for the green transition," Nature Climate Change, vol. 11, Feb. 2021, pp. 80–83.
[4] T. Gabor, L. Belzner, M. Kiermeier, M. T. Beck, and A. Neitz, "A simulation-based architecture for smart cyber-physical systems," in Proc. IEEE International Conference on Autonomic Computing (ICAC), Würzburg, Germany, Jul. 2016, pp. 374–379.
[5] S. Nativi, P. Mazzetti, and M. Craglia, "Digital Ecosystems for Developing Digital Twins of the Earth: The Destination Earth Case," Remote Sensing, vol. 13, no. 11, May 2021.
[6] J. Le Moigne, "NASA's Advanced Information Systems Technology (AIST): Combining New Observing Strategies and Analytics Frameworks to Build Earth System Digital Twins," in Proc. IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Kuala Lumpur, Malaysia, Jul. 2022, pp. 4724–4727.
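The surrogate-model idea, fitting a cheap approximation to expensive simulator runs, can be illustrated with a tiny kernel-smoothing sketch. The 1-D "simulator" and its kernel width below are invented stand-ins; CyberItaly's actual emulators are trained on full physics simulations.

```python
import numpy as np

def simulator(x):
    """Stand-in for an expensive physics simulation (invented for this sketch)."""
    return np.sin(x)

# A handful of "expensive" training runs.
x_train = np.linspace(0.0, np.pi, 8)
y_train = simulator(x_train)

def rbf_predict(x, xs, ys, length=0.5):
    """Cheap surrogate query: Gaussian-kernel weighted average of known runs."""
    w = np.exp(-((x - xs) ** 2) / (2 * length ** 2))
    return float(np.sum(w * ys) / np.sum(w))

# Query the surrogate where no simulation was run.
estimate = rbf_predict(1.5, x_train, y_train)
```

Once the surrogate reproduces the simulator acceptably in the region of interest, many "what-if" queries can be answered at negligible cost, with high-fidelity runs reserved for validating the most promising scenarios.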
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Towards a Digital Twin for the Alps to simulate landslide occurrences for hazard adaptation strategies.

Authors: Jean-Philippe Malet, Clément Michoud, Thierry Oppikofer, Floriane Provost, Maxim Lamare, David Michéa, Aline Déprez, Michael Foumelis, Fabrizio Pacini, Philippe
Affiliations: Ecole et Observatoire des Sciences de la Terre / CNRS, Terranum Sàrl, Data-Terra / THEIA Continental Surfaces Data and Service Hub / CNRS, Sinergise Solutions Gmbh, School of Geology, AUTh / Aristotle University of Thessaloniki, Terradue, European Space Agency - ESA, Esrin
The Alps are the most densely populated mountain range in Europe and are particularly sensitive to the impacts of climate change, and thus to hydro-meteorological hazards such as landslides, floods, droughts and glacier-related processes; these phenomena are expected to increase in the near future and constitute a major threat to human activity. Indeed, over the last century, temperatures have risen twice as fast as the northern-hemisphere average, whereas precipitation has increased non-linearly and has become more discontinuous, with an increase in the number of extreme rainfall events. Because of the increasing pressure on human settlements and infrastructure, implementing hazard adaptation strategies from the local to the regional scale is a strong priority for policy-makers. To support and improve the decision-making process, numerical decision support systems may provide valuable information derived from multi-parametric observations (in-situ sensors, satellite data) and models, linked to computing environments, in order to better manage increasing risks. In this context, a demonstrator has been developed to simulate landslide occurrences by combining in-situ sensor data, satellite EO-derived products and process-based models. The demonstrator targets three applications:

1. Quantifying complex landslide motion from space by combining advanced InSAR analysis techniques (SNAPPING) and advanced optical offset-tracking techniques (GDM-OPT) to monitor low and high ground-motion rates, respectively.
2. Assessing and forecasting, with a daily lead time and at regional scale, the occurrence of heavy-rainfall-induced shallow landslides, in terms of slope-failure probability and sediment propagation towards the valleys.
3. Predicting the activity (e.g. velocity) of large, deep-seated and continuously active landslides under extreme rain events, using a combination of physics- and AI-based simulation tools.

The analysis and simulation tools have been embedded in the Digital Twin for the Alps (DTA) platform, together with advanced visualization tools (maps and time series) implemented specifically to ease exploration of the products for several categories of stakeholders. Use cases in southern Switzerland and southern France demonstrate the capabilities of the platform. The data, services and technologies used to provide tailored information to the landslide operational and science communities will be presented.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Digital Twin Component in Urban Flood Modelling - A Proof-of-Concept

Authors: Yu Li, Jefferson See Wong, Thanh Huy Nguyen, Joao Vinholi, Anis Amziane, Marco Chini, Patrick Matgen
Affiliations: Luxembourg Institute of Science and Technology
Urban flood forecasting faces growing challenges due to rapid urbanization, climate change, and the increasing frequency of extreme rainfall events. The integration of Digital Twin (DT) technologies in flood forecasting offers a transformative approach by creating real-time virtual replicas of urban systems that can simulate, predict, and mitigate flood risks with higher precision and adaptability. A core aspect of the Digital Twin in this study is its ability to combine real-time data with historical flood patterns to generate dynamic flood simulations. Through the integration of sensor data, such as rainfall and river gauge measurements, the DT system is capable of adjusting simulations as new data become available, thus improving accuracy over time. The system also incorporates urban infrastructure models, including drainage networks, building maps, and surface elevation data, to simulate water flow and identify vulnerable areas in the catchment. As part of the ESA project (Urban DTC/SURE), we explore the integration of a Digital Twin Component for urban flood modelling within the Alzette catchment, focusing on improved flood identification and risk assessment through a combination of Earth Observation (EO) data, hydrological and hydraulic modeling, and scenario analysis. The research presents innovative techniques to address urban flood hazards using advanced analysis of SAR and multispectral satellite data, calibrated with field measurements and enhanced with future climate change scenarios.
The methodological framework involves four components:

1. Leveraging high-resolution Sentinel-1 SAR data and Sentinel-2 multispectral imagery to identify and map buildings and floodwater in urban areas.
2. Calibrating a hydrological model using a blend of EO-derived data and field measurements, and simulating discharge as the boundary condition for the hydraulic model.
3. Calibrating a hydraulic model using the EO-derived maps from 1), high-resolution building maps, and the simulated discharge from 2), to provide more detailed predictions of flood inundation extent and water depth.
4. Assessing flood hazard and risk under different future climate change scenarios.

One of the novel aspects of this DTC is the exploration of "what-if" scenarios to evaluate flood hazard and risk under different future climatic conditions and mitigation measures. Using near-future and mid-term climate change projections, the research assesses the potential impacts of changing precipitation and temperature patterns on flood dynamics. This scenario-based approach allows the testing of adaptive strategies to mitigate the increased flood risks expected with climate change. Further "what-if" scenarios examine the effects of modifications to river geometry, such as changes to channel shape or bank reinforcement, on flood hazard and risk. These simulations help assess how modifications in river conveyance capacity influence urban flood behavior, providing critical insights for future urban infrastructure planning. Flood hazard maps have been pre-computed for each climate change and river conveyance scenario, providing a comprehensive toolset for estimating the impact of physical climate risks and floodplain development projects in the selected urban areas. To do this we make use of an inventory of physical assets prone to be affected by flooding.
The demonstration consists of displaying, for each scenario selected by the user, the hazard and risk maps and summarizing the associated risk metrics on a dashboard (e.g. number of persons affected, critical infrastructure impacted). A scenario consists of a time period (i.e. reference/near future/mid-term future), a Representative Concentration Pathway (i.e. RCP2.6, RCP4.5, RCP8.5) and a river conveyance change (i.e. increase or decrease of x%). The reference hazard and risk map corresponds to the 1981-2000 period. For all RCPs and river conveyance scenarios, changes with respect to the reference period will be highlighted on a map and dashboard as part of the demonstration. The research demonstrates the potential of combining EO data, hydrological modelling, and climate scenario analysis within a Digital Twin framework to improve urban flood modelling and risk management. The Alzette catchment serves as a testbed for validating this approach, offering a pathway for cities globally to adopt similar strategies for proactive flood mitigation, enhanced disaster preparedness, and informed urban planning in response to the growing threat of climate-induced urban flooding.
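The scenario-selection and dashboard logic described above can be sketched as follows. This is an illustrative toy, not the project's implementation: the function names, the depth threshold, and all numbers are hypothetical, and a real system would look pre-computed hazard maps up by the same (period, RCP, conveyance) key.

```python
import numpy as np

PERIODS = ("reference", "near_future", "mid_term_future")
RCPS = ("RCP2.6", "RCP4.5", "RCP8.5")

def make_key(period, rcp, conveyance_pct):
    """Validate a user selection and build the lookup key for a
    pre-computed hazard/risk map (hypothetical key scheme)."""
    if period not in PERIODS:
        raise ValueError(f"unknown period: {period}")
    if period != "reference" and rcp not in RCPS:
        raise ValueError(f"unknown RCP: {rcp}")
    return (period, rcp, conveyance_pct)

def dashboard_metrics(hazard_depth_m, population, depth_threshold=0.5):
    """Summarise a hazard (water-depth) map into simple risk metrics:
    cells flooded above a depth threshold and persons affected."""
    flooded = hazard_depth_m >= depth_threshold
    return {
        "flooded_cells": int(flooded.sum()),
        "persons_affected": float(population[flooded].sum()),
    }

# toy 3x3 water-depth map (m) and a co-located population grid
depth = np.array([[0.0, 0.2, 0.8], [1.1, 0.0, 0.4], [0.6, 0.0, 0.0]])
pop = np.full((3, 3), 10.0)
key = make_key("near_future", "RCP4.5", 10)
metrics = dashboard_metrics(depth, pop)
```

In practice the dashboard would also difference each scenario's metrics against the reference-period map, as the abstract describes.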
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Advancing water resources management and flood control merging earth observations and modelling reservoir behaviour in digital twins

Authors: Arjen Haag, Athanasios Tsiokanos, Albrecht H. Weerts
Affiliations: Operational Water Management, Deltares
Reservoirs play an important role in water security, flood risk, energy supply and natural flow regimes around the world. Reservoir area can be observed from space (e.g. see https://www.globalwaterwatch.earth/). Reservoir volumes and/or levels can be estimated, as for instance developed in the ESA project Surface-2-Storage (Winsemius and Moreno Rodenas, 2024). Reservoir storage/volume can also be simulated (e.g. van der Laan et al., 2024) using a fast advanced distributed hydrological model (Imhoff et al., 2024). We present results and ongoing activities related to ongoing digital twin projects, among others DTC Hydrology Next, where we integrate earth observations of reservoir storage, estimated from reservoir surface area, with modeled reservoir behavior. These observations help to get a better understanding of the actual situation (including reservoir operating rules) and help to provide actionable information for water resources management or flood control.
References:
- Winsemius, H.C. and A. Moreno Rodenas (2024). Surface-2-Storage - Final Report. Deltares, 11207650-002-ZWS-0020, 7 May 2024.
- van der Laan, E., P. Hazenberg and A.H. Weerts (2024). Simulation of long-term storage dynamics of headwater reservoirs across the globe using public cloud computing infrastructure. Science of The Total Environment, 10.1016/j.scitotenv.2024.172678.
- Imhoff, R.O., J. Buitink, W.J. van Verseveld and A.H. Weerts (2024). A fast high resolution distributed hydrological model for forecasting, climate scenarios and digital twin applications using wflow_sbm. Environmental Modelling & Software, 10.1016/j.envsoft.2024.106099.
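The core step of estimating storage from an observed surface area can be sketched with a simple area-storage rating curve. This is a minimal illustration, not the Surface-2-Storage implementation; the rating-curve numbers are made up.

```python
import numpy as np

# hypothetical rating curve (e.g. from bathymetry or joint altimetry/area
# observations): surface area (km^2) vs storage (million m^3)
area_km2 = np.array([0.0, 2.0, 5.0, 9.0, 12.0])
storage_mm3 = np.array([0.0, 15.0, 60.0, 150.0, 240.0])

def storage_from_area(observed_area_km2):
    """Interpolate reservoir storage from a satellite-observed surface
    area along the rating curve (linear interpolation between nodes)."""
    return float(np.interp(observed_area_km2, area_km2, storage_mm3))

est = storage_from_area(7.0)  # e.g. an area derived from optical/SAR imagery
```

A time series of such storage estimates can then be assimilated into, or compared against, the simulated reservoir storage of a distributed hydrological model.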
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Validation of geohazards products as part of the Digital Twin Component solution of the ESA GET-it project

Authors: Hugues Brenot, Nicolas Theys, Stefano Corradini, Arnau Folch, Daniela Fucilla, Gaetana Ganci, Fabrizio Pacini, Elisa Trasatti, Salvatore Stramondo
Affiliations: Royal Belgian Institute for Space Aeronomy (BIRA), Istituto Nazionale di Geofisica e Vulcanologia (INGV), Consejo Superior de Investigaciones Científicas (CSIC), Terradue
The development of services based on the exploitation of multi-sensor Earth observation data into models is essential to provide key information in the event of geohazards with an impact on people and society (like volcanic eruptions or earthquakes). The ESA Geohazards Early Digital Twin Component (GET-it) project is dedicated to the implementation of a prototype system designed to provide an interactive tool to users, e.g. institutional and commercial stakeholders. For two types of scenarios, seismic and volcanic crises, GET-it aims at providing solutions to users to help in decision making and eventually mitigate the impact of geohazards. To do so, GET-it relies on four modules, which provide information on surface deformation, damaging events, quantitative forecasts of volcanic ash/SO2 clouds, and the thermal and rheological evolution of lava flows. These modules are built on 10 toolboxes targeting surface deformation and topographic monitoring (5 toolboxes based on Interferometric Synthetic Aperture Radar – InSAR, Global Navigation Satellite System – GNSS, and optical imagery), damage (1 box based on a combination of imagery), volcanic cloud occurrence and characterisation (2 boxes based on thermal infrared – IR – data from geostationary sensors), ground thermal anomaly and lava flow monitoring (2 boxes based on medium IR data from polar orbiting and geostationary sensors). This presentation shows the first results of the validation of the GET-it system for 3 case studies (the 2018 eruption of Mount Etna, Italy; the 2016 earthquake sequence in central Italy; and the 2021 eruption of La Palma, Canary Islands, Spain). This validation is a two-step process. The first step is the validation, error analysis and uncertainty quantification of the output of the four GET-it scenario modules. The second validation concerns the Digital Twin Component (DTC) solution and its associated functionalities.
This validation is based on parameters that characterise the quality of the solution. An important part is to verify that the DTC solution matches the needs and requirements expressed by the users. Another aspect also relates to the possible scaling of the DTC component in an operational environment and whether the proposed solution would be fit to handle real-time situations.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: From Mobile LiDAR Point Clouds to Urban Digital Twins: Advancing 3D Reconstruction With Quality Optimization

Authors: Xinyi He, Dr. Alvaro Lau Sarmiento
Affiliations: Wageningen University & Research
In recent years, aerial and satellite remote sensing products have gained popularity for various urban research applications. Nonetheless, there is a need for a cheaper and more readily accessible data source for small-scale, high-precision datasets, such as streets, buildings, and infrastructure. LiDAR provides just such a data source in the form of point clouds. Among the various LiDAR technologies, mobile laser scanning (MLS) offers much flexibility because it can be based on moving platforms such as backpacks and vehicles. However, MLS point clouds are sparse and noisy, limiting their application in urban digital twins. Therefore, a comprehensive improvement in the quality of MLS point clouds is essential for enhancing the accuracy of 3D reconstruction and advancing the usability of MLS in digital twins. Typical point cloud processing steps include alignment, denoising and filtering, segmentation and classification, feature extraction, and object extraction; improving point cloud quality focuses on the initial stages. Significant limitations exist in previous studies: 1) there are not enough studies on point clouds from MLS sources; 2) a significant proportion of existing studies rely on static objects from publicly available datasets, such as models of a rabbit or mechanical gears; 3) many studies evaluate the effectiveness of denoising algorithms by artificially introducing noise, whereas complex urban environments pose additional challenges, such as interference from dynamic objects (e.g., vehicles and pedestrians); and 4) urban point cloud processing must also address massive data storage and the trade-off between modelling accuracy and simplicity. To address these challenges, this paper proposes a systematic and holistic data processing workflow tailored to MLS point clouds in urban contexts.
This workflow covers multiple processing steps, including advanced denoising, filtering, and surface reconstruction techniques to improve the geometric and reconstruction accuracy of point clouds. Firstly, articles and techniques on improving the quality of point clouds are systematically and comprehensively summarised through a PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) review. The PRISMA screening covered 7,593 publications from 2023 up to 10 August 2024, drawn from five academic platforms: Google Scholar, Web of Science, Scopus, IEEE Xplore, and ArXiv. The search terms included 'LiDAR,' 'Point Cloud,' and 'Denoising.' The validated denoising methods include Random Sampling-based algorithms, KNN (K-Nearest Neighbors) based algorithms, CNN (Convolutional Neural Networks) based algorithms, PointNet-based algorithms, Manifold-based algorithms, and Normal-based algorithms. Secondly, experiments are conducted to verify which methods are particularly effective for the MLS point cloud in the streets of Leeuwarden in the Netherlands, our study area. Specifically, several algorithms selected from the PRISMA screening results were implemented to find those most suitable for processing specific MLS point clouds. The experiments evaluate the root mean square error (RMSE) and denoising rate by comparing the target MLS point cloud against a reference Terrestrial Laser Scanning (TLS) point cloud of the same area. This study concludes that PointNet and its derivative algorithm, PointFilter, perform best in processing MLS point cloud data. Subsequently, the MLS point cloud obtained from the alignment and denoising process is subjected to 3D reconstruction. Then, we measure the absolute trajectory error, relative position error, and surface distance error on the produced 3D models.
These assessments validate the effectiveness of the proposed point cloud quality enhancement methodologies and highlight their contribution to improving the accuracy and reliability of 3D reconstruction for building digital twins of urban areas. This study establishes the groundwork for utilizing MLS point clouds as a cost-effective data source for urban digital twins by proposing a comprehensive workflow that improves the quality of MLS data and the accuracy of 3D reconstruction. Providing a high-precision and low-cost 3D database for urban digital twins can significantly accelerate their development and implementation.
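The cloud-to-cloud RMSE and denoising-rate evaluation against a TLS reference, as described above, can be sketched as follows. This is an illustrative simplification (brute-force nearest neighbour; the paper's exact metric definitions may differ), and the toy coordinates are made up.

```python
import numpy as np

def nn_distances(source, reference):
    """For each point in `source`, the distance to its nearest neighbour
    in `reference` (brute force; use a KD-tree for real clouds)."""
    d = np.linalg.norm(source[:, None, :] - reference[None, :, :], axis=2)
    return d.min(axis=1)

def cloud_rmse(mls, tls):
    """Cloud-to-cloud RMSE of an MLS cloud against the TLS reference."""
    return float(np.sqrt(np.mean(nn_distances(mls, tls) ** 2)))

def denoising_rate(n_before, n_after):
    """Fraction of points removed by a denoising step."""
    return (n_before - n_after) / n_before

# toy example: an MLS cloud offset 0.1 m from a 3-point TLS reference
tls = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
mls = tls + np.array([0.0, 0.0, 0.1])
rmse = cloud_rmse(mls, tls)
```

Computing the RMSE before and after denoising shows how much each candidate algorithm pulls the MLS cloud towards the TLS reference.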
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A.05.05 - POSTER - Tipping points and abrupt change in the Earth system

There are elements of the Earth system, including ecosystems, that can undergo rapid transition and reorganisation in response to small changes in forcings. This process is commonly known as crossing a tipping point. Such transitions may be abrupt and irreversible, and some could feed back on climate change, representing an uncertainty in projections of global warming. Their potentially severe outcomes at local scales - such as unprecedented weather, ecosystem loss, extreme temperatures and increased frequency of droughts and fires – may be particularly challenging for humans and other species to adapt to, worsening the risk that climate change poses. Combining satellite-based Earth Observation (EO) datasets with numerical model simulations is a promising avenue of research to investigate tipping elements, and a growing number of studies have applied tipping point theory to satellite time series to explore the changing resilience of tipping systems in the biosphere as an early warning indicator of approaching a tipping point. This session invites abstracts on tipping points and resilience studies based on or incorporating EO, as well as recommendations from modelling groups that can be taken up by the remote sensing community, for example on early warning signals, products needed for model assimilation or novel tipping systems to investigate further using EO.
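The classic early-warning signal applied to satellite time series in such studies is rising lag-1 autocorrelation ("critical slowing down"), computed in a sliding window. A minimal sketch, with generic function names not tied to any specific study:

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation of a 1-D series."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

def rolling_ews(series, window):
    """Lag-1 autocorrelation in a sliding window; a rising trend over
    time is the classic 'critical slowing down' early-warning signal."""
    s = np.asarray(series, dtype=float)
    return np.array([lag1_autocorr(s[i:i + window])
                     for i in range(len(s) - window + 1)])
```

In practice the series (e.g. NDVI or vegetation optical depth anomalies) is first deseasonalised and detrended, and the trend of the rolling indicator, rather than its absolute value, is interpreted.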
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Investigating Regime Shifts in Atlantic Sargassum

Authors: Brian Barnes, Chuanmin Hu, Yingjun Zhang, Deborah Goodwin, Amy Siuda, Jeffrey Schell
Affiliations: University Of South Florida, Eckerd College, Sea Education Association
Located in the subtropical North Atlantic Ocean, the Sargasso Sea (SS) draws its name from the floating brown macroalgae, Sargassum spp. Sargassum aggregations have been observed in the region for centuries, and are an integral component of the local biology and ecology - providing habitat for numerous marine species. In 2011, the footprint of Atlantic Sargassum increased to include a now-persistent population seasonally spanning the tropical North Atlantic, termed the Great Atlantic Sargassum Belt (GASB). As a result of this expansion, nearshore locations within the GASB domain now face devastating ecological and economic impacts when portions of this floating habitat inundate coastal environments. In this study, we investigate the formation of the GASB, as well as additional large-scale shifts in Atlantic Sargassum as observed in both satellite data and long-term in situ net tows. In particular, we document dramatic changes in the abundance and seasonality of SS Sargassum occurring since 2015. Both the satellite and in situ net tow data indicate a substantial decline in Sargassum abundance in the North SS during the fall / winter period, accompanied by an increase during spring / summer. Similarly, the abundance has also dramatically increased in the South SS, particularly during this spring / summer period. Notably, the timing of the SS Sargassum increase matches that of the GASB seasonality. However, the long-term in situ observations indicate that the Sargassum morphotype most commonly observed in the GASB is rarely found in the SS. As such, the changes in SS Sargassum distribution are not sufficiently explained by transport from the GASB alone, with internal dynamics also driving the seasonal and long-term abundance cycles in the SS. Disentangling the forcings underlying these regime shifts may improve predictions of Sargassum distribution.
Additionally, understanding these changes may provide insight into future climate-related alterations in North Atlantic Sargassum and subsequent impacts to associated fauna and flora.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: El Niño-driven cascading effects on global ecosystem resilience

Authors: Xiaona Wang, Miguel Mahecha, Yongmei Huang, Chaonan Ji, Dongxing Wu, Xiuchen Wu
Affiliations: Leipzig University, Remote Sensing Centre for Earth System Research, Beijing Normal University, Faculty of Geographical Science, German Centre for Integrative Biodiversity Research (iDiv), Center for Scalable Data Analytics and Artificial Intelligence (ScaDS.AI), Wuhan University, School of Resource and Environmental Sciences
El Niño-Southern Oscillation (ENSO), as a dominant driver of interannual natural climate variability, profoundly influences global weather and climate patterns, as well as terrestrial ecosystems. However, a quantitative determination of the cascading effects of El Niño on the dynamics of terrestrial ecosystems is lacking, yet it is required for understanding the imprints of large-scale climate variability on the Earth system. To address this, we constructed directed and weighted climate networks using near-surface air temperature and soil moisture, allowing us to systematically evaluate how El Niño-driven climate anomalies affect the dynamics of ecosystem resilience. We first identified the influence patterns of El Niño on the variations in air temperature and soil moisture. We found that El Niño produces a significant teleconnection pattern, characterized by increased temperature anomalies and decreased moisture anomalies. These effects are globally pervasive across terrestrial biomes and accounted for most of the global hotspots. During extreme El Niño phases, most terrestrial ecosystems experienced marked changes in their resilience. Furthermore, we quantified the cascading effects of El Niño on ecosystem resilience mediated by the variations in temperature and moisture. The cascading strength did not vary significantly across geographical distances, highlighting the global reach of these effects. The cascading processes were predominantly mediated by changes in soil moisture and air temperature, underscoring their pivotal roles in ecosystem resilience loss. Finally, we evaluated the global hotspots derived from state-of-the-art Earth system models (ESMs) under future scenarios. We found that El Niño-induced warming and drying global hotspots are expected to expand spatially in the future, potentially leading to a further decline in ecosystem resilience.
This study is a step towards improving the investigation and prediction of the imprints of El Niño-driven climate anomalies on ecosystem destabilization, and towards understanding the dynamic interactions among the natural components of the Earth system.
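The basic building block of a directed and weighted climate network, as used above, is a link whose direction is set by which series leads at the lag of maximum cross-correlation. The sketch below is a generic, hypothetical illustration of that idea (the study's actual network construction, significance testing and weighting are not specified here); the toy series are synthetic.

```python
import numpy as np

def lagged_link(source, target, max_lag=5, threshold=0.5):
    """Directed link strength from `source` to `target`: the Pearson
    correlation with the largest magnitude over positive lags (source
    leading target). Returns (strength, best lag, above-threshold?)."""
    s = (source - source.mean()) / source.std()
    u = (target - target.mean()) / target.std()
    best_r, best_lag = 0.0, 0
    for lag in range(1, max_lag + 1):
        r = float(np.mean(s[:-lag] * u[lag:]))
        if abs(r) > abs(best_r):
            best_r, best_lag = r, lag
    return best_r, best_lag, abs(best_r) >= threshold

# toy check: a periodic series and a copy of it delayed by 2 steps
src = (np.arange(98) % 7).astype(float)
tgt = np.roll(src, 2)
r, lag, linked = lagged_link(src, tgt)
```

Repeating this over all grid-cell pairs (with proper significance testing against surrogates) yields the weighted adjacency matrix of the climate network.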
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Vegetation resilience: What does it mean, how can we measure it, and how can it change? Conceptual simulations with a complex dynamic vegetation model

Authors: Sebastian Bathiany, Lana Blaschke, Andreas Morr, Niklas Boers
Affiliations: Technical University of Munich, Potsdam Institute for Climate Impact Research, University of Exeter
There have been many recent studies that aim to estimate vegetation resilience and its changes over time from satellite data. Typically, they define resilience as the ability of vegetation to recover from externally induced perturbations like fires or droughts. It can in principle be measured quantitatively as the rate of recovery after such events. In simple dynamical systems, other indirect metrics can also diagnose resilience even in the absence of large perturbations. The most important of these metrics is autocorrelation. A loss of resilience over time ("slowing down") can thereby be detected as increasing autocorrelation. In simple dynamical systems, the resilience loss is also associated with an increasing sensitivity of a system’s stable state to external conditions. This is particularly meaningful in systems with catastrophic tipping points, where the stable state disappears at a critical parameter value. For example, there has been concern that the Amazon rainforest may be approaching such a tipping point due to global warming and deforestation. Recent studies, using the normalised difference vegetation index NDVI and/or vegetation optical depth VOD, have shown that resilience seems to be higher in wet regions of tropical forests compared to drier regions, and that resilience has been decreasing in vast parts of the Amazon rainforest. Observations also show that there is a relationship between autocorrelation and the empirical recovery rates after perturbations, which confirms the high practical relevance of theoretical expectations. However, it is still unclear which properties of the vegetation and which processes determine the observed autocorrelation, its spatial differences, and its trends over time. For example, different vegetation indicators and frequency bands capture different parts and properties of the vegetation, and the nature of empirical disturbances is often unknown.
Our contribution discusses idealised simulations with the state-of-the-art dynamic vegetation model LPJmL to illuminate how the resilience of natural forests and its indicators can depend on (i) different climates, (ii) vegetation composition (mix of plant functional types), (iii) the vegetation property considered, and (iv) the nature of the perturbation(s). We find that autocorrelation is typically in good agreement with the recovery time from large negative perturbations that affect all combined tree types similarly. However, there are exceptions where any of the factors listed above can play a role. In these cases, recovery rates or autocorrelation do not necessarily agree with each other, nor with the forest’s sensitivity to climate change. In particular, perturbations that change the relative abundance of tree types can yield different recovery rates than perturbations affecting all tree types in the same way. Also, vegetation variables that recover quickly when perturbed on their own (e.g. fluxes like net primary productivity) can still co-evolve with slower variables they depend on (e.g. the carbon stored in trees). We will reveal important mechanisms causing these features in the model, and test their relevance by conducting simulations in a more realistic setup (i.e. by forcing the model with observed climate in a geographically realistic domain), and by discussing the relevance of these mechanisms in the real world. Our results remind us that in high-dimensional systems, there is only one autocorrelation for each variable, but many possible perturbations and hence resiliences, unless we are already very close to a tipping point. Our results also highlight the need to understand the nature of perturbations and trends (e.g. climate- or ecologically induced) in real ecosystems, and the mechanisms and properties captured by satellite-derived indicators. 
Such knowledge needs to be combined with improved resilience monitoring methods to allow us to draw reliable conclusions about the future response of ecosystems to human interferences.
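The empirical recovery rate discussed above is commonly estimated by fitting an exponential decay to the post-perturbation anomaly. A minimal sketch (assuming a simple x(t) ≈ x0·exp(−λt) relaxation; function names are illustrative):

```python
import numpy as np

def recovery_rate(t, anomaly):
    """Estimate the recovery rate lambda from a post-perturbation
    anomaly, assuming x(t) ~ x0 * exp(-lambda * t): a linear fit of
    log|anomaly| against t, whose negative slope is lambda."""
    t = np.asarray(t, dtype=float)
    y = np.log(np.abs(np.asarray(anomaly, dtype=float)))
    slope, _ = np.polyfit(t, y, 1)
    return -slope

# synthetic check: an anomaly decaying at rate 0.3 per time step
t = np.arange(10.0)
lam = recovery_rate(t, 2.0 * np.exp(-0.3 * t))
```

For a linear system this λ is directly related to the lag-1 autocorrelation via ρ ≈ exp(−λ·Δt), which is why the two indicators are expected to agree unless the caveats discussed above apply.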
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A.07.08 - POSTER - Global and regional water cycle in the integrated human-Earth system, estimation of hydrological variables and hyper-resolution modelling

Water in all three phases and its cycling through the Earth system are essential to weather, climate and climate change, and to life itself. The water cycle is closely coupled with the energy and carbon cycles. Over continents, the water cycle includes precipitation (related to clouds, aerosols, and atmospheric dynamics), water vapor divergence and change of column water vapor in the atmosphere, land surface evapotranspiration, terrestrial water storage change (related to snowpack, surface and ground water, and soil moisture change), and river and groundwater discharge (which is linked to ocean salinity near the river mouth). Furthermore, the terrestrial water cycle is directly affected by human activities: land cover and land use change; agricultural, industrial, and municipal consumption of water; and the construction of reservoirs, canals, and dams.

The EO for hydrology community is working towards datasets describing hydrological variables at a steadily increasing quality and spatial and temporal resolution. In parallel, water cycle and hydrological modellers are advancing towards “hyper-resolution” models, going towards 1 km resolution or even higher. In some cases such efforts are not just taking place in parallel but in collaboration. This session aims at presenting advances from each of the communities as well as demonstrating and promoting collaboration between the two communities.

Presentations are welcome that focus on at least one of the following areas:
- The global and regional water cycle and its coupling with the energy and carbon cycles in the integrated human-Earth system based on satellite remote sensing, supplemented by ground-based and airborne measurements as well as global and regional modeling
- New advances on the estimation of hydrological variables, e.g. evapo(transpi)ration, precipitation (note that there is another, dedicated session for soil moisture);
- Suitability of different EO-derived datasets to be used in hydrological models at different scales;
- Capacity of different models to take benefit from EO-derived datasets;
- Requirements on EO-derived datasets to be useful for modelling community (e.g. related to spatial or temporal resolution, quality or uncertainty information, independence or consistency of the EO-derived datasets, …);
- Downscaling techniques;
- Potential of data from future EO missions and of newest modelling and AI approaches (including hybrid approaches) to improve the characterisation and prediction of the water cycle.
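The water-cycle components listed above can be tied together in a basin-scale budget check, P − ET − ΔS − Q ≈ 0, which is one way to assess the mutual consistency of independent EO-derived datasets. A minimal sketch with made-up monthly values:

```python
import numpy as np

# hypothetical monthly basin-average values, all in mm
p  = np.array([120.0, 85.0, 60.0])   # precipitation
et = np.array([ 70.0, 55.0, 40.0])   # evapotranspiration
ds = np.array([ 15.0,  5.0, -10.0])  # terrestrial water storage change
q  = np.array([ 30.0, 20.0,  25.0])  # river discharge

# residual of the terrestrial water balance; zero means perfect closure
residual = p - et - ds - q
# relative non-closure, a simple consistency metric across the datasets
closure_error = abs(residual.sum()) / p.sum()
```

A persistent non-zero residual points at bias or missing processes in at least one of the datasets, which is exactly the kind of cross-community diagnostic this session targets.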
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Assessment of future EO mission needs for the study of the water cycle

Authors: Laura Soueidan, Dr. Vanessa Keuck, Dr. Armin Loescher, Dr. Craig Donlon
Affiliations: ESA
Earth Observation data has long been instrumental in advancing our understanding of the water cycle, with missions like SMOS, GRACE/GRACE-FO, ICESat-2 or SWOT enabling the estimation of key hydrological variables, such as evapotranspiration, soil moisture, river discharge, and terrestrial water storage anomalies. From a science perspective, these datasets are critical for the study of the water cycle, especially in the context of climate change, where more frequent and extreme hydrological events are expected. High-resolution EO datasets are also becoming increasingly important for the evaluation of governments' compliance with public policies related to water resource management, climate adaptation, and sustainability. To address the evolving needs of the science community, an Earth Observation Reference Architecture is being developed as a standardized framework to support a European EO Ecosystem. This Reference Architecture provides a comprehensive set of design principles, guidelines, and best practices for creating a collaborative and flexible EO framework. It is designed to support the delivery of high-quality EO data with improved spatial and temporal resolution, accuracy, and rigorous uncertainty quantification. By facilitating interoperability across satellite constellations, ground segments and relevant non-space systems, it promotes a holistic monitoring of Earth systems and their feedback loops. Scenario-based analyses form the basis of this work, providing insights into future Earth Observation requirements for hydrological science; by defining potential climate scenarios, we identify critical and supporting hydrological parameters, as well as knowledge and observational gaps, in order to derive the standards that future EO missions need to meet. Potential synergies between sensors and satellite constellations are also explored for their ability to enhance data quality, coverage and continuity.
Finally, this study investigates the potential of the assimilation of high-resolution EO data within existing land surface models, improving the characterization and prediction of the water cycle, with a particular focus on extreme events like floods and droughts.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Improving River Network Accuracy Using Graph Neural Networks and Multi-Sensor Remote Sensing Data

Authors: Hamidreza Mosaffa, Prof Christel Prudhomme, Dr Matthew Chantry, Prof Liz Stephens, Prof Christoph Rüdiger, Dr Michel Wortmann, Prof Florian Pappenberger, Prof Hannah Cloke
Affiliations: Department of Meteorology, University of Reading, Department of Geography and Environmental Science, University of Reading, European Centre for Medium-Range Weather Forecasts (ECMWF), European Centre for Medium-Range Weather Forecasts (ECMWF)
Rivers are dynamic systems that evolve over multiple timescales, from slow meandering processes to rapid flood-induced changes. However, most river networks are represented as static maps derived from Digital Elevation Models (DEMs), often failing to capture critical braided river systems and artificial channels. This limitation poses challenges for accurate hydrological and hydraulic modelling, flood management, and water resource management. We propose to leverage Earth Observation data to refine river networks by using multi-temporal Sentinel-2 and Sentinel-1 SAR data at 30m resolution, which capture water bodies and flood extents across various flow regimes. By treating rivers as graph structures with nodes and edges, we use Graph Neural Network (GNN) models to identify and predict missing river connections (edges). We extract multiple features such as water extent, Normalized Difference Water Index (NDWI), Normalized Difference Vegetation Index (NDVI), flow direction, and flow accumulation over time, utilizing the GRIT river network dataset as a baseline for modifications. Through our methodology, potential nodes and edges are identified, and GNN algorithms such as Graph Convolutional Networks (GCNs), Graph Attention Networks (GATs), and GraphSAGE are tested to predict the probability of missing edges. Validation is performed using OpenStreetMap (OSM) river data, ensuring the accuracy of the predicted network. Our case study focuses on Pakistan, a region characterized by extensive artificial channels and frequent flooding. The results demonstrate that our approach successfully identifies missing river segments, particularly artificial channels, and improves the completeness and accuracy of the river network. The promising outcomes of this study provide a scalable solution for global river network prediction and have significant implications for hydrological modelling, flood risk assessment, and water resource management.
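The link-prediction idea behind this approach can be sketched in a few lines: embed each node by combining its own features with an aggregate of its neighbours' features (GraphSAGE-style), then score a candidate edge from the embeddings. This is a deliberately simplified numpy sketch with untrained, hypothetical weights, not the authors' model (which would be trained, multi-layer, and built on a GNN library).

```python
import numpy as np

def sage_embed(features, adj, w_self, w_neigh):
    """One GraphSAGE-style layer: combine each node's own features with
    the mean of its neighbours' features (adj is a dense 0/1 adjacency
    matrix), followed by a ReLU."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    neigh_mean = adj @ features / deg
    return np.maximum(features @ w_self + neigh_mean @ w_neigh, 0.0)

def edge_score(h, i, j):
    """Probability-like score for a candidate (missing) edge i-j,
    from the sigmoid of the embeddings' dot product."""
    return 1.0 / (1.0 + np.exp(-float(h[i] @ h[j])))

# toy chain graph 0-1-2-3 with one-hot node features and identity weights
feats = np.eye(4)
adj = np.array([[0, 1, 0, 0], [1, 0, 1, 0],
                [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
h = sage_embed(feats, adj, np.eye(4), np.eye(4))
```

Even untrained, adjacent nodes score higher than distant ones here because the neighbour aggregation makes their embeddings overlap; training the weights against known edges (with OSM data for validation) is what turns this into a usable missing-edge predictor.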
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Using high-resolution precipitation product for characterizing and modeling flow behavior in karst environments

Authors: Vianney Sivelle
Affiliations: Université de Montpellier
Groundwater constitutes the hidden part of the water cycle, and so assessing its contribution raises important challenges such as (i) describing the Groundwater–Surface Water (GW-SW) interactions and (ii) closing the water budget with meteorological/climate forcings. To this day, the contribution of karst groundwater to the continental water cycle is unknown and deserves to be properly characterized. Karst environments are of primary importance for world heritage, ecosystem development, as well as freshwater supply for around 9% of the world population. Characterizing and modelling flow processes in karst groundwater systems requires high spatial and temporal resolution of meteorological forcings due to (i) the small to meso-scale dimension of the recharge area (order of magnitude from 1 to 1000 km2), and (ii) the predominance of quick-flow processes, requiring infra-daily monitoring of environmental variables (e.g., spring discharge, piezometric head, physico-chemical parameters of karst water). Therefore, neither Land Surface Models (LSMs) nor climate models yet explicitly represent the contribution of karst groundwater systems in their estimation of the various components of the continental water cycle, while karst domains represent around 12% of the continental surface. The recent development of high-resolution precipitation products (1 km, 1 day) creates new opportunities for karst hydrology, taking advantage of a suitable spatial resolution to better assess the spatial heterogeneity of recharge processes. Recharge may occur either as diffuse recharge or concentrated recharge, the latter playing an important role in the overall flow behavior of karst groundwater systems. Characterizing the concentrated recharge following significant precipitation events is therefore of prime importance to assess quick-flow processes, and thus both the transfer of potential pollution from the surface to groundwater and flood processes.
The present work aims to showcase some recent advances in karst hydrology, including (i) the evaluation of precipitation products for characterizing and modeling the flow behavior of karst environments and (ii) the use of satellite precipitation data to better constrain the meteorological forcing of hydrogeological modeling exercises.
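The split between concentrated (quick-flow) and diffuse recharge described above is often conceptualised with parallel linear reservoirs. A minimal, hypothetical sketch (a generic two-reservoir lumped karst model, not the specific model used in this work; all parameter values are made up):

```python
import numpy as np

def karst_two_reservoir(precip, k_quick=0.8, k_slow=0.05, split=0.4):
    """Toy lumped karst model: a fraction `split` of each recharge pulse
    enters a fast conduit store (concentrated recharge, quick flow), the
    rest a slow matrix store (diffuse recharge); each store drains as a
    linear reservoir Q = k * S. Returns spring discharge per time step."""
    s_quick = s_slow = 0.0
    q_out = []
    for p in precip:
        s_quick += split * p
        s_slow += (1.0 - split) * p
        q_quick, q_slow = k_quick * s_quick, k_slow * s_slow
        s_quick -= q_quick
        s_slow -= q_slow
        q_out.append(q_quick + q_slow)
    return np.array(q_out)

# a single 10 mm recharge pulse followed by dry steps
q = karst_two_reservoir([10.0, 0.0, 0.0, 0.0, 0.0])
```

Feeding such a model with a gridded 1 km precipitation product instead of a single gauge is precisely where the high-resolution products discussed above add value, since the quick-flow response is sensitive to where the rain falls over the recharge area.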
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Assessing uncertainty in WaPOR global evapotranspiration data: Insights from using triple collocation and in-situ measurements

Authors: Bich Tran, Dr. Solomon Seyoum, Dr. Johannes van der Kwast, Prof. Dr. Remko Uijlenhoet, Prof. Dr. Graham Jewitt, Dr. Marloes Mul
Affiliations: IHE Delft Institute for Water Education, Delft University of Technology
Evapotranspiration (ET) is a key process linking the water, energy, and carbon cycles of the Earth. Accurately estimating ET is essential for hydrological studies but remains challenging due to its complex, scale-dependent processes. Satellite remote sensing (RS) has been applied in several process-based models to estimate spatial ET, producing a range of continuously updated global data products (e.g., MODIS16, GLEAM, SSEBop, and WaPOR). Among these, the Food and Agriculture Organization of the United Nations (FAO)’s portal to monitor water productivity through open access of remotely sensed derived data (WaPOR) provides global ET data (WaPOR-ET) at a high spatial resolution (300 m) and 10-day temporal intervals. The availability of such hyper-resolution data at global coverage offers significant potential for many hydrological and agricultural applications. However, comprehensive information on the quality or uncertainty of WaPOR-ET remains limited, posing a challenge for its assimilation into hydrological models. The most common method for assessing RS-ET uncertainty involves direct comparison with ET estimates from eddy covariance (EC) measurements. While valuable, this approach is constrained by the sparse distribution of EC sites in many regions and by inherent uncertainties, including energy balance closure and flux footprint issues (Tran et al., 2023). To address these limitations, this study evaluated WaPOR version 3 global ET data by direct comparison with EC measurements from FLUXNET regional networks, accounting for EC uncertainties. We analyzed WaPOR-ET uncertainty across land cover types, climate regions, and elevation ranges. In addition, we compared the direct-comparison approach with triple collocation analysis using multiple high-resolution (30 m) ET models from the OpenET project over the contiguous United States (Volk et al., 2024).
Using the extended triple collocation method, we characterized uncertainty information spatially and examined how different triplet combinations affected the results. Our findings show good agreement between the two approaches at perennial cropland sites, while results for seasonal cropland, forest, grassland, and shrubland sites varied greatly depending on the triplets used. In general, triplets combining WaPOR (a two-source Penman-Monteith model) with one-source surface energy balance models showed greater divergence from EC-based uncertainty estimates. No single triplet consistently aligned with direct comparison results across all land cover types. These results highlight the capabilities and limits of uncertainty assessment methods and contribute to the roadmap of quality assessment for RS-ET products, helping address the uncertainty requirements of the hydrological modelling community.
References:
Tran, B.N., Van Der Kwast, J., Seyoum, S., Uijlenhoet, R., Jewitt, G. and Mul, M., 2023. Uncertainty assessment of satellite remote-sensing-based evapotranspiration estimates: a systematic review of methods and gaps. Hydrology and Earth System Sciences, 27(24), pp.4505-4528.
Volk, J.M., Huntington, J.L., Melton, F.S., Allen, R., Anderson, M., Fisher, J.B., Kilic, A., Ruhoff, A., Senay, G.B., Minor, B. and Morton, C., 2024. Assessing the accuracy of OpenET satellite-based evapotranspiration data to support water resource and land management applications. Nature Water, 2(2), pp.193-205.
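The covariance-based estimators behind (extended) triple collocation can be sketched in a few lines of numpy. This is a generic illustration, assuming three collocated estimates of the same truth with mutually independent, zero-mean errors; the study's actual processing of ET triplets is more involved.

```python
import numpy as np

def triple_collocation(x, y, z):
    """Classical TC error variances plus extended-TC squared correlations
    with the unknown truth (McColl et al., 2014) for three collocated
    estimates x, y, z of the same signal, assuming independent errors."""
    C = np.cov(np.vstack([x, y, z]))  # rows are variables
    err_var = np.array([
        C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2],
        C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2],
        C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1],
    ])
    rho2 = np.array([
        C[0, 1] * C[0, 2] / (C[0, 0] * C[1, 2]),
        C[0, 1] * C[1, 2] / (C[1, 1] * C[0, 2]),
        C[0, 2] * C[1, 2] / (C[2, 2] * C[0, 1]),
    ])
    return err_var, rho2
```

On synthetic data (a common truth plus independent noise of known variance), the estimator recovers the noise variances to within sampling error.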
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: The Interplay between Earth Observation and the GEWEX Regional Hydroclimate Projects

Authors: Dr. Peter Van Oevelen, Dr. Ali
Affiliations: International Gewex Project Office
The Global Energy and Water Exchanges (GEWEX) project of the World Climate Research Programme was established in the late 1980s for primarily two reasons: 1) to better understand the role of the 'slow' component of the Earth system in climate change, i.e. land and its associated processes, including those of the atmosphere above it; and 2) to explore and utilize the new Earth observation data that became available with the newly launched NASA EOS program as well as the associated ESA and JAXA programs at the time. In particular, the large-scale Continental Scale Experiments (CSEs) set out to study large river basin (continental-scale) processes by combining long-term in situ observations and dedicated field experiments and programs with Earth observation data that went beyond land surface characterization. The first of these experiments was the GEWEX Continental Scale International Project (GCIP) in 1993, focusing on the Mississippi river basin. Many such large-scale experiments followed, and in the early 2000s they were renamed the GEWEX Regional Hydroclimate Projects (RHPs). The main objective of all these RHPs is to improve our understanding and our weather and climate models, along with better predictions and projections. However, since these projects are regionally based, local challenges are the key drivers, and hence no project is the same in organization, scope or output. Regional processes, such as land-atmosphere processes at larger scales, can only be studied through a combination of detailed process studies from lab to field scale along with upscaling using a variety of methods, almost always including the use of satellite remote sensing data. The need for these data has influenced the type of instruments and satellites developed and launched by the various space agencies, while the new instruments and data have in turn greatly broadened the scope of challenges that can be addressed.
This presentation will provide a historical overview of the RHPs and how they have changed, along with several highlights of the crucial interplay between Earth observation data and Earth system science at the global and regional scale.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Surface Temperature and Soil Moisture Estimates Across Cropland and Agroforestry: UAV-borne Imagery and Ground Sensors Synergy

Authors: Jiri Rous, Jan Komarek
Affiliations: Department of Spatial Sciences, Faculty of Environmental Science, Czech University Life Sciences Prague, Kamýcká 129, Praha – Suchdol 165 00, Czech Republic
Challenge
Long-term environmental monitoring is essential for understanding soil-vegetation-atmosphere interactions in agroforestry systems, where tree strips alternate with crop fields and soil moisture and temperature vary significantly, driven by complex microclimatic and ecological processes. Unmanned Aerial Vehicles (UAVs) with thermal and multispectral sensors offer high spatial resolution for capturing these dynamics, but their deployment faces critical challenges. Atmospheric influences, particularly humidity and wind, distort sensor readings, complicating data accuracy. Applying correction methods is necessary to address these issues and extract reliable information. The goal is to develop a reliable identification key for soil moisture and temperature estimation using UAV sensors, enabling precise and scalable monitoring. Achieving this would support improved land management, water efficiency, and deeper insights into the functioning of integrated landscapes.
Methodology
Following lab calibration, seventeen Tomst TMS data loggers were deployed, seven in forested and ten in agricultural strips. Monthly UAV flights at 300 m above ground were conducted using senseFly DuetT and Micasense RedEdge MX sensors. The imagery was captured around solar noon, processed in image-matching software, georeferenced using ground control points (GCPs), and radiometrically calibrated. The first dataset includes temperature measurements at three levels (14 cm below ground, on the ground, and 14 cm above ground) and soil moisture data. The second dataset comprises thermal and multispectral UAV mosaics, created monthly from July 2023 to October 2024, focusing on July to September 2023 and May to September 2024. Supplementary meteorological data (wind speed, humidity, temperature, and precipitation) from a nearby station enhance the dataset. The study area is a 1.42 ha strip pattern of forested and agricultural land within the Amalie Smart Landscape (CZU).
A Generalized Additive Model (GAM) was employed to investigate relationships between UAV-based temperature, strip type, and meteorological factors. Multispectral data, particularly the NIR band, were used to identify vegetation (crops, trees, or shrubs), a key determinant of temperature variability. RMSE analysis was performed to compare UAV temperatures with ground sensors, before and after adjusting for humidity effects.
Expected results
Initial analysis explored correlations between soil moisture and ground-level temperatures, revealing coefficients of -0.39 (ground) and -0.45 (below ground). These findings informed the decision to integrate both UAV sensors. Preliminary findings suggest that UAV-estimated temperatures correlate best with below-ground sensors, yielding an RMSE of 2.86 °C. Unexpectedly, above-ground temperatures exhibited higher RMSE values (11.2 °C), prompting further investigation into sensor calibration and environmental influences. Humidity correction significantly improved agreement between above-ground sensor data and UAV temperatures, reducing the RMSE to 4 °C. GAM results indicate that vegetation presence, rather than its type or height, drives the temperature variations detected by UAV, highlighting the dominant role of canopy cover in moderating soil and surface temperatures through shading and evapotranspiration.
Outlook for the future
This research demonstrates the potential of UAV-based thermal and multispectral imaging for soil parameter estimation but also reveals significant challenges in aligning UAV and ground-based measurements. These insights underscore the complexity of synergizing UAV-based estimations with in-situ data and highlight the need for robust correction factors to account for environmental variability. Future work will focus on refining humidity correction models, expanding the analysis to include seasonal trends, and exploring machine learning techniques for enhanced prediction accuracy.
Additionally, further calibration and validation of UAV sensors will aim to reduce discrepancies with in-situ data. By advancing UAV-enabled environmental monitoring, this study contributes to scalable and non-invasive approaches for understanding landscape-level soil and vegetation dynamics.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: The new HydroSHEDS v2.0 database derived from the TanDEM-X DEM

Authors: Carolin Keller, Leena Warmedinger, Larissa Gorzawski, Martin Huber, Bernhard Lehner, Günther Grill, Michele Thieme, Birgit Wessel, Achim Roth
Affiliations: German Aerospace Center (DLR), German Remote Sensing Data Center, Company for Remote Sensing and Environmental Research (SLU), McGill University, Department of Geography, Confluvio Consulting Inc., World Wildlife Fund
The increased availability and accuracy of recent remote sensing data accelerate the development of high-quality data products for hydrological modelling. Accurate representation of the Earth's surface, including all water-related features, is crucial for simulating runoff and other hydrological processes. In this contribution we introduce HydroSHEDS v2.0, the second and refined version of the well-established HydroSHEDS dataset. It provides global, seamless, high-resolution hydrographic information and is developed through an international collaboration involving the German Aerospace Center (DLR), McGill University, Confluvio Consulting, and World Wildlife Fund. HydroSHEDS v2.0 builds on the TanDEM-X mission's digital elevation model (DEM) to offer enhanced accuracy and expanded geographic coverage compared to its predecessor. While the first HydroSHEDS version relied on the Shuttle Radar Topography Mission (SRTM) DEM, HydroSHEDS v2.0 benefits from the TanDEM-X DEM, which provides a higher resolution of 0.4 arc-seconds globally and includes regions beyond 60°N latitude that were not covered by SRTM. Advanced pre-processing techniques ensure that HydroSHEDS v2.0 preserves the high-resolution details of the TanDEM-X DEM. These techniques include the generation of a global inland water mask and its use for filling invalid and unreliable DEM areas, delineating global coastlines with manual quality control, and reducing distortions caused by vegetation and urban areas. A sequence of automated hydrological conditioning steps further refines the DEM, incorporating void filling, outlier correction, and algorithms to optimize hydrological consistency. Finally, extensive manual corrections using various ancillary data sources improve river network delineation in areas where DEM-derived products carry high uncertainties, such as flat terrain or anthropogenically modified landscapes.
The resulting hydrologically conditioned DEM has a resolution of 1 arc-second and ensures accurate derivation of hydrologic flow connections, forming the basis for core products such as flow direction and flow accumulation maps. In the final HydroSHEDS product, these gridded datasets will be complemented by secondary vector-based information on river networks, nested catchment boundaries, and associated hydro-environmental attributes. Together, these products create a standardized, multi-scale database in the same structure and format as the original version and support applications ranging from local to global scales. In our presentation we will give an overview of the production, present a demonstration of the novel data products, and show the pre-processing workflow for selected test sites. The new HydroSHEDS v2.0 dataset offers a consistent and easy-to-use framework for hydrological and hydro-ecological research. The main release, scheduled to start in 2025 under a free license, will provide researchers and practitioners with a robust tool for diverse applications. The HydroSHEDS v2.0 dataset will be available at www.hydrosheds.org.
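Flow accumulation, one of the core gridded products mentioned above, can be illustrated with a toy D8 routine on a small raster. This is a generic sketch (the `OFFSETS` direction encoding is an assumption for illustration), not the HydroSHEDS production workflow.

```python
import numpy as np

# D8 neighbour offsets; each cell stores an index into OFFSETS giving the
# single neighbour it drains to, or -1 for an outlet (toy encoding: directions
# are assumed to stay inside the grid).
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def flow_accumulation(fdir):
    """Number of cells draining through each cell (including itself),
    computed by a topological sweep from headwater cells downstream."""
    rows, cols = fdir.shape
    acc = np.ones((rows, cols), dtype=int)
    indeg = np.zeros((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            if fdir[r, c] >= 0:
                dr, dc = OFFSETS[fdir[r, c]]
                indeg[r + dr, c + dc] += 1
    # Start from cells that nothing drains into (headwaters).
    stack = [(r, c) for r in range(rows) for c in range(cols) if indeg[r, c] == 0]
    while stack:
        r, c = stack.pop()
        if fdir[r, c] < 0:
            continue  # outlet
        dr, dc = OFFSETS[fdir[r, c]]
        acc[r + dr, c + dc] += acc[r, c]
        indeg[r + dr, c + dc] -= 1
        if indeg[r + dr, c + dc] == 0:
            stack.append((r + dr, c + dc))
    return acc
```

For example, a single row of cells all draining east yields accumulations 1, 2, 3, … along the flow path.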
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Sensitivity of Sentinel-1 σ0 backscattering to crop phenology and row orientation in irrigated fields

Authors: Martina Natali, Prof.dr.ir. Susan Steele-Dunne, Sara Modanesi, Gabrielle De Lannoy, Alessio Domeneghetti, Christian Massari
Affiliations: CNR-IRPI, Department of Civil and Environmental Engineering, University of Perugia, Department of Geosciences and Remote Sensing , Faculty of Civil Engineering and Geosciences, TU Delft, Department of Earth and Environmental Sciences, KU Leuven, Department of Civil, Chemical, Environmental and Materials Engineering, Alma Mater Studiorum - University of Bologna
In recent years, the availability of high-resolution satellite remote sensing observations has led to an increasing number of applications at very high spatial resolutions. In hydrological modeling, precision agriculture and irrigation applications, resolutions of about 1 km or below are relevant to account for the high spatial variability of soil moisture and to provide more accurate estimates of the components of the water cycle. Synthetic aperture radar (SAR) platforms such as Sentinel-1 provide high-resolution backscattering observations (~20 m) which are not hampered by clouds or the atmosphere and are used to estimate soil moisture in all weather conditions via retrieval algorithms or data assimilation in land surface models. However, σ0 values are sensitive to surface roughness, vegetation water content, plant structure and, in agricultural areas, also to the orientation of field rows. In regions characterized by high spatial heterogeneity of plots with different land cover and crop types, retrievals of bio-geophysical quantities with methods that do not distinguish between parcels may be affected by uncertainties which remain poorly investigated. In this study we explored the behavior of Sentinel-1 σ0 data over several fields and crops in irrigated agricultural areas in northern Italy. Fields are either arable land, orchards or vineyards, with areas from ~1 ha to ~20 ha. The study areas are covered by several Sentinel-1 orbits with local incidence angles varying from 30° to 45°. For each individual parcel we evaluated the sensitivity of σ0 values and their variance with respect to vegetation indices such as NDVI from Sentinel-2, and we explored their potential use for early-season crop classification. Furthermore, we estimated the bias on the mean value of σ0 due to different field row orientations for different observational geometries.
To account for differences in the mean incidence angle of the orbits, we applied an angle-based bias removal procedure to the individual fields, and we assessed its impact on the above-mentioned experiments with respect to results obtained with the original biased data. Quantifying these effects on backscattering, which are usually not accounted for at the 1 km scale, has potential applications in row orientation identification and early-season crop classification, and can help to better estimate the uncertainty in soil moisture, irrigation and vegetation parameter retrievals over intensively cultivated and heterogeneous areas, contributing to precision agriculture applications and water resource management.
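A first-order version of such an angle-based bias removal is to fit a linear σ0-versus-incidence-angle slope per field and normalize the backscatter to a reference angle. This sketch is a common simplification; the function name, slope model, and reference angle are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def normalize_incidence(sigma0_db, theta_deg, theta_ref=37.5):
    """Normalize backscatter (in dB) to a reference incidence angle using a
    linear slope fitted by least squares over one field's observations.
    theta_ref is an illustrative mid-swath reference angle."""
    slope, _intercept = np.polyfit(theta_deg, sigma0_db, 1)
    return sigma0_db - slope * (theta_deg - theta_ref)
```

Observations that follow a purely linear angular trend collapse onto a single value at the reference angle after normalization.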
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: The LSA SAF evapotranspiration and surface energy fluxes in drought monitoring across the field of view of the Meteosat Second Generation satellite

Authors: José-Miguel Barrios, Alirio Arboleda, Jan De Pue, Françoise Gellens-Meulenberghs
Affiliations: Royal Meteorological Institute Of Belgium
Alterations to normal weather patterns have been reported from virtually all regions of the world in recent years. Such alterations occur often (but not exclusively) in the form of anomalies in the intensity and frequency of precipitation and/or increased evaporative demand (due to higher temperatures). These anomalies in precipitation and temperature patterns may lead to droughts, posing multiple challenges to socio-economic development and ecosystem functioning. Evapotranspiration impacts the humidity conditions at the Earth's surface and drives the partitioning of the net incoming radiation into the latent (LE) and sensible (H) heat fluxes returning to the atmosphere. Dry conditions result in a higher weight of H in the partitioning of outgoing surface heat fluxes and produce higher temperatures. Conversely, humid conditions at the surface yield a higher LE-to-H ratio, which entails a cooling effect. In consequence, the analysis of the energy partitioning at the surface can be informative on the humidity conditions in time and space and, therefore, useful in drought monitoring. This study explored the relationship between LE and H in time and space and its potential to detect the extent and intensity of abnormally dry conditions. The analyzed metric was the evaporative fraction (EF), and the data source was the near-real-time LE and H estimates generated in the frame of the LSA SAF operational service (https://lsa-saf.eumetsat.int) for Europe, Africa and eastern South America. The LSA SAF evapotranspiration and surface energy fluxes are largely based on observations by the Meteosat Second Generation (MSG) satellite, in addition to meteorological fields and ancillary datasets. The LSA SAF data are generated and made available in near-real time and cover the period from 2004 to the present day.
The study analyzed drought occurrences in recent years by computing the anomaly in EF with respect to statistical aggregates derived from the LSA SAF LE and H over the period between 2004 and 2020, i.e. the first 17 years of operations of the MSG satellite (Barrios et al., 2024). Anomalous EF conditions detected in the analysis were contrasted with droughts identified by commonly used drought indicators for the analyzed period. The study revealed the sensitivity of the LSA SAF LE and H estimates to droughts when processed in the form of EF anomalies. A significant degree of correspondence with drought events reported by other sources for the study period (for instance, the extended drought in Europe in 2022) was observed. The relevance of this finding is related to the timeliness of the LSA SAF products (near-real time) and suggests the potential of this dataset to support drought monitoring across the field of view of the MSG satellite.
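The EF-anomaly computation described above reduces to a simple calculation: EF = LE / (LE + H), compared against a climatology from the baseline period. A minimal numpy sketch, assuming a per-month climatology over a 2004–2020 baseline (the operational aggregates may be defined differently):

```python
import numpy as np

def evaporative_fraction(LE, H):
    """Evaporative fraction: share of turbulent heat flux going into latent heat."""
    return LE / (LE + H)

def ef_anomaly(ef, months, baseline_mask):
    """EF anomaly relative to a per-month climatology computed over a
    baseline period (baseline_mask marks the baseline time steps)."""
    ef = np.asarray(ef, dtype=float)
    anom = np.empty_like(ef)
    for m in range(1, 13):
        in_month = months == m
        clim = ef[in_month & baseline_mask].mean()
        anom[in_month] = ef[in_month] - clim
    return anom
```

Negative anomalies then flag abnormally dry (H-dominated) conditions relative to the baseline.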
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Evaluating Water and Energy Fluxes Using ECOSTRESS LST Imagery: Validation Against the ICOS’ Warm Winter 2020 Database

Authors: Héctor Nieto, Vicente Burchard-Levine, Benjamin Mary, Miguel Ángel Herrezuelo, Radoslaw Guzinski
Affiliations: CSIC, DHI
The assessment of evapotranspiration (ET) at a reasonable accuracy is crucial to reliably monitor and manage irrigation and freshwater resources. In recent ESA projects (Sen-ET/ET4FAO) and publications, we showed that merging Sentinel-2 shortwave with Sentinel-3 thermal capabilities has proven useful for field-scale monitoring of ET at regional and national levels. However, the lack of an operational thermal mission with high spatial resolution (<100 m) and frequent revisit time (<1 week) still poses some limitations for water use management. Both limitations should be addressed by upcoming thermal missions such as the Copernicus Land Surface Temperature Monitoring (LSTM) mission. In order to support the upscaling and transfer of products and services to these future satellite missions, several initiatives have been promoted by ESA that will pave the way for downstreaming future operational applications to end users and stakeholders. These initiatives aim to use datasets that approximate the information captured by future missions. In the particular case of ET and water resource monitoring, the ECOSTRESS mission, onboard the International Space Station, is key, as it can provide robust measurements of land surface temperature at high spatial and temporal resolutions and with variable overpass times. This study was performed in the scope of ESA's EO MAJI and MULTIWATER projects, in which we are evaluating the performance of the TSEB modelling framework using ECOSTRESS LST, with additional inputs generated using a methodology inspired by the Sen-ET and ET4FAO projects. Firstly, biophysical traits, including total and green LAI, are derived from Sentinel-2 imagery through a hybrid model inversion of ProspectD+4SAIL.
Then, weather forcing from ERA5 is topographically corrected and extrapolated to a blending height of 100 m above ground. Canopy height for forests is derived from the 2019 GEDI Global Forest Canopy Height product, and other ancillary canopy parameters in TSEB, such as fractional cover for clumped vegetation and effective leaf width, are derived from a look-up table based on IGBP land cover type. Furthermore, we used the ICOS Warm Winter 2020 database as the validation dataset, since it covers up to 43 sites over the ECOSTRESS spatial extent, allowing us to include a large number of water-limited cases and varying biomes in the evaluation. The results presented here show that TSEB produced reasonable results, with an RMSE of 79 W m⁻² in λE (Pearson r = 0.8) and errors of less than 1 mm for daily ET. Furthermore, the approach proved robust in semi-arid and water-limited conditions such as savannas and open shrublands, in which the model showed no significant decrease in performance with decreasing soil water content and increasing climatic aridity.
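Extrapolating wind forcing to a 100 m blending height is typically done with a surface-layer wind profile. A minimal sketch assuming a neutral logarithmic profile; stability corrections and the project's actual scheme are omitted, and the roughness length `z0` is an illustrative value:

```python
import numpy as np

def wind_at_blending_height(u_ref, z_ref, z_blend=100.0, z0=0.1):
    """Extrapolate wind speed from a reference height z_ref to a blending
    height z_blend using the neutral logarithmic wind profile
    u(z) = u* / k * ln(z / z0); the friction velocity cancels in the ratio.
    z0 (roughness length, m) is an illustrative default."""
    return u_ref * np.log(z_blend / z0) / np.log(z_ref / z0)
```

For instance, a 5 m/s wind measured at 10 m over a surface with z0 = 0.1 m extrapolates to 7.5 m/s at 100 m.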
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Digital Twin Earth Hydrology precipitation: overcoming single products limitations

Authors: PhD Paolo Filippucci, PhD Luca Ciabatta, PhD Christian Massari, PhD Luca Brocca
Affiliations: Istituto di Ricerca per la Protezione Idrogeologica (IRPI), Consiglio Nazionale delle Ricerche (CNR)
In recent years, the European Union (EU) Green Deal and the EU Data Strategy have called for the development of Digital Twins of the Earth (DTE) to integrate the latest advancements in Earth Observation (EO) systems, models, AI and computing capacities. These digital models are necessary to visualize, monitor and forecast natural and human activities on the planet, supporting sustainable development and mitigating ongoing climate change impacts. In this context, the European Space Agency (ESA) proposed the DTE Hydrology project, which focuses specifically on the water cycle, hydrology and its different applications. Within this project, accurate, high-resolution (1 km, daily) data for key variables of the water cycle are collected to simulate the water cycle, hydrological processes, and their interactions with human activities. Among these variables, precipitation is of paramount importance due to its impact on agriculture, water resource management, socio-economic development and disaster mitigation. However, in-situ monitoring stations are declining globally and are insufficiently dense in most countries to provide adequate data. Satellite-based precipitation estimates are hence crucial to bridge the spatial and temporal data gaps affecting such regions. To address this, DTE Hydrology precipitation estimates are derived from various EO satellite sources and approaches, which are merged with reanalysis data to create an optimal product that overcomes the limitations of the individual datasets. Specifically, precipitation information from IMERG Late Run, SM2RAIN-ASCAT (H SAF) and ERA5-Land is first downscaled and then merged. The downscaling process leverages high-resolution ancillary information on precipitation spatial variability obtained from the Climatologies at High resolution for the Earth's Land Surface Areas (CHELSA) climate dataset, while the merging weights are derived using triple collocation.
The resulting product was assessed through comparison with multiple datasets, including coarse-resolution ones such as H SAF, IMERG-LR, ERA5, EOBS, PERSIANN, CHIRP, and GSMAP, as well as high-resolution products such as EMO, INCA, SAIH, COMEPHORE, MCM, and 4DMED, confirming its strong performance.
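The merging step can be sketched as an inverse-error-variance weighted average, with the variances taken from triple collocation. This is a generic least-squares merge under the assumption of independent product errors, not the project's exact weighting scheme.

```python
import numpy as np

def merge_products(stack, err_var):
    """Least-squares merge of co-registered precipitation products.
    stack: array of shape (n_products, n_pixels); err_var: triple-collocation
    error variance of each product. Weights are inversely proportional to
    the error variances and sum to one."""
    w = 1.0 / np.asarray(err_var, dtype=float)
    w /= w.sum()
    merged = np.tensordot(w, stack, axes=1)  # weighted sum over products
    return merged, w
```

A product with twice the error variance of its peers thus receives half their weight in the merged estimate.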
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Precipitation rate estimation from SWOT: a pixel-wise data-driven approach using random forest with boosting

Authors: Aurélien Colin, Romain Husson, Bruno Picard
Affiliations: Collecte Localisation Satellite, Fluctus
Precipitation is a key component of the Earth's hydrological cycle, influencing water resource management, agriculture, and disaster risk mitigation. Accurate rainfall estimation is vital for weather forecasting, flood prediction, and climate modeling. It is also of tremendous importance for remote sensing applications, as rainfall often interferes with other phenomena of interest, leading to misinterpretation of observations, such as the noise introduced by rainfall in wind estimation from SAR imagery. While ground-based systems like NEXRAD provide detailed precipitation data, their coverage is limited, particularly over oceans. Satellite missions, such as the Surface Water and Ocean Topography (SWOT) mission, offer global coverage and open possibilities for rainfall estimation over the ocean. Though SWOT's primary mission is to measure surface water and ocean topography, its Ka-band Radar Interferometer (KaRIn) can potentially provide indirect precipitation data. This study explores the use of SWOT's radar measurements for rainfall estimation. By collocating SWOT radar data with ground-based NEXRAD observations, an ensemble learning algorithm, XGBoost, is trained to estimate precipitation rates. Estimates are performed pixel-wise at a resolution of 2 km/pixel, using a set of features that are either local (e.g., the Normalized Radar Cross Section, NRCS) or computed over patches (e.g., the first four moments of the NRCS distribution). The input features also include a wind speed prior obtained from an atmospheric model. Since the resulting model struggles to reproduce the extremes of the precipitation rate distribution, a quantile-mapping post-processing step is applied to ensure accurate predictions for both low and high precipitation rates. Given the exponential decrease in the proportion of pixels with increasing precipitation rates, the model is evaluated using the Pearson Correlation Coefficient (PCC) of the logarithm of precipitation rates.
This approach accounts for both low and high precipitation values. The PCC reaches 52.8%, which is on par with the correlation between two NEXRAD systems observing the same areas (52.9%). From a classification perspective, considering three categories based on thresholds of 1 mm/h and 10 mm/h, the model achieves an accuracy of 44.8% for the [1, 10] mm/h category, with a non-detection probability of 51.1% and an overestimation probability of 4.1%. In comparison, the corresponding results for two NEXRAD systems observing the same area are 53.3%, 42.7%, and 4.0%, respectively. Currently, collocations of SWOT with observation systems sensitive to precipitation rates remain limited, as the satellite was launched only two years ago. However, the increasing availability of data is expected to further enhance the performance of the rainfall detection system. Since observations are acquired globally, SWOT has the potential to serve as an additional source of information for hydrographic studies.
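The quantile-mapping post-processing step amounts to empirical CDF matching between model predictions and reference observations. A minimal numpy sketch, assuming equal-length calibration samples (the study's exact mapping may differ):

```python
import numpy as np

def quantile_map(pred, train_pred, train_obs):
    """Empirical CDF matching: a prediction equal to the q-th quantile of the
    training predictions is replaced by the q-th quantile of the training
    observations. Assumes train_pred and train_obs have equal length."""
    return np.interp(pred, np.sort(train_pred), np.sort(train_obs))
```

This stretches the compressed tail of the model's output back onto the observed precipitation-rate distribution, improving behaviour at the extremes.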
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Towards an updated ESA Earth System Model: Showcasing the Improvements in the Hydrological Model of LISFLOOD on the Example of Central Asia around Lake Issyk-Kul

Authors: Eva Boergens, Laura Jensen, Robert Dill, Tilo Schöne, Alexander V. Zubovich, Linus Shihora, Henryk Dobslaw
Affiliations: GFZ Helmholtz Centre For Geosciences, Central-Asian Institute for Applied Geosciences (CAIAG)
The ESA Earth System Model (ESA-ESM) is a synthetic model of the time-variable gravity field of the Earth. ESA-ESM consistently combines models of mass transport in hydrology, oceans, atmosphere, ice, and solid Earth. The current version was published in 2015 with model input that was up to date at the time. The hydrology in the current ESA-ESM version is modelled by the Land Surface Discharge Model (LSDM) but will be replaced by the hydrological model OS LISFLOOD in the new ESA-ESM version 3.0. Here, we test and validate the hydrological update of ESA-ESM in a test region characterized by various hydrological processes that are challenging to model as well as good observational coverage. Western and Central Asia contain the largest endorheic region, including the Caspian Sea basin, the Tarim Basin, the Central Asian Internal Drainage basin, and the Lake Issyk-Kul basin. Although Lake Issyk-Kul (Kyrgyzstan) lies at 1600 m elevation in the Tianshan Mountains, it does not freeze over in the winter months. The hydrology of the region is dominated by the storage in several large to medium-sized endorheic lakes (e.g., Lake Balkhash, ~400 km to the north), artificial reservoirs (e.g., Kapshagay Reservoir, ~150 km north), and the snow cover during the winter months. In addition, melting glaciers play a major role in the region's hydrology. The lake and its surrounding mountains are well observed with in-situ stations maintained in cooperation between the Central-Asian Institute for Applied Geosciences (CAIAG), Kyrgyzstan, and the GFZ Helmholtz Centre for Geosciences, Germany. The in-situ observations and the ice-free winters have already made Lake Issyk-Kul an ideal test site for calibrating satellite altimetry. Snow and glacier storage, endorheic lakes, and reservoirs with unpublished discharge rates all pose difficulties for hydrological modelling, which makes the Issyk-Kul region a suitable test region for new hydrological model developments.
Compared to the previously used LSDM, OS LISFLOOD produces more realistic terrestrial water storage estimates and offers several further advantages. OS LISFLOOD is an open-source project developed by the Joint Research Centre of the European Commission. The current version of OS LISFLOOD runs at a spatial resolution of 0.05°, compared to the 0.5° resolution of LSDM. Surface water storage of lakes and reservoirs is more realistic due to the significantly larger number of lakes and reservoirs included in the model (currently, globally, 463 lakes and 667 reservoirs in OS LISFLOOD vs. 28 lakes and one reservoir in LSDM). A further advantage of OS LISFLOOD is the newly developed inclusion of endorheic lakes. Around 18% of the land surface drains into endorheic lakes, so their consideration is a large step toward more realistic storage estimates. In contrast to LSDM, OS LISFLOOD also models anthropogenic water use for domestic, industrial, and livestock consumption and irrigation, although the latter plays a minor role in the northern part of the test region. We investigate how these advances in hydrological modelling influence the performance of the ESA-ESM in the test region.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Low-Rank Matrix Completion for Denoising, Gap-Filling, and Temporal Extension of Hydro-Variable Time Series.

Authors: Dr. Karim Douch, Dr. Peyman Saemian
Affiliations: ESA ESRIN, GIS, University of Stuttgart
Over the past two decades, the GRACE and GRACE-FO missions have revolutionized terrestrial water cycle monitoring by introducing a novel independent observable: monthly terrestrial water storage anomalies (ΔS). However, these time series face limitations, including observation gaps, significant errors, and insufficient length for robust climatic studies. In parallel, the proliferation of Earth observation data and advancements in computational modelling have led to numerous hydrological products estimating variables such as precipitation (P) and evaporation (E). Despite their utility, these products often exhibit substantial discrepancies due to their model-dependent nature. Consequently, practitioners aiming to analyse regional hydrological trends must carefully select datasets based on factors like spatial resolution, temporal coverage, and, more often than not, their ability to achieve water balance closure at the basin scale - a persistent challenge in hydrological studies. To address these issues, we propose a statistical approach grounded in low-rank matrix approximation and completion. This method enables simultaneous data imputation, back-extension of the GRACE(-FO) time series, and denoising of P, E, and ΔS products, along with in-situ discharge measurements. The core concept of the proposed algorithm is that these four quantities can be represented by only three empirical functions by virtue of mass conservation. Consequently, a matrix comprising multiple estimates of these variables should be well reconstructed using a low-rank representation. In this study, we applied our approach and conducted extensive numerical analyses on 46 river basins worldwide, utilizing five precipitation products, along with four evaporation and four TWS datasets spanning 1995–2022. We evaluated the impact of matrix rank selection and discharge time series gaps on imputation accuracy.
Additionally, we explored the benefits of embedding these time series into a Hankel matrix to incorporate temporal autocorrelation. Our findings demonstrate that a rank-3 or rank-4 matrix strikes an optimal balance between data fitting and extrapolation, reducing the average water balance misclosure by at least 30%, even during periods requiring data imputation. This approach offers a robust framework for improving the accuracy and usability of hydrological datasets in basin-scale studies.
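The mass-conservation argument behind the low-rank structure, and the Hankel embedding explored above, can be sketched with synthetic numbers (an illustration only, not the authors' algorithm): because ΔS = P - E - Q at each time step, a matrix stacking the four variables has rank at most three.

```python
# Illustration with synthetic values: water balance dS = P - E - Q
P  = [3.1, 2.8, 4.0, 3.5, 2.2, 1.9]   # precipitation
E  = [1.0, 1.2, 1.5, 1.8, 1.6, 1.1]   # evaporation
Q  = [0.8, 0.7, 1.1, 0.9, 0.6, 0.5]   # discharge
dS = [p - e - q for p, e, q in zip(P, E, Q)]  # storage change

# The dS row of the stacked [P; E; Q; dS] matrix is a linear
# combination of the other three rows, so the matrix has rank <= 3:
# a low-rank reconstruction can impute gaps while enforcing closure.
for t in range(len(P)):
    assert abs(dS[t] - (P[t] - E[t] - Q[t])) < 1e-12

def hankel(series, window):
    """Sliding-window (Hankel) embedding used to expose temporal
    autocorrelation in the completion problem."""
    return [series[i:i + window] for i in range(len(series) - window + 1)]

assert hankel([1, 2, 3, 4, 5], 3) == [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
```

In the study itself the matrix rows are multiple noisy products per variable, and the rank-3/rank-4 truncation performs the joint denoising and gap-filling.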

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Upgrading of water resources assessment including green water quantification evaluated thanks to Earth Observation

Authors: Dr. Veronique Miegebielle, Dr. Odile Rambeau
Affiliations: TotalEnergies
The preservation of freshwater resources is a pressing topic, concerning everyone on the planet. Global freshwater consumption is shared roughly as follows: 10% to be preserved for humans as a vital resource, 20% for industry and 70% for agriculture. Securing the 10% of freshwater resources dedicated to the vital needs of the population is the responsibility of the authorities of each country. Decrees can be issued to restrict water withdrawals, limiting the productivity of industry but also of the agriculture that feeds the population. A balance must be found between the vital needs of the population and the water resources. In the calculation of the water footprint of human activities, a major part of the rainfall is not considered because it is difficult to quantify, even though it represents 60 to 70% of the precipitation in the water cycle. This part of the water, not considered in our predictions and recommendations of water use, is "green water": the part of the water that infiltrates into the soil, is used by the vegetation to grow, and is evapotranspired into the atmosphere before condensation and reprecipitation. With a better estimation of the water footprint including the green water volume, the water footprint of human activities would be more representative of reality. The aim of this project is to explore different methodologies proposed in the literature in order to establish a valid green water model based on remote sensing data (satellite and drone images) supported by field data. Using Earth Observation image acquisition and analysis, evapotranspiration of the vegetation has been calculated over different areas of interest in the Southwest of France, and different models have been used. Satellite imagery has become widely used to study green water and evapotranspiration. Nagler et al. proposed, in 2004, to estimate evapotranspiration (ET) from air temperature (in °C) and the Enhanced Vegetation Index (EVI).
Other models based on remote sensing indicators or on-site sensors, such as the Two-Source Energy Balance (TSEB) or FAO-56 (Allen et al. 1998, Colaizzi et al. 2014, Alhousseine 2018), also exist, focusing on agriculture with good results. To explore and validate the models, in-situ field measurements have been planned. The instruments used include a pyranometer measuring solar radiation; weather stations measuring rainfall, air temperature and moisture; capacitance probes measuring both temperature and soil water content at various depths; and anemometers measuring wind direction and speed. Remote sensing analyses have been performed using optical multispectral satellite images and drone multispectral acquisitions. This paper presents the results of daily green water quantification on a test field during part of the year, covering two seasons, comparing model results and field measurements.
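A purely illustrative sketch of the kind of empirical vegetation-index ET model referenced above (the functional form and all coefficients are hypothetical placeholders, not the calibrated equation of Nagler et al.):

```python
import math

def et_empirical(evi, t_air_c, a=2.5, b=4.0, c=0.1):
    """Hypothetical empirical daily ET (mm/day) driven by EVI and air
    temperature. The saturating EVI term mimics canopy closure; the
    temperature term suppresses ET below 0 degrees C. Coefficients
    a, b, c are illustrative, not calibrated values."""
    vegetation_term = 1.0 - math.exp(-b * evi)   # saturates as canopy closes
    temperature_term = max(t_air_c, 0.0) * c
    return a * vegetation_term * temperature_term

# Denser, warmer vegetation evapotranspires more in this toy model:
assert et_empirical(0.6, 25.0) > et_empirical(0.2, 25.0)
assert et_empirical(0.6, 25.0) > et_empirical(0.6, 10.0)
```

Energy-balance approaches such as TSEB replace this empirical form with explicit partitioning of net radiation between soil and canopy.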

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Is It Possible to Translate Sentinel-1 Images to Field-Scale ET Product Using Transformers Trained With EEFlux data?

Authors: Fatma Selin Sevimli, Mustafa Serkan Isik, Prof. Dr. Esra Erten
Affiliations: Istanbul Technical University, Geomatics Engineering, Heavy Finance UAB, OpenGeoHub Foundation
Evapotranspiration (ET) is an essential component of the water cycle, as it helps improve agricultural productivity and maintain the balance of climate and ecosystems. Given its tight connection to surface temperature, one of the most commonly used remote sensing approaches for ET estimation is the METRIC (Mapping Evapotranspiration at High Resolution with Internalized Calibration) method. METRIC integrates satellite-derived Land Surface Temperature (LST) and optical imagery with weather data to estimate ET at high spatial resolution, such as the 30 m resolution Landsat-based Analysis-Ready Data (ARD), Earth Engine Evapotranspiration Flux (EEFlux). However, EEFlux data often contain spatial gaps because of the limitations of Landsat's thermal band acquisitions, and cloud coverage, which limits the temporal resolution, remains a persistent challenge. To address these issues, a weakly supervised U-Net architecture was recently trained to learn EEFlux-based ET from Sentinel-1 images for field-scale ET estimation [1]. Although the initial results are promising, additional research is required to assess the impact of long-term phenological and meteorological information on the learning representation. In this study, spatio-temporal modeling of ET time series was conducted using multisource Earth Observation (EO) data collected for cotton fields in Sanliurfa, a city in the Southeastern Anatolia Region of Türkiye, covering more than 13,000 fields. These cotton fields are heavily dependent on irrigation due to the low rainfall and high evaporation rates in the area. The dataset contains multiscale biophysical and geophysical characteristics derived from high spatial resolution Sentinel-1 (S1) backscatter data, ERA5-Land meteorological data, and high resolution (30 m) soil type data [2].
All dynamic features were matched to the 16-day temporal resolution of the target variable through the cotton phenology (April to November) and merged with static features duplicated along the time domain to form the training dataset. ET data were extracted from the EEFlux database to develop forecasting models for long-term irrigation planning and water resource management, ensuring that cotton crops do not experience water stress, which is critical to maintaining optimal growth and yield. Two types of deep learning models for sequential data, Long Short-Term Memory (LSTM) and Transformer architectures, were applied to predict future ET values from the EO-based time series data. Both models capture temporal dependencies and generate accurate forecasts, while the Transformer model, utilizing attention mechanisms, learns broader contextual relationships to provide more precise predictions. In the study, 20% of the total cotton fields are allocated for testing not only the sequential models mentioned above but also the SAR2ET model [1]. In particular, cotton fields that the SAR2ET model has not encountered during either validation or testing are targeted to ensure unbiased performance evaluation. In this way, this study aims to form the basis for data-driven ET estimation using Sentinel-1 images, while highlighting the important EO modalities. [1] S. Cetin, B. Ülker, E. Erten and R. G. Cinbis, "SAR2ET: End-to-End SAR-Driven Multisource ET Imagery Estimation Over Croplands," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 17, pp. 14790-14805, 2024, doi: 10.1109/JSTARS.2024.3447033. [2] Xuemeng Tian, Sytze de Bruin, Rolf Simoes et al., "Spatiotemporal prediction of soil organic carbon density for Europe (2000-2022) in 3D+T based on Landsat-based spectral indices time-series," 23 September 2024, PREPRINT (Version 1), available at Research Square [https://doi.org/10.21203/rs.3.rs-5128244/v1]
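The sequence construction described above, dynamic features at a 16-day step with static features duplicated along time, can be sketched as follows (shapes, values, and the soil-class feature are hypothetical, not the study's pipeline):

```python
def make_sequences(et_series, static_features, window=4):
    """Turn a 16-day ET time series into (input window, next value)
    training pairs for a sequence model (LSTM/Transformer), appending
    static features (e.g., a soil type code) at every time step."""
    samples = []
    for i in range(len(et_series) - window):
        seq = [[et] + static_features for et in et_series[i:i + window]]
        target = et_series[i + window]          # value to forecast
        samples.append((seq, target))
    return samples

et = [1.2, 1.5, 2.1, 2.8, 3.0, 2.6]             # toy ET values (mm/day)
pairs = make_sequences(et, static_features=[3])  # '3' = hypothetical soil class
assert len(pairs) == 2
assert pairs[0][1] == 3.0   # target is the value following the window
```

Each `(seq, target)` pair is one supervised sample; a real pipeline would batch thousands of fields and normalise the features before training.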

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Satellite canopy water content from Sentinel-2, Landsat-8 and MODIS

Authors: Hongliang Ma, Marie Weiss, Daria Malik, Béatrice Berthelot, Dr Marta Yebra, Rachel Nolan, Dr Arnaud Mialon, Jiangyuan Zeng, Håkan Torbern Tagesson, Xingwen Quan, Dr Albert Olioso, Frederic Baret
Affiliations: INRAE, UMR1114, EMMAH, Magellium, Fenner School of Environment & Society, Australian National University, School of Engineering, Australian National University, Hawkesbury Institute for the Environment, Western Sydney University, Centre d'Etudes Spatiales de la Biosphère (CESBIO), Université de Toulouse (CNES/CNRS/INRAE/IRD/UPS), State Key Laboratory of Remote Sensing Science, Aerospace Information Research Institute, Chinese Academy of Sciences, Department of Physical Geography and Ecosystem Science, Lund University, School of Resources and Environment, University of Electronic Science and Technology of China
This study proposes a unified algorithm for canopy water content (CWC) mapping at both decametric and coarse spatial resolution from several widely used optical satellites. Similarly to the algorithm implemented in the SNAP toolbox to derive LAI and fAPAR from Sentinel-2, we trained Artificial Neural Networks (ANNs) with PROSAIL radiative transfer model simulations. The algorithm was improved to better ensure the representativeness of the simulations through a better parameterized distribution of the canopy and vegetation input variables (i.e., leaf traits and soil background) of the PROSAIL model. We relied on the largest open integrated global plant (TRY) and soil spectral (OSSL) databases. We directly used the kernel density estimation (KDE) method to approximate the probability density function (PDF) of each leaf trait in TRY. We also reduced the dimension of the OSSL database (around 35,000 spectra) to avoid over-representation of similar soil spectra by using the soil brightness concept; we found that 47 soil spectra were sufficient to represent the range of spectral shapes with good accuracy. We also stabilized the algorithm prediction by computing the median value of a series of 12 ANNs trained with the same dataset, thus regularizing the inversion process. We found little impact of diverse band combinations, as well as of the inclusion of optical indices, on CWC estimation. The performance of this algorithm was first evaluated at decametric resolution based on ground measurements distributed over five ground campaigns corresponding to diverse climate and biome types. The CWC retrieved from Sentinel-2 and Landsat-8 exhibits satisfying performance, with a correlation coefficient R of 0.81 and an RMSE of 0.046 g/cm².
We then evaluated CWC at 500 m resolution from MODIS by comparing it with Landsat-8 and Sentinel-2 aggregated values over a globally distributed selection of LANDVAL sites, representative of the existing biome types combined with a range of precipitation, soil moisture and vegetation density conditions. The MODIS CWC global maps show reasonable seasonal and spatial patterns compared to multi-frequency microwave-based vegetation optical depth (VOD), and obvious improvements compared to conventionally and extensively used optical indices such as NDWI. Despite the satisfying results obtained in this study (e.g., spatio-temporal behavior and a direct validation exercise, although limited to the several available sites), three main issues can still be identified: (i) the representativeness of the training database (e.g., possible bias in TRY and OSSL spatio-temporal sampling, co-distribution of PROSAIL input variables); (ii) the accuracy and footprint of the ground measurements; and (iii) the saturation effect for dense canopies.
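Two ingredients of the training scheme, KDE sampling of leaf-trait distributions and the median over an ANN ensemble, can be sketched as follows (a toy illustration with synthetic values; plain functions stand in for trained networks):

```python
import random
import statistics

def kde_sample(trait_values, bandwidth, n):
    """Draw n values from a Gaussian kernel density estimate of a
    leaf-trait distribution: pick an observation, jitter it."""
    return [random.choice(trait_values) + random.gauss(0.0, bandwidth)
            for _ in range(n)]

def ensemble_predict(models, x):
    """Median over several regressors stabilises the inversion, in the
    spirit of the 12-ANN median described above."""
    return statistics.median(m(x) for m in models)

random.seed(0)
chlorophyll = [25.0, 40.0, 55.0, 35.0, 60.0]   # synthetic trait values
samples = kde_sample(chlorophyll, bandwidth=3.0, n=1000)
assert min(chlorophyll) - 15 < min(samples) and max(samples) < max(chlorophyll) + 15

models = [lambda x: x * 0.9, lambda x: x * 1.0, lambda x: x * 1.4]
assert ensemble_predict(models, 10.0) == 10.0   # median ignores the outlier
```

In the study the sampled traits parameterize PROSAIL simulations, and the ensemble members are ANNs trained on the same simulated dataset.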

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Terrestrial water and energy flux dynamics: HOLAPS framework insights during extreme heat events

Authors: Almudena García-García, Jian Peng
Affiliations: Helmholtz-zentrum Für Umweltforschung Gmbh - Ufz, Remote Sensing Centre for Earth System Research, Leipzig University
Accurately understanding the interactions between the land surface and the atmosphere, specifically the exchange of energy and water fluxes, is essential for predicting changes in climate extremes such as heatwaves and precipitation extremes. Traditional methods, such as the eddy covariance approach, are widely used to observe these fluxes but are constrained by issues such as limited spatial and temporal coverage. Similarly, satellite-derived soil moisture (SM) products are extensively used at larger scales but are restricted to surface soil layers and lack detailed vertical information. Integrating remote sensing data with physical modeling offers a promising approach to enhance data coverage and resolution while addressing the need for complete energy and water flux estimates. This study explores the use of the High resOlution Land Atmosphere Parameters from Space (HOLAPS) framework, which applies remote sensing data to generate high-resolution, hourly estimates of energy and water fluxes across Europe. HOLAPS outputs, including evapotranspiration (ET), sensible heat flux (H), and soil moisture, were compared with FLUXNET measurements and water balance-derived data. The framework's performance was also benchmarked against existing satellite-based products. The findings demonstrate that HOLAPS performs comparably to or better than other available products, particularly during summer months and under hot weather conditions. HOLAPS shows strong potential for applications in land management, including agriculture and forestry, due to its ability to provide consistent, long-term estimates with high spatial and temporal resolution. Additionally, it offers a robust tool for advancing research on land–atmosphere interactions by leveraging Earth observation data.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Ensemble irrigation modeling with AquaCrop v7.2 in NASA’s Land Information System, verified using in situ and satellite observations

Authors: Louise Busschaert, Prof. Gabriëlle J. M. De Lannoy, dr. Sujay V. Kumar, dr. Martha Anderson, Michel Bechtold
Affiliations: Department of Earth and Environmental Sciences, KU Leuven, Hydrological Science Laboratory, NASA Goddard Space Flight Center, Agricultural Research Service, Hydrology and Remote Sensing Laboratory, US Department of Agriculture
Irrigation in agriculture represents the largest component of anthropogenic water use, a demand expected to increase under a changing climate and growing population. Despite its critical importance, accurately estimating irrigation at fine spatial and temporal scales remains a significant challenge. Previous research has explored the use of satellite remote sensing, modeling, or a combination of both to approximate irrigation water usage, from field to regional levels. While irrigation modeling can offer estimates at all times and locations, it relies on assumptions and parameters that typically vary in space and time, and are ultimately farmers’ decisions. Furthermore, even with an optimally parametrized model, there is still a large uncertainty in the model input, such as the meteorological forcings. Therefore, this research explores the potential of ensemble modeling to better constrain the uncertainty of irrigation estimates. The ensemble is generated by perturbing (1) the meteorological forcings (radiation, precipitation), and (2) selected irrigation parameters, such as the irrigation threshold and the time interval between irrigation events. This study leverages the integration of AquaCrop v7.2, the latest version of the Food and Agriculture Organization (FAO) crop growth model, into NASA’s Land Information System (LIS). The integration of AquaCrop into the LIS framework makes it possible to perform ensemble simulations with AquaCrop over any domain and at any resolution. In this study, the model is run at field-scale resolution (< 1 km²) for selected regions with intense irrigation in Europe over the last decade. An ensemble verification is performed using field-level irrigation observations and satellite-based evapotranspiration retrievals. More specifically, we evaluate whether the mean model estimates and their uncertainty envelop the reference data.
It is discussed how best to choose the spread of the various perturbed input variables and parameters to create a realistic ensemble of irrigation and evapotranspiration. The verification evaluates the robustness of the ensemble and is a first step towards a data assimilation system intended to estimate irrigation.
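The ensemble generation described above can be sketched schematically (noise levels, member count, and parameter names are illustrative assumptions, not the study's configuration):

```python
import random

def perturb_ensemble(precip, irrigation_threshold, n_members=24, seed=42):
    """Build an ensemble by jointly perturbing a forcing (precipitation,
    multiplicative Gaussian noise) and an irrigation parameter (the
    soil-water threshold that triggers irrigation, uniform noise)."""
    rng = random.Random(seed)
    members = []
    for _ in range(n_members):
        p = [max(0.0, x * rng.gauss(1.0, 0.2)) for x in precip]
        thr = irrigation_threshold * rng.uniform(0.8, 1.2)
        members.append((p, thr))
    return members

precip = [0.0, 4.2, 1.1, 0.0, 7.5]            # toy daily precipitation (mm)
ens = perturb_ensemble(precip, irrigation_threshold=0.5)
assert len(ens) == 24
assert all(all(v >= 0.0 for v in p) for p, _ in ens)  # no negative rainfall
```

Each member would then drive one AquaCrop run, and the resulting spread in simulated irrigation is compared against the reference observations.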

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Altimeter DREAMing in River Basins - Focus on Africa

Authors: Prof Philippa Berry, Dr Jerome Benveniste
Affiliations: Roch Remote Sensing (RRS), formerly ESA ESRIN
The role of satellite altimetry in measuring river and lake heights is well-established, with data forming vital inputs to river basin models. However, additional information is encoded in altimeter echoes over these surfaces. In order to exploit this resource, DRy EArth Models (DREAMs) were developed. Originally created to investigate satellite altimeter backscatter from desert and semi-arid terrain, DREAMs have now been crafted over river basins, using multi-mission satellite data and ground truth to model the response of a completely dry surface to Ku-band nadir illumination. The first DREAM with significant hydrological content was created over the Kalahari desert, including the Okavango river basin. DREAMcrafting was then attempted over the Congo and Amazon basins. Comparing the Congo basin DREAM with independent data (Dargie et al., 2017) revealed a wealth of DREAM surface hydrological information. It was realised that these DREAMs could be used to assess and interpret altimeter data over rivers and wetlands. Detailed masks have now been generated from the DREAMs to classify pixels as lake/river, wetland/seasonally flooded and soil/rock surface types to facilitate altimeter data analysis. For example, in the latest Congo model, 30-35% of the DREAM is identified as wetland/seasonally inundated surface (depending on mask classification criteria) and 14-18% as rivers. This paper seeks to answer the following questions: 1) What can DREAMing add to our information store in river basins? 2) How effectively do current and previous altimeter missions recover height and backscatter data over these river basins? 3) What proportion of the overflown river basin surfaces must be monitored to optimise retrieval of these data over rivers and wetlands? New DREAMs over Africa now extend the coverage, encompassing more than 30 river basins including the Congo, Niger, Okavango, Zambezi and Volta.
DREAMs have also been crafted over parts of Australia, the Amazon basin and Arabia. As over 85% of Africa has now been DREAMed, this paper focuses on African rivers, lakes and wetlands, showcasing multiple river basins in a range of surface conditions. Envisat, ERS-1/2, Jason-1/2, CryoSat-2 and Sentinel-3A/B altimeter data were utilised in this study, together with a database of over 86,000 graded altimeter River and Lake height time series. The recently developed puddle filter (created to filter out altimeter echoes where small puddles of surface water ‘contaminate’ altimeter soil moisture estimates) is found to show clear temporal patterns mirroring local rainfall or river height changes. Altimetry presents a unique information source in this regard in rainforest areas, as the nadir reflection is dominated by the ground return. Very detailed DREAM models are required to capture the intricate structure in river basins. It is noted that smaller tributaries in major river basins are below the current 10 arc second spatial resolution of the DREAMs, and are classified with their surrounding terrain as wetland pixels. Within the constraints of satellite orbit and repeat period, data can be successfully gathered over the majority of these overflown DREAM surfaces. The highest altimeter data retrieval rate over river basin DREAMs for all missions, for all areas where data were gathered, is found over ‘river’ and ‘wetland’ pixels, with lower percentages over ‘soil’ pixels. This is an expected outcome, as targeting ‘soil’ pixels selects for rougher topography. Of prior missions, Envisat performed best, recovering data from a high proportion of river, lake and wetland surfaces even in rough terrain; ERS-1 and ERS-2 were also very successful. For current missions, the Sentinel-3A/3B OLTC masks are found to preclude monitoring of the vast majority of ‘soil’ pixels over all DREAMs. Of substantive concern, the majority of wetland surfaces and smaller tributaries are also excluded.
For example, in the Congo basin the current DREAM shows that monitoring is required over 48-49% of the overflown surface to acquire wetland and river data, with an additional requirement to monitor ‘puddles’. The ability of nadir-pointing altimeters to penetrate the vegetation canopy gives a unique perspective in rainforest areas. Along-track time series of surface inundation and also of soil moisture can be generated at the spatial resolution of the underlying DREAMs, currently 10 arc seconds. The major constraint, as with altimeter height measurements, is the spatio-temporal sampling, so use is envisaged in combination with other remotely sensed and in-situ data. The monitoring capabilities of the current generation of SRAL altimeters are not being fully realised over inland water due to critical constraints on the OLTC masks. In this era of climate change, the observation strategy should be focussed towards global monitoring. Evolving climate patterns and changing user requirements in river basins can alter monitoring priorities in unforeseen ways, and time series of prior measurements are essential to provide baseline data. Dargie, G.C., Lewis, S.L., Lawson, I.T., Mitchard, E.T., Page, S.E., Bocko, Y.E., Ifo, S.A. (2017). Age, extent and carbon storage of the central Congo Basin peatland complex. Nature 542, pp. 86-90. doi:10.1038/nature21048.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Development of a high resolution European Drought Monitor

Authors: Pallav Kumar Shrestha, Prof. Dr. rer. nat. habil. Luis Samaniego, Dr. Ehsan Modiri
Affiliations: Helmholtz Centre For Environmental Research - UFZ
Droughts are the costliest natural disasters in Europe, contributing to losses of 621 million euros per drought event. In 2022-2023, 23 countries worldwide declared drought emergencies, including eight in Europe (UNCCD, 2023). Besides cutting crop yields (e.g., a 16% loss in Germany's 2018 drought), droughts fuel wildfires and heatwaves. The ability to monitor, model, and forecast the occurrence of droughts seamlessly, across several scales in space (1 km to 25 km) and time (weeks to seasons to decades), constitutes one of the great challenges in European hydro-meteorological sciences. We respond with the European Drought Monitor (EDM), with high resolution (up to 1 km) and a latency of a few days. The EDM is based on the precursor system of the German Drought Monitor (Zink et al., 2016; Boeing et al., 2022) and previous scientific demonstrations and analyses at the European level (Thober et al., 2015; Samaniego et al., 2017; Thober et al., 2018; Samaniego et al., 2018; Samaniego et al., 2019; Wanders et al., 2019; Rakovec et al., 2022). The system consists of a single continental modelling domain of Europe employing the mesoscale hydrological model (mHM, https://mhm-ufz.org), incorporating major European reservoirs and a new irrigation module. Advanced Earth observation (EO) products are used in conjunction with downscaled 1 km ERA5-Land, both bias-corrected using EMO-1, to cut the latency of meteorological forcings for near-real-time initialization of the hydrological model. Furthermore, EO products are used in the estimation of irrigation demand and command area, as well as in validation of model output including evaporation (CGLS at 1 km), soil moisture (ESA CCI SM v08.1 at 25 km), snow water equivalent (SMOS at 50 km), and total water storage (GRACE-FO at 100 km).
Once the reliability of the system is demonstrated, the EDM generates drought indicators such as the soil moisture index (SMI) and the heat wave index (HWI) as proxies for impacts on irrigation for end-users. For reproducibility, the EDM backend is powered by ecFlow, the workflow management system developed by ECMWF. The comprehensive EO augmentation demonstrated in the EDM resonates with the Sendai Framework for Disaster Risk Reduction, which, among others, calls for improved monitoring of hazards using EO data. With the EO integration, we expect the operationalized EDM to generate "timely" warnings and help us decipher novel solutions to the challenge of the evolving European droughts.
References:
Boeing, F., Rakovec, O., Kumar, R., Samaniego, L., Schrön, M., Hildebrandt, A., Rebmann, C., Thober, S., Müller, S., Zacharias, S., et al. (2022). High-Resolution Drought Simulations and Comparison to Soil Moisture Observations in Germany. Hydrology and Earth System Sciences 26(19), pp. 5137-5161. doi:10.5194/hess-26-5137-2022.
Rakovec, O., Samaniego, L., Hari, V., Markonis, Y., Moravec, V., Thober, S., Hanel, M., Kumar, R. (2022). The 2018-2020 Multi-Year Drought Sets a New Benchmark in Europe. Earth's Future 10(3), e2021EF002394. doi:10.1029/2021EF002394.
Samaniego, L., Thober, S., Kumar, R., Wanders, N., Rakovec, O., Pan, M., Zink, M., Sheffield, J., Wood, E. F., Marx, A. (2018). Anthropogenic Warming Exacerbates European Soil Moisture Droughts. Nature Climate Change 8(5), pp. 421-426. doi:10.1038/s41558-018-0138-5.
Samaniego, L., Kumar, R., Thober, S., Rakovec, O., Zink, M., Wanders, N., Eisner, S., Müller Schmied, H., Sutanudjaja, E., Warrach-Sagi, K., et al. (2017). Toward Seamless Hydrologic Predictions across Spatial Scales. Hydrology and Earth System Sciences 21(9), pp. 4323-4346. doi:10.5194/hess-21-4323-2017.
Samaniego, L., Thober, S., Wanders, N., Pan, M., Rakovec, O., Sheffield, J., Wood, E. F., Prudhomme, C., Rees, G., Houghton-Carr, H., et al. (2019). Hydrological Forecasts and Projections for Improved Decision-Making in the Water Sector in Europe. Bulletin of the American Meteorological Society 100(12), pp. 2451-2472. doi:10.1175/BAMS-D-17-0274.1.
Thober, S., Kumar, R., Sheffield, J., Mai, J., Schäfer, D., Samaniego, L. (2015). Seasonal Soil Moisture Drought Prediction over Europe Using the North American Multi-Model Ensemble (NMME). Journal of Hydrometeorology 16(6), pp. 2329-2344. doi:10.1175/JHM-D-15-0053.1.
Thober, S., Kumar, R., Wanders, N., Marx, A., Pan, M., Rakovec, O., Samaniego, L., Sheffield, J., Wood, E. F., Zink, M. (2018). Multi-Model Ensemble Projections of European River Floods and High Flows at 1.5, 2, and 3 Degrees Global Warming. Environmental Research Letters 13(1), 014003. doi:10.1088/1748-9326/aa9e35.
UNCCD (2023). Global Drought Snapshot 2023: The Need for Proactive Action. Tech. rep., United Nations Convention to Combat Desertification.
Wanders, N., Thober, S., Kumar, R., Pan, M., Sheffield, J., Samaniego, L., Wood, E. F. (2019). Development and Evaluation of a Pan-European Multimodel Seasonal Hydrological Forecasting System. Journal of Hydrometeorology 20(1), pp. 99-115. doi:10.1175/JHM-D-18-0040.1.
Zink, M., Samaniego, L., Kumar, R., Thober, S., Mai, J., Schäfer, D., Marx, A. (2016). The German Drought Monitor. Environmental Research Letters 11(7), 074002. doi:10.1088/1748-9326/11/7/074002.
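A soil moisture index of the kind used in such drought monitors is typically the empirical percentile of current soil moisture within its local climatology; a minimal sketch of that idea (an illustration only, not the mHM/EDM implementation):

```python
def smi(current, climatology):
    """Empirical percentile of today's soil moisture within the
    historical distribution for the same grid cell and time of year.
    Values near 0 indicate exceptional drought."""
    return sum(1 for x in climatology if x <= current) / len(climatology)

# Synthetic climatology of soil moisture fractions for one grid cell:
history = [0.18, 0.22, 0.25, 0.27, 0.30, 0.33, 0.35, 0.38, 0.40, 0.42]
assert smi(0.19, history) == 0.1   # drier than 90% of the record
assert smi(0.42, history) == 1.0
```

Mapped over every 1 km cell, such a percentile field is what gets classified into drought severity categories.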

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Satellite-based optical characterization of a RAMSAR lagoon in Argentina

Authors: Sofía Paná, Francisco Nemiña, Dr Nicola Ghirardi, Mariano Bresciani, Claudia Giardino, Matias Bonansea, Inés del Valle Asís, Anabella Ferral
Affiliations: Mario Gulich Institute for Advanced Space Studies, Centre for Research and Studies on Culture and Society (CIECS), National Scientific and Technical Research Council (CONICET), Córdoba National University (UNC), Argentina's Space Activities Commission (CONAE), CNR–Institute for Electromagnetic Sensing of the Environment, CNR–Institute of BioEconomy, Institute of Earth Sciences, Biodiversity and Environment (ICBIA; CONICET-UNRC), Department of Geology, Faculty of Exact, Physical-Chemical and Natural Sciences, National University of Río Cuarto (UNRC)
Located in the province of Córdoba (Argentina), the Mar Chiquita Lagoon is one of the largest saltwater lakes in the world, characterized by exceptional salinity and intense climatic conditions, such as salt storms. The lagoon, along with the northern coastal areas and the estuaries of its main tributaries (the Suquía and Xanaes Rivers), is included in Ansenuza National Park and, together with the Dulce River wetlands, has been designated a Wetland of International Importance under the Ramsar Convention on Wetlands of International Importance Especially as Waterfowl Habitat. The Mar Chiquita Lagoon is currently threatened by human activities such as agriculture and urbanization within its catchment area. Additionally, the region is identified as a global hotspot for the impacts of climate change. Remote sensing tools are particularly valuable for the study of water resources, enabling continuous monitoring that allows researchers to track seasonal variations and long-term trends in water quality. These data are essential for understanding the dynamics of water bodies and ecosystems in response to environmental change. This study investigates the potential of three satellite sensors for classifying water types in the Mar Chiquita Lagoon by implementing a water classification system based on optical property parameters. Optical Water Type (OWT) classifications aim to manage optical complexity by identifying appropriate ocean colour algorithms tailored to each water type, facilitating the understanding of biogeochemical cycles from local to global scales. Two complementary methods were used to achieve this objective. First, a Principal Component Analysis (PCA) was performed on the full set of spectral bands for Sentinel-2 (S2), Sentinel-3 (S3) and PACE to identify the general optical characteristics of the lagoon. An unsupervised classification (K-means) was then performed on the basis of the PCA results. Finally, an OWT classification was applied to the resulting clusters. The effectiveness of these methods was evaluated using S2, S3 and PACE satellite data acquired on 18 April 2024, to test their applicability over different spatial and spectral scales. The results revealed that the lagoon can be spectrally classified into three distinct zones: an initial zone influenced by the Suquía and Xanaes Rivers, a mixing zone, and a final zone influenced by the Dulce River. The K-means cluster analysis identified three homogeneous zones for S3, while four homogeneous zones were detected for S2 and PACE. These zones were further classified into OWT categories 5A and 5B. The study highlights the importance of satellite-based remote sensing tools in monitoring water quality in the Mar Chiquita Lagoon. The employed techniques also provide insights into the zones of influence of each river, analyzed across varying temporal and spatial scales.
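The two-step pipeline described above (PCA over the full set of spectral bands, then K-means clustering on the component scores) can be sketched as follows. The reflectance array, band count and cluster number are illustrative stand-ins, not the study's actual data or configuration:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic stand-in for a stack of water-pixel reflectances:
# n_pixels x n_bands (e.g. the visible/NIR bands of a multispectral sensor).
n_pixels, n_bands = 5000, 10
reflectance = rng.random((n_pixels, n_bands))

# Step 1: PCA over the spectral bands to summarise the lagoon's
# general optical variability in a few components.
pca = PCA(n_components=3)
scores = pca.fit_transform(reflectance)

# Step 2: unsupervised K-means classification on the PCA scores.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(scores)

print(pca.explained_variance_ratio_)
print(np.bincount(labels))  # pixels per spectral zone
```

Each resulting cluster would then be matched to an Optical Water Type category in a final labelling step.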

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: RainGNSS: an In-Situ Network for Altimetry, Water Vapor and Precipitation Validation of Satellite-Based Observations.

Authors: Bruno Picard, Julianna Devillers, Jean-Christophe Poisson, Yannick Riou, Valentin Fouqueau
Affiliations: Fluctus Sas, vorteX-io
Accurate in-situ rainfall data and dense river monitoring networks are crucial for managing the increasing frequency of extreme weather events such as floods and droughts, driven by climate change. These measurements enable precise, high-resolution datasets critical for predicting localized hydrological phenomena, improving early warning systems, and validating satellite observations. Using Global Navigation Satellite System (GNSS) signals to estimate water vapor and precipitation is a promising application for meteorology and hydrology. The RainGNSS project uses VorteX.io Micro-Stations equipped with low-cost precise GNSS receivers to provide Zenith Total Delay (ZTD) and precipitable water vapor (PWV) estimations. Distributed along rivers, these stations enable continuous high-frequency measurements and offer a complementary product for monitoring flash floods and validating satellite observations, particularly for the SWOT mission. The addition of ZTD-derived rainfall estimation complements existing hydrological measurements (water surface elevation, water surface velocimetry, water surface temperature), enhancing flood prediction capabilities. GNSS data processing is based on a chain using RTKLIB and the Precise Point Positioning (PPP) method, adapted to low-cost GNSS receivers. ZTD retrieval relies on a precise troposphere model (Saastamoinen) for the dry component and the retrieval of the wet component as an unknown parameter. We present the developments to refine the accuracy and dynamics of the data. Then, we discuss the performance of the GNSS-derived rainfall feature and its validation against independent datasets such as ECMWF analyses and measurements from a Davis Vantage Vue weather station. Finally, we discuss the generalization of the approach to a larger network and its validation against weather radar precipitation.
This project represents a step towards cost-effective, scalable river monitoring networks capable of providing critical data for flood risk management and climate resilience.
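The dry (hydrostatic) component mentioned above is commonly computed with the Saastamoinen model. A minimal sketch of the standard zenith hydrostatic delay formulation follows; the station pressure, latitude and height are hypothetical values, not project data:

```python
import math

def saastamoinen_zhd(pressure_hpa, lat_deg, height_m):
    """Zenith hydrostatic (dry) delay in metres from the Saastamoinen model.

    pressure_hpa : surface pressure at the antenna [hPa]
    lat_deg      : station latitude [degrees]
    height_m     : station height [m]
    """
    lat = math.radians(lat_deg)
    # Gravity correction term of the Saastamoinen model.
    f = 1.0 - 0.00266 * math.cos(2.0 * lat) - 2.8e-7 * height_m
    return 0.0022768 * pressure_hpa / f

# A mid-latitude station near sea level: ZHD is roughly 2.3 m.
zhd = saastamoinen_zhd(1013.25, 45.0, 50.0)
print(round(zhd, 3))
```

Subtracting this ZHD from the estimated ZTD leaves the wet delay, which is then scaled (by a dimensionless factor of roughly 0.15–0.16, depending on the weighted mean atmospheric temperature) to obtain PWV.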

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Empirical Orthogonal Function (EOF) Analysis of Water Vapor Data from GPS and MODIS

Authors: İlke Deniz
Affiliations: Zonguldak Bülent Ecevit University
Water vapor is the most abundant natural greenhouse gas in the atmosphere. The distribution of water vapor in the atmosphere shows a high degree of temporal and spatial change. Current studies include EOF analysis, modeling, propagation characteristics, trends of dynamic structures such as the atmosphere, and their application in data filtering. Furthermore, the "signal" and "noise" in a time series may be reliably identified using EOF analysis. It is also possible to ascertain the precision of time series by using this characteristic. The patterns of the important principal components can be examined for their physical meaning. In this study, MODIS and GPS water vapor data from 10 TUSAGA-Active stations located in the Western Black Sea Region are used. The data consist of twice-daily observations between 16 February and 23 April 2023. The water vapor time series of MODIS, GPS, and the GPS-MODIS difference are evaluated by EOF analysis. Based on the EOF analysis of the MODIS, GPS, and GPS-MODIS difference time series, the distributions of the "residuals" computed for each station are consistent with each other. The variance ratio of the significant principal component PC1, as determined by the EOF analysis of the MODIS data, was 0.29; the root mean square error (RMSE) was found to be ±4.25 mm with 95% reliability. Moreover, the variance ratio of the significant principal component PC1 was 0.83 for the EOF analysis of the GPS data; with 95% reliability, the RMSE was determined to be ±1.31 mm. As for the EOF analysis of the GPS-MODIS difference time series, the variance ratio of the significant principal component PC1 was 0.41, and the RMSE was found to be ±4.54 mm with 95% reliability. These precisions are consistent with the findings of Xu and Liu (2024) and Zhu et al. (2021). The significant principal components of the MODIS, GPS, and GPS-MODIS time series are shown to have patterns compatible with the test area's topography.
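In practice, EOF analysis of a time-by-station data matrix reduces to a PCA of the anomaly matrix, from which the PC1 variance ratio reported above is read off. A minimal sketch with synthetic twice-daily series (the station count matches the study, the data and epoch count are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for PWV series: n_epochs twice-daily observations
# at n_stations stations (rows: time, columns: station), built as a
# shared oscillation plus station-level noise.
n_epochs, n_stations = 132, 10
common_signal = np.sin(np.linspace(0.0, 8.0 * np.pi, n_epochs))
data = common_signal[:, None] + 0.3 * rng.standard_normal((n_epochs, n_stations))

# EOF analysis = PCA of the anomaly matrix: remove each station's mean,
# then take the SVD. Rows of Vt are the spatial EOF patterns, and the
# principal-component time series are U * S.
anomalies = data - data.mean(axis=0)
U, S, Vt = np.linalg.svd(anomalies, full_matrices=False)
variance_ratio = S**2 / np.sum(S**2)
pc1 = U[:, 0] * S[0]

print(round(variance_ratio[0], 2))  # fraction of variance explained by PC1
```

Because the synthetic stations share one dominant mode, PC1 explains most of the variance here; real MODIS or GPS series give the smaller ratios quoted in the abstract.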

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A New Upper Tropospheric Humidity Dataset Based on Passive Microwave Sounders

Authors: Dr Elizabeth Good, Philip Whybra, Rob King, Stephen
Affiliations: Met Office
Version 2 (v2.0) of the global Upper Tropospheric Humidity (UTH) dataset, produced within the framework of the EUMETSAT Satellite Application Facility on Climate Monitoring (CM SAF), is presented. This climate data record provides a time series of UTH estimates derived from passive microwave (MW) sounders. UTH contributes significantly to the atmospheric greenhouse effect through its strong influence on the outgoing longwave radiation, despite water vapour's smaller concentration by mass in the upper troposphere compared with the lower troposphere. The CM SAF UTH v2.0 dataset is based on data from twelve MW sounders operating at 183 GHz in polar orbit, combined into a single time series covering the period 6 July 1994 to 31 December 2018. It is accessible via the CM SAF website (https://www.cmsaf.eu/EN/Home/home_node.html). The MW sounders include the Special Sensor Microwave - Humidity (SSM/T-2) onboard DMSP-F[11, 12, 14, 15], the Advanced Microwave Sounding Unit-B (AMSU-B) onboard NOAA-[15, 16, 17], the Microwave Humidity Sounder (MHS) onboard NOAA-[18, 19] and Metop-[A, B], and the Advanced Technology Microwave Sounder (ATMS) onboard SNPP. CM SAF UTH v2.0 is a near-global 1°x1° latitude-longitude dataset available at both hourly and daily time steps. Observations from the twelve different sensors have been assigned to the nearest Coordinated Universal Time (UTC) hour, so hourly observations are not available at all hours for all grid cells. Where observations from more than one sensor are available for a single grid cell, a priority ordering is applied in which the best-performing sensor is used. This priority ordering is ascertained by evaluating each sensor independently against a reference dataset based on the ERA5 reanalysis. The daily product is derived from the mean of all available hourly UTH observations on that day, where there is a minimum of two hourly observations. For both the hourly and daily products, uncertainty components that capture sources of independent (or random), structured (or locally correlated) and common (or systematic) errors in the data are also provided for each grid cell. These uncertainties have been propagated from the input MW top-of-atmosphere brightness temperatures through the UTH retrieval; uncertainties due to the retrieval itself are also included in these uncertainty components. In addition, an estimate of the spatial sampling uncertainty is provided. Invalid observations affected by deep convective or precipitating clouds, and/or radiation emitted from the surface, together with any spurious observations from individual MW sounders, have been removed from the dataset. The UTH provided typically represents a broad atmospheric layer between 500 and 200 hPa. However, the exact height of this layer depends on the atmospheric conditions at the time of the observation. An optional fixed-layer approximation adjustment is supplied that users can apply to obtain an estimated mean relative humidity (RH) between ±60° latitude for a fixed layer between 500 and 200 hPa (mean_RH). However, users are advised to take care when using this correction, especially outside the tropics where the mean_RH is of lower quality. Users are also advised to take care when using UTH observations above ±60° latitude, as the retrieval is sometimes less reliable at high latitudes.
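The per-grid-cell priority ordering and the minimum-of-two-hourly-observations rule for the daily product can be sketched as follows; the arrays, grid size and sampling pattern are synthetic stand-ins, not the CM SAF processing code:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical hourly UTH fields (%) from three sensors on a small grid,
# with NaN where a sensor has no overpass; sensors are listed in priority
# order (best-performing first, as ranked against an ERA5-based reference).
n_sensors, n_hours, ny, nx = 3, 24, 4, 4
uth = rng.uniform(10.0, 60.0, (n_sensors, n_hours, ny, nx))
uth[rng.random(uth.shape) < 0.7] = np.nan  # sparse orbital sampling

# Per hour and grid cell, keep the highest-priority sensor that has data:
# fill from lowest priority upward so better sensors overwrite last.
hourly = np.full((n_hours, ny, nx), np.nan)
for sensor in range(n_sensors - 1, -1, -1):
    valid = ~np.isnan(uth[sensor])
    hourly[valid] = uth[sensor][valid]

# Daily product: mean of the hourly values, but only where at least two
# hourly observations are available that day.
counts = np.sum(~np.isnan(hourly), axis=0)
daily = np.where(counts >= 2, np.nanmean(hourly, axis=0), np.nan)
```

The overwrite order is the key trick: writing lower-priority sensors first means the final value in each cell always comes from the best sensor that observed it.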

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: The use of EO-derived irrigation maps to assess irrigation impacts on water availability of the Rhine basin

Authors: Devi Purnamasari, Albrecht Weerts, Willem van Verseveld, Ryan Teuling, Joost Buitink, Brendan Dalmijn, Frederiek Sperna Weiland
Affiliations: Deltares, Wageningen University
The Rhine Basin has experienced summer droughts that led to concerns about water availability and increased irrigation water demand. However, in temperate basins like the Rhine, high-resolution maps of irrigated areas are often lacking. Here, as part of the Horizon Europe project STARS4Water, we mapped irrigated areas in the Rhine Basin at 1 km resolution for water availability assessment using a hydrological model. The approach uses the difference between modelled (without irrigation) Land Surface Temperature (LST) and LST observed by the Moderate Resolution Imaging Spectroradiometer (MODIS). These LST differences, after excluding evapotranspiration driven primarily by precipitation, provide distinct features for classification with a random forest algorithm. The irrigated area maps were evaluated against national agricultural statistics and compared with existing maps of irrigated areas. The resulting irrigation maps provide insights into the interannual variability of irrigated cropland extent and location in the Rhine Basin. Subsequently, the derived irrigation maps are used in wflow_sbm for assessing and modeling agricultural water use in the region. The wflow_sbm model, now with irrigation, also includes other water usage (e.g. domestic, industrial, livestock). Finally, we compare the hydrological fluxes and state variables of wflow_sbm with and without water usage (including irrigation) against point and EO-based observations such as discharge, total water storage, and LST.
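A hedged sketch of the classification step, using LST differences as random-forest features; the feature construction, label source, class balance and signal magnitude here are all illustrative assumptions, not the study's configuration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)

# Hypothetical per-pixel features: modelled-minus-observed LST over
# n_dates dates. Irrigated pixels cool relative to a no-irrigation
# model, so their differences are systematically positive.
n_pixels, n_dates = 2000, 12
lst_diff = rng.normal(0.0, 1.0, (n_pixels, n_dates))
irrigated_truth = rng.random(n_pixels) < 0.3
lst_diff[irrigated_truth] += 2.0  # add the irrigation cooling signal

# Random forest trained on a labelled subset (standing in for pixels
# with known status, e.g. from agricultural statistics), then applied
# to the remaining pixels.
train = rng.random(n_pixels) < 0.5
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(lst_diff[train], irrigated_truth[train])
predicted = clf.predict(lst_diff[~train])
accuracy = np.mean(predicted == irrigated_truth[~train])
print(round(accuracy, 2))
```

With a clear thermal signal the classifier separates the two classes easily; real LST differences are far noisier, hence the evaluation against national statistics described above.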

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Exploring the potential of sub-daily microwave remote sensing observations for estimating evaporation

Authors: Emma Tronquo, Susan Steele-Dunne, Hans Lievens, Niko E.C. Verhoest, Diego G. Miralles
Affiliations: Hydro-Climate Extremes Lab (H-CEL), Ghent University, Department of Geoscience and Remote Sensing, Delft University of Technology
Evaporation (E) plays a key role in the terrestrial water, energy, and carbon cycles, and modulates climate change through multiple feedback mechanisms. Its accurate monitoring is thus crucial for water management, meteorological forecasts, and agriculture. However, traditional in situ measurements of E are limited in terms of availability and spatial coverage. As an alternative, global monitoring of E using satellite remote sensing, while indirect, holds the potential to fill this need. Today, different models exist that yield E estimates by combining observable satellite-based drivers of this flux, but they typically work at daily or even monthly time scales. As natural evaporation processes occur at sub-daily resolution, there is a need to estimate evaporation at finer temporal scales to capture the diurnal variability of this flux and to monitor water stress impacts on transpiration. Likewise, interception loss shows high intra-day variability, mainly concentrated during precipitation events and shortly after. Moreover, the moisture redistribution within the soil–plant–atmosphere continuum as a consequence of transpiration is highly non-linear and has a strong daily cycle. Sub-daily microwave data could inform about these short-term processes, and as such improve process understanding and monitoring of E and its different components, while providing all-sky retrievals. The Sub-daily Land Atmosphere INTEractions (SLAINTE) mission, an ESA New Earth Observation Mission Idea (NEOMI), aims to provide sub-daily SAR observations of soil moisture, vegetation optical depth (VOD) and wet/dry canopy state, enabling a more accurate estimation of E and the potential to advance E science beyond its current boundaries. This study investigates the potential value of future SLAINTE observations for improving the estimation of E at four eddy covariance sites. In this regard, Observing System Simulation Experiments (OSSEs) are assembled.
In total, three experiments using synthetic microwave observations are implemented, focusing on the role of (1) sub-daily surface soil moisture in improving bare soil evaporation and transpiration estimates, (2) sub-daily VOD in improving transpiration estimates, and (3) sub-daily microwave observations that inform about the wetness state of the canopy, to address the uncertainties related to rainfall interception loss. The Global Land Evaporation Amsterdam Model (GLEAM; Miralles et al., 2011) is used for the simulations. GLEAM is a state-of-the-art E model that estimates the different E components (mainly transpiration, bare soil evaporation, and interception loss) using satellite data, including microwave observations of surface soil moisture and VOD. The model is here adapted to work at sub-daily resolution. The results of the OSSEs illustrate that prospective sub-daily microwave data have the ability to improve the estimation of E and its separate components, even if based on current-generation E models, and highlight the need for satellite missions providing sub-daily microwave data, like SLAINTE, to better comprehend the flow of water in ecosystems.
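The OSSE logic can be illustrated with a deliberately minimal sketch: a nature run supplies the "true" state, synthetic observations are drawn from it with instrument noise, and a trivial open-loop model is nudged towards them. The numbers, noise level and nudging scheme are illustrative only; GLEAM's actual modelling and assimilation are far more elaborate:

```python
import numpy as np

rng = np.random.default_rng(4)

# Nature run: "true" surface soil moisture with a diurnal cycle,
# 30 days at 3-hourly (sub-daily) resolution.
n_steps = 8 * 30
truth = 0.3 + 0.1 * np.sin(np.arange(n_steps) * 2.0 * np.pi / 8.0)

# Synthetic sub-daily microwave retrievals: truth plus instrument noise.
obs = truth + rng.normal(0.0, 0.02, n_steps)

# Open-loop "model": a constant prior with no diurnal dynamics;
# the analysis nudges it towards the synthetic observations.
state = np.full(n_steps, 0.3)
analysis = state + 0.6 * (obs - state)

rmse_open = np.sqrt(np.mean((state - truth) ** 2))
rmse_assim = np.sqrt(np.mean((analysis - truth) ** 2))
print(rmse_assim < rmse_open)  # assimilation reduces the error
```

Comparing the open-loop and assimilation runs against the nature run is exactly the diagnostic an OSSE uses to quantify the value of a prospective observing system before it flies.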

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Observation-Based Evaluation of Anthropogenic Land- and Water-Use Scenarios in Regional Water Budgets over Europe

Authors: Benjamin D. Gutknecht, Jürgen Kusche, Francis Lopes, Jane Roque Mamani, Yikui Zhang
Affiliations: Collaborative Research Centre 1502 DETECT, Institute of Geodesy and Geoinformation, University of Bonn, Institute of Geosciences, Meteorology Section, University of Bonn, Institute of Bio- and Geosciences, Agrosphäre (IBG-3), Forschungszentrum Jülich, Centre for High-Performance Scientific Computing in Terrestrial Systems, Geoverbund ABC/J
There is broad consensus that increased atmospheric greenhouse gas concentrations lead to considerable change in the Earth's climate system and, thus, to an intensification of the water cycle. At regional scales, however, other human actions have also recently been found to be significant in this regard. Decades of anthropogenic land-use change and increased water use have also impacted the regional water and energy cycles across the atmospheric-terrestrial boundary. To strengthen our understanding of the quantities involved, a series of different reanalyses, free-run and coupled model scenarios over the EURO-CORDEX region have been developed in the framework of the Collaborative Research Centre (CRC) 1502 DETECT. These include the Terrestrial Systems Modelling Platform (TSMP) with COSMO, ICON, (e)CLM and ParFlow, and assume systematic forcing-parameter variations and different irrigation and land-use patterns. Here, we evaluate essential climate variables such as precipitation (P), evapotranspiration (ET), terrestrial water storage change and river discharge by means of kernel-integrated monthly water mass fluxes in the terrestrial water budget equation over multiple time scales. In what may be understood as a combined consistency check based on a sound physical assumption, we compare the modelled boundary net flux, i.e. P-ET, against terrestrial water storage change as observed by GRACE satellite gravimetry and observed river discharge. This makes it possible not only to identify model ensemble outliers and systematic offsets, but also to assess the significance of differences in budget residuals between model runs with and without human interaction. The choice of primary target regions comprises major European river catchments over the Iberian Peninsula, Western Europe, and Central Continental Europe with varying types of prevalent climate and seasonality.
Our preliminary findings indicate that (1) the choice of sea surface temperature forcing in regional climate models can lead to very localised differences in boundary net fluxes, (2) varying assumptions about irrigation can produce strong, regionally differing variability in accumulated precipitation, and (3) the inclusion of anthropogenic water use in a coupled Earth system model leads to season-dependent changes of the multi-annual mean boundary net water flux on the order of >10 mm per month on local to regional scales (both reduction and intensification). Moreover, we highlight challenges of the budget method stemming from spatio-temporal gaps in measured data, which further emphasises the potential and need for space-based alternatives for continuous observations.
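The consistency check described above rests on the terrestrial water budget equation, P − ET − Q = dS/dt, so any imbalance shows up as a residual. A minimal sketch with made-up basin-averaged monthly values (in a real application dS/dt would come from differencing GRACE terrestrial water storage anomalies):

```python
import numpy as np

# Hypothetical basin-averaged monthly series, all in mm/month.
p    = np.array([80.0, 70.0, 60.0, 40.0])   # precipitation
et   = np.array([30.0, 40.0, 50.0, 45.0])   # evapotranspiration
q    = np.array([25.0, 20.0, 15.0, 10.0])   # river discharge
dsdt = np.array([24.0,  9.0, -6.0, -14.0])  # storage change (e.g. from GRACE)

# Budget residual: zero for a perfectly closed, error-free budget;
# in practice it absorbs observation and model errors, which is what
# makes it useful for flagging ensemble outliers and offsets.
residual = p - et - q - dsdt
print(residual)  # -> [ 1.  1.  1. -1.]
```

Comparing these residuals between model runs with and without anthropogenic water use is then a direct test of whether the added processes improve budget closure.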

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Seasonal Analysis of Precipitation Partitioning Using a Storage-Adjusted Budyko Framework.

Authors: Dr. Karim Douch
Affiliations: ESA ESRIN
Global warming and land use changes are intensifying the hydrological cycle by altering water flux rates within the Earth system. This intensification manifests in many regions as more frequent extreme precipitation events and prolonged dry spells. While these changes are increasingly well-documented, an open question remains whether and how the partitioning of precipitation into runoff and evaporation is shifting. The Schreiber-Oldekop hypothesis, also known as the Budyko framework, offers a model for understanding the long-term evolution of this partitioning. In this framework, the evaporative index (E/P), representing the evaporation (E) to precipitation (P) ratio at the basin scale, is modelled as a function of the aridity index (Ep/P), defined as the potential evaporation (Ep) to precipitation ratio. Traditionally, this relationship is derived using annual or longer-term averages of these variables to minimize the impact of water storage changes (dS/dt), assuming 1 – Q/P ≈ E/P, where Q is the total basin outflow. However, this annual averaging may mask potentially divergent dynamics in the seasonal partitioning of water between evaporation and runoff. In this study, we address this gap by conducting a seasonal analysis of the Schreiber-Oldekop hypothesis across 46 basins worldwide, utilizing Earth observation data. At the seasonal scale, water storage changes (dS/dt) become a significant component of the water balance equation and are treated as a source or sink of water in addition to precipitation. These storage changes are estimated using terrestrial water storage anomaly time series derived from GRACE and GRACE-FO observations over the last two decades. Our methodology comprises two main steps. First, we construct consistent time series for precipitation, evaporation and water storage change (dS/dt) for each basin, spanning 1995–2023. This involves filling gaps and extending GRACE(-FO) time series back to 1995 while improving water mass balance closure.
Second, depending on the basin’s climatic zone, we segment the time series into two (dry and wet) or more seasons, each covering at least three months. The modified Budyko framework is then applied to analyse the partitioning dynamics for each season.
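In the Schreiber form of the Budyko relationship, the evaporative index is E/P = 1 − exp(−Ep/P). One plausible way to make this storage-aware at the seasonal scale, in the spirit of the abstract, is to treat P − dS/dt as the effective water supply, so that storage release (dS/dt < 0) adds to the supply and recharge removes from it; this is a hedged sketch, not necessarily the study's exact formulation:

```python
import numpy as np

def schreiber_evaporative_index(aridity_index):
    """Schreiber's curve: E/P = 1 - exp(-Ep/P)."""
    return 1.0 - np.exp(-np.asarray(aridity_index))

# Hypothetical seasonal totals (mm): a dry season drawing on storage.
p, ep, dsdt = 120.0, 180.0, -30.0

supply = p - dsdt                 # 150 mm effective water supply
aridity = ep / supply             # storage-adjusted aridity index: 1.2
evap_fraction = schreiber_evaporative_index(aridity)
print(round(float(evap_fraction), 3))  # -> 0.699
```

Ignoring the storage term would give a higher apparent aridity (180/120 = 1.5) and a different evaporative fraction, which is exactly why dS/dt matters at the seasonal scale.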

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A.07.07 - POSTER - Advancements in Observation of Physical Snow Parameters

Comprehensive quantitative observations of the physical properties of the seasonal snow cover are of great importance for water resources, climate impact and natural hazard monitoring activities. This has been emphasized by the Global Energy and Water EXchanges (GEWEX) project (a core project of the World Climate Research Programme (WCRP)) for decades and highlighted by the Global Precipitation Experiment (GPEX) (a WCRP Lighthouse Activity) launched in October 2023. Satellite-based observation systems are the only efficient means of obtaining the required high temporal and spatial coverage over the global snow cover. Due to their sensitivity to dielectric properties and penetration capabilities, SAR systems are versatile tools for snow parameter observations. Significant advancements have been achieved in SAR-based retrieval algorithms and their application for operational snow cover monitoring. Additionally, lidar backscatter measurements have been shown to provide accurate observations of snow height and its changes. However, there is still a need for improvement of snow cover products, addressing physical parameters such as snow depth, SWE, liquid water content, freezing state and snow morphology. In this session the current status of physical snow cover products will be reviewed and activities towards further improvements will be presented, taking into account satellite data of current and future satellite missions. In this context, a broad range of observation techniques is of interest, including methods based on backscatter intensity, polarimetry, interferometry, tomography, as well as multi-frequency and multi-sensor approaches.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: An intercomparison exercise of Snow Cover Area maps from high-resolution Earth Observation over the Alps

Authors: Federico Di Paolo, Matteo Dall'Amico, Stefano Tasin
Affiliations: Waterjade Srl
In Europe, the majority of winter precipitation falls as snow above 1,000 m altitude, where it accumulates and remains stored in the snowpack until the melting season, when it returns to the hydrological cycle and is partly used to sustain downstream water demands. As the component of the cryosphere experiencing the largest seasonal variation in spatial extent, snow is highly connected to climate change, particularly in low- and mid-elevation mountain regions, both because changes in climate and rising temperatures can cause a decrease in winter snow amounts, and because any change in snow presence and melt can directly affect water availability in snow-fed basins. In recent decades, climate change has been driving an increase in the frequency and magnitude of hydrological extremes across the globe, such as droughts and floods. In particular, droughts are one of the main water-related geophysical issues in Europe, recently showing an increase in the Mediterranean area and impacting the entire economy at different levels. Currently, a comprehensive description and prediction of droughts is challenging, and many approaches have been developed to indicate areas exposed to droughts in the near future. Many European regions rely on water streams that originate in mountains, where snow is a dominant variable in the water cycle. In such areas snow cover estimation is one of the main indicators for predicting possible drought conditions, governing many hydrology-related risks at different time horizons: i) droughts, caused by reduced precipitation during a season, occurring over months; ii) floods, due to intense snowmelt, occurring over days; iii) avalanches, due to intense snowfalls, occurring over hours to days. In recent decades, thanks to the broad and open availability of Earth Observation (EO) data, monitoring of snow at a global scale has become possible.
Different snow-related physical variables have been retrieved from EO images, such as: (i) Snow Cover Area (SCA), extracted from multispectral sensors such as MODIS, Sentinel-2 and the Landsat constellation, with medium to high spatial resolution (from some 10 to some 100 m); (ii) wet snow cover area, retrieved from Synthetic Aperture Radars (SARs), such as Sentinel-1, to detect the presence of wet snow; (iii) Snow Water Equivalent (SWE), estimated from passive microwave sensors and gravity measurements; (iv) snow depth (HS), recently retrieved from Sentinel-1 SAR data at kilometre/sub-kilometre resolution and from stereo satellite imagery. The use of EO-retrieved snow parameters is vast, and such data can be applied to diverse topics and time horizons, such as climatology, long-term snow cover variability, drought assessment, snow evolution monitoring, melting dynamics, ingestion in hydrological models for snow evolution, and validation of hydrological models. Given the vast number of EO-retrieved snow products and the diverse processing approaches in use, a comparison between different products is important to assess performance and limitations. The so-called Round Robin Exercise is an intercomparison method for jointly evaluating the performance of different products. In this work we present an intercomparison between high-resolution (i.e., from 250 to 20 m) EO-retrieved snow products. In particular, we focus on SCA products extracted from multispectral sensors (i.e., Sentinel-2 and Landsat-8) and SAR (i.e., Sentinel-1). A medium-resolution (i.e., 250 m) MODIS SCA product evaluated over the Alps is also included in the dataset for comparison with the higher-resolution Sentinel/Landsat products.
Since the resolution of the considered EO products varies from 20 to 250 m, the SKYE approach proposed under the SnowPEx project for validating EO-retrieved snow variables against an in-situ measurement reference is used, with the HS dataset published by Matiu et al. (2020) as a benchmark. The analysis was carried out over the Alps during the winter seasons 2017/2018 and 2018/2019. All the analyzed EO-retrieved SCA products showed good metrics with respect to the in-situ benchmark. Our results show that, in retrieving the SCA value from HS or Fractional Snow Cover (FSC) maps obtained through EO: - a threshold of HS > 2 cm should be used, as it provides the best metrics and enables a more conservative snow identification from multispectral sensors (for HS < 2 cm, the snow cover may not be uniform); - a threshold of FSC > 15% or 25% should be used, as these provide the best metrics and are conservative values for the identification of snow.
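The recommended thresholds reduce to simple element-wise comparisons when converting depth or fractional maps to binary snow cover; the sample pixel values below are invented for illustration:

```python
import numpy as np

# Hypothetical per-pixel products: snow depth HS (cm) and fractional
# snow cover FSC (%), converted to binary snow-cover maps with the
# thresholds found to give the best metrics in the intercomparison.
hs = np.array([0.0, 1.5, 2.5, 30.0])     # snow depth, cm
fsc = np.array([0.0, 10.0, 20.0, 90.0])  # fractional snow cover, %

snow_from_hs = hs > 2.0     # HS threshold: > 2 cm
snow_from_fsc = fsc > 15.0  # FSC threshold: > 15 % (25 % also tested)

# Fraction of pixels on which the two binary maps agree.
agreement = np.mean(snow_from_hs == snow_from_fsc)
print(snow_from_hs, snow_from_fsc, agreement)
```

In a real validation, the binary maps would be compared pixel-by-pixel against the in-situ benchmark to compute the accuracy metrics reported above.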

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A New Method for Assimilating Satellite Snow Extent Data in NWP

Authors: Niilo Siljamo, Mikael Hasu, Ekaterina Kurzeneva, Laura Rontu
Affiliations: Finnish Meteorological Institute
The H SAF portfolio includes several operational snow extent products. Since 2008, the geostationary H31 (MSG/SEVIRI) product has provided snow extent data across the full SEVIRI disk. Better polar coverage has been provided by H32 (Metop/AVHRR), available since 2015. The latest addition is the H43 (MTG/FCI) product, which provides daily full-disc snow extent coverage. These products are suitable for hydrological and meteorological applications, such as numerical weather prediction (NWP). The key challenge has been the assimilation of the satellite snow data into NWP models. Using these observations, particularly in autumn and spring, could enhance the Snow Water Equivalent (SWE) field in NWP models. The idea is to adjust the analysis fields using statistical interpolation between snow barrels, SYNOP observations (weather stations), and the model background. The innovative "snow barrels" are pseudo-observations of snow extent produced from the H SAF H32 (Metop/AVHRR) intermediate (single-image) snow extent product. Since 2022, the FMI has provided snow barrel data to MetCoOp servers, enabling testing within the MetCoOp Ensemble Prediction System (MEPS). MetCoOp is a collaboration between the FMI, MET Norway, SMHI, ESTEA, and LEGMC, with the goal of jointly operating a limited-area NWP model for the Nordic region. After successful development and testing, snow barrels were ready for use in NWP models and were integrated into MetCoOp's operational production in spring 2024. Snow barrels condense snow extent data from multiple (10×10) satellite pixels into a single classification distribution. Each "barrel" contains the observation time, average location, and number of classifications (snow, no snow, partial snow) in the selected 100-pixel area. For NWP purposes, these data are converted into a format suitable for assimilation, which influences the model's snow water equivalent (SWE) field.
Since snow barrels do not directly measure SWE, a fixed snow depth range of 0–10 cm is assumed, corresponding to SWE values of 0–31 kg/m² depending on snow density. Snow density is estimated using typical monthly climatological values, which generally range from 140 to 310 kg/m³. The fraction of pixels classified as "snow" is multiplied by the maximum SWE value for a 10 cm snow depth. For example, in spring, when the average snow density is approximately 250 kg/m³: if all pixels are classified as "snow", the resulting pseudo-observation is 25 kg/m²; if only half the pixels are classified as snow, the pseudo-observation is 12.5 kg/m². Barrels are primarily applied when they contradict the model's background field. If satellite observations were used indiscriminately, the model analysis might incorrectly reduce the overall snow amount, as the barrels represent a maximum SWE of only approximately 14–31 kg/m². The winter of 2024–2025 marks the first full Northern Hemisphere winter in which the snow barrel approach is operational. Snow barrels are expected to bring several benefits during this period. They will enhance the analysis of snow cover in areas with thin or patchy snow, allowing the model to more effectively track rapid changes in snow cover, even over smaller regions. This capability is especially valuable during melting periods, near coastal areas, or after fresh snowfall. Early results from this year already demonstrate improvements in the model's snow analysis statistics, such as lower RMSE and bias. It will be interesting to observe how the improved snow field impacts other model parameters, such as temperature, over an entire winter season. However, snow barrel observations do have certain limitations. During the darkest months (November to January), data collection is not possible due to insufficient daylight at the latitudes within the model domain. For this reason, the barrels are not expected to have a significant impact before February.
Nonetheless, this is not a significant disadvantage given how the barrels function. As previously mentioned, barrels detect snow cover but not snow depth, meaning that during midwinter, when the snow cover is thick and stable, the barrels provide little additional value. Next year, our NWP model cycle is likely to be updated, which will influence how snow barrel data is assimilated. Currently, snow observations are processed in units of kg/m², but in the new model setup, the unit will change to centimetres. Preliminary results from this update are expected in spring, based on our pre-operational system.
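The pseudo-observation arithmetic described in the abstract can be sketched as follows (a minimal illustration; the function name and interface are hypothetical, while the fixed 10 cm depth assumption and the density values are taken from the abstract):

```python
def barrel_pseudo_swe(n_snow, n_total, snow_density):
    """Convert a snow barrel's classification counts into a pseudo-observation
    of snow water equivalent (kg/m^2), assuming a fixed 10 cm snow depth.

    n_snow:       number of pixels in the barrel classified as "snow"
    n_total:      total number of classified pixels (typically 100)
    snow_density: climatological monthly snow density (kg/m^3)
    """
    max_swe = snow_density * 0.10          # SWE of a 10 cm snowpack, kg/m^2
    return (n_snow / n_total) * max_swe

# Spring example from the abstract: density ~250 kg/m^3
print(barrel_pseudo_swe(100, 100, 250))    # all pixels snow -> 25.0 kg/m^2
print(barrel_pseudo_swe(50, 100, 250))     # half the pixels -> 12.5 kg/m^2
```

With the climatological density range of 140–310 kg/m³, a fully snow-covered barrel maps to the 14–31 kg/m² maximum SWE quoted in the abstract.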

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Machine learning based GNSS-IR retrieval in complex terrain: Initial results for snow heights in Switzerland

Authors: Matthias Aichinger-Rosenberger, Laura Crocetti
Affiliations: Chair of Space Geodesy, Institute of Geodesy and Photogrammetry, ETH Zurich
GNSS-Interferometric Reflectometry (GNSS-IR) represents an innovative method for environmental remote sensing. The technique makes use of ground-reflected signals from Global Navigation Satellite Systems (GNSS) to sense environmental parameters such as snow height and soil moisture. These parameters belong to the essential climate variables defined by the Global Climate Observing System (GCOS), and observations of them are of great value for understanding the hydrological cycle and the impacts of climate change on it. However, one major limitation of the classic GNSS-IR algorithm is the restriction of its applicability to stations located in flat terrain, which makes the technique unusable for sites in mountainous areas. This is unfortunate since measurement networks are typically already sparse in these regions, which are particularly vulnerable to the effects of climate change. Thus, the ability to use existing station infrastructure in alpine regions for snow and soil moisture monitoring would be beneficial for hydrological monitoring and climate studies. In this contribution, we present the latest results from the GCOS Switzerland project “Machine-learning based Advancement and usability assessment of GNSS Interferometric Reflectometry for Climatological studies in Switzerland” (MAGIC-CH). This includes a validation of snow height products from our machine learning model as well as from the classic retrieval, by comparison to standard in-situ and satellite products. Furthermore, we showcase the potential benefits of incorporating local terrain information, in the form of a high-resolution Digital Elevation Model (DEM), in the retrieval process.
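For context, the classic GNSS-IR retrieval mentioned in the abstract can be sketched as follows (a simplified, hypothetical implementation, not the authors' code): an SNR arc, plotted against sin(elevation), oscillates at a frequency proportional to the antenna-to-surface reflector height, and snow height follows from the reduction of that height relative to bare ground.

```python
import math

def reflector_height(sin_e, snr, wavelength=0.244, h_min=0.5, h_max=3.0, n=500):
    """Estimate the reflector height h (m) below the antenna from a detrended
    SNR arc: SNR vs. sin(elevation) oscillates at f = 2*h/wavelength cycles
    per unit sin(elevation). Scan candidate heights, return the periodogram peak."""
    best_h, best_p = h_min, -1.0
    for i in range(n):
        h = h_min + (h_max - h_min) * i / (n - 1)
        f = 2.0 * h / wavelength
        c = sum(v * math.cos(2 * math.pi * f * x) for x, v in zip(sin_e, snr))
        s = sum(v * math.sin(2 * math.pi * f * x) for x, v in zip(sin_e, snr))
        p = c * c + s * s
        if p > best_p:
            best_h, best_p = h, p
    return best_h

# Synthetic arc with a bare-ground reflector at 1.5 m; 0.5 m of snow would
# raise the reflecting surface and shrink the retrieved height to ~1.0 m.
sin_e = [0.09 + 0.33 * k / 499 for k in range(500)]
snr = [math.sin(4 * math.pi * 1.5 / 0.244 * x) for x in sin_e]
print(round(reflector_height(sin_e, snr), 2))
```

The flat-terrain assumption enters through the planar-reflector geometry above, which is exactly what breaks down in complex terrain and motivates the ML and DEM-aware retrievals of the project.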

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: An Innovative Concept of High Spatial Resolution Measurements of Snow Depth and Snow Density from Optical Remote Sensing

Authors: Dr. Yongxiang Hu, Xubin Zeng, Parminder Ghuman, Keith Murray, Chris Edwards, Jared Entin, Craig Ferguson, Thorsten Markus, Knut Stamnes, Xiaomei Lu, Sunny Sun-Mack, Carl Weimer, Yuping Huang
Affiliations: University Of Arizona, NASA Langley Research Center, NASA Earth Science Technology Office, Code YS NASA Headquarters, Stevens Institute of Technology, BAE Systems, Inc.
Visible light travels diffusively inside snow. This diffusive propagation process can be formulated precisely through random walk theory. Using this theory, we have established a simple theoretical formulation that links snow depth to the diffusive photon path distribution of visible light scattered by snow particles. The theory suggests that for backscattering measurements of a space-based lidar (e.g., ICESat-2), the average photon path length of laser light traveling inside the snow is equal to twice the snow depth. Using Monte Carlo simulations [1] and theoretical radiative transfer modeling analysis [2], we demonstrated that this simple theory applies universally to snowpacks with different snow grain sizes, shapes, and densities. We have derived snow depths from ICESat-2 data using the theory, and the results agree with snow depth measurements from field campaigns. Sunlight reflected by snow is sensitive to absorption within the snow, which is a function of wavelength in the visible and solar infrared spectral range. Compared with shortwave IR wavelengths, reflectance at shorter wavelengths is less absorbed and more sensitive to the deeper part of the diffuse photon path distribution. Thus, the average diffusive photon path length (twice the snow depth) can also be measured from the spectral reflectance of solar radiation at visible and infrared wavelengths. This talk introduces an innovative high spatial resolution snow depth and snow density measurement concept for optical remote sensing, both with lidars (active remote sensing) and with broad-swath spectral measurements of sunlight reflected by snow (passive remote sensing, trained by collocated lidar measurements) from space, such as the spectral measurements from NASA’s EMIT, PACE and SBG missions, in order to establish a global snow depth data record at very high spatial resolution (e.g., up to 30 m resolution with spectral measurements of NASA’s EMIT and SBG missions).
We also introduce techniques to derive snow density from the spectral reflectance and lidar measurements. We will also introduce the machine learning based synergistic lidar/spectrometer snow depth retrievals, lidar/microwave-radiometer snow depth retrievals, and a quantum annealing technique we developed for enhancing SNRs of the lidar measurements. References: [1] Hu Y, Lu X, Zeng X, Stamnes SA, Neuman TA, Kurtz NT, Zhai P, Gao M, Sun W, Xu K, Liu Z, Omar AH, Baize RR, Rogers LJ, Mitchell BO, Stamnes K, Huang Y, Chen N, Weimer C, Lee J and Fair Z (2022) Deriving Snow Depth From ICESat-2 Lidar Multiple Scattering Measurements. Front. Remote Sens. 3:855159. doi: 10.3389/frsen.2022.855159 [2] Hu Y, Lu X, Zeng X, Gatebe C, Fu Q, Yang P, Weimer C, Stamnes S, Baize R, Omar A, Creary G, Ashraf A, Stamnes K and Huang Y (2023), Linking lidar multiple scattering profiles to snow depth and snow density: an analytical radiative transfer analysis and the implications for Front. Remote Sens. 4:1202234. doi: 10.3389/frsen.2023.120223
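The "mean path equals twice the depth" result has a simple one-dimensional analogue that can be checked numerically. The following is a toy sketch, not the authors' Monte Carlo model: a photon on a lattice entering at the surface, reflecting at the snow-ground interface, and escaping upward has a mean path of about twice the slab depth (analytically 2·depth + 1 lattice units for this walk).

```python
import random

def mean_photon_path(depth, n_photons=20000, seed=42):
    """Toy 1D random walk: a photon starts at the surface (site 0), steps
    up or down one site with equal probability, is reflected at the bottom
    of the slab (site `depth`), and escapes when it steps above the surface.
    Returns the mean number of steps (path length in lattice units)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_photons):
        x = 0
        steps = 0
        while x >= 0:
            steps += 1
            x += 1 if rng.random() < 0.5 else -1
            if x > depth:            # reflecting lower boundary (snow-ground)
                x = depth - 1
        total += steps
    return total / n_photons

print(mean_photon_path(10))   # close to 2*10 + 1 = 21
print(mean_photon_path(20))   # close to 2*20 + 1 = 41
```

Doubling the slab depth roughly doubles the mean photon path, which is the statistic the lidar multiple-scattering retrieval exploits.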

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Improving Snow Water Equivalent retrievals and our understanding of terrestrial snow mass in the ESA CCI+ Snow project

Authors: Dr. Kari Luojus, MSc. Pinja Venäläinen, Jouni Pulliainen, Matias Takala, Mikko Moisander, Lina Zschenderlein, Chris Derksen, Colleen Mortimer, Lawrence Mudryk, Thomas Nagler, Gabriele Schwaizer
Affiliations: Finnish Meteorological Institute, Environment and Climate Change Canada, ENVEO IT GmbH
Reliable information on snow cover across the Northern Hemisphere and Arctic and sub-Arctic regions is needed for climate monitoring. Warming surface temperatures during recent decades have driven a substantial reduction in the extent and duration of Northern Hemisphere snow cover. These changes in snow cover affect Earth’s climate system via the surface energy budget and influence freshwater resources across a large proportion of the Northern Hemisphere. In contrast to snow extent, quantitative knowledge of seasonal snow mass and its trends remained relatively uncertain until recent years. FMI has been working with ECCC to improve the retrieval of terrestrial snow mass using passive microwave radiometers in several ESA projects, currently within the ESA Snow CCI. The ESA Snow CCI project, initiated in 2018, strives to further improve the retrieval methodologies for snow water equivalent (SWE) from satellite data and to construct long-term climate data records (CDRs) of terrestrial snow cover for climate research purposes. The efforts to improve satellite-based retrieval of snow water equivalent have resulted in an enhanced-resolution SWE record spanning 1979-2022 with 0.1° x 0.1° spatial resolution (Venäläinen et al. 2023, Luojus et al. 2024). The retrieval applies the FMI-developed GlobSnow approach, which combines satellite-based data with ground-based snow depth observations and now accounts for dynamic snow density in the retrieval process. Further, the team has updated the bias-correction approach presented in Pulliainen et al. 2020 to improve the reliability of the long-term climate data records of terrestrial snow mass. The approach is being further improved in the ESA Snow CCI project. The new SWE data record and upcoming Snow CCI datasets will improve our estimates of satellite-era snow mass changes and trends for the Northern Hemisphere.
References: Luojus, K.; Venäläinen, P.; Moisander, M.; Pulliainen, J.; Takala, M.; Lemmetyinen, J.; Derksen, C.; Mortimer, C.; Mudryk, L.; Schwaizer, G.; Nagler, T. (2024): ESA Snow Climate Change Initiative (Snow_cci): Snow Water Equivalent (SWE) level 3C daily global climate research data package (CRDP) (1979 - 2022), version 3.1. NERC EDS, (2024). https://dx.doi.org/10.5285/9d9bfc488ec54b1297eca2c9662f9c81 Pulliainen, J., Luojus, K., Derksen, C., Mudryk, L., Lemmetyinen, J., Salminen, M., Ikonen, J., Takala, M., Cohen, J., Smolander, T. and Norberg, J., “Patterns and trends of Northern Hemisphere snow mass from 1980 to 2018”. Nature 581, 294–298 (2020). https://doi.org/10.1038/s41586-020-2258-0 Venäläinen, P., Luojus, K., Mortimer, C., Lemmetyinen, J., Pulliainen, J., Takala, M., Moisander, M. and Zschenderlein, L., 2023. "Implementing spatially and temporally varying snow densities into the GlobSnow snow water equivalent retrieval", The Cryosphere, 17(2), pp.719-736. https://doi.org/10.5194/tc-17-719-2023
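At its core, a GlobSnow-style combination of satellite and weather-station information is an optimal-interpolation step. The simplified one-point version below (a hypothetical interface, not the operational FMI code) shows the principle of merging two SWE estimates by inverse-variance weighting:

```python
def merge_swe(swe_sat, var_sat, swe_ground, var_ground):
    """Inverse-variance (optimal-interpolation style) merge of a
    radiometer-based SWE estimate and a ground-based estimate (kg/m^2).
    Returns the combined estimate and its (reduced) error variance."""
    w_sat = var_ground / (var_sat + var_ground)   # weight on the satellite value
    swe = w_sat * swe_sat + (1.0 - w_sat) * swe_ground
    var = 1.0 / (1.0 / var_sat + 1.0 / var_ground)
    return swe, var

# Equal error variances -> simple average; the merged variance is always
# smaller than either input variance.
print(merge_swe(120.0, 400.0, 100.0, 400.0))   # (110.0, 200.0)
```

In the real retrieval the weighting is spatial (kriged snow depth fields, radiative-transfer-inverted brightness temperatures), but the variance bookkeeping follows the same logic.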

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Radar measurements using WBSCAT for supporting multi-frequency snow water equivalent retrieval and GEO- and LEO SAR development

Authors: Juha Lemmetyinen, Jorge Jorge Ruiz, Thomas Nagler, Charles Werner, Anna Kontu, Julia Kubanek
Affiliations: Finnish Meteorological Institute, ENVEO IT GmbH, GAMMA Remote Sensing AG, ESA ESTEC
Seasonal snow cover is a dynamic and unpredictable part of the terrestrial cryosphere, showing considerable year-to-year variations in its extent and duration (Brown et al., 2017). Around one-sixth of the global population depends on seasonal snow as their primary source of freshwater (Barnett et al., 2005). The amount of water stored in snow is commonly expressed as Snow Water Equivalent (SWE), or snow mass. However, current methods for assessing SWE—using satellite sensors, ground-based networks, and Earth system models—provide insufficient accuracy with significant discrepancies and biases across approaches (Mudryk et al., 2015). This represents a critical knowledge gap, particularly as projected changes in temperature and precipitation in the northern hemisphere are expected to increase fluctuations in seasonal snow mass, jeopardizing its reliability as a freshwater resource (Mankin et al., 2015; Sturm et al., 2017). A promising method for monitoring SWE from space is based on an interferometric SAR (InSAR) technique from repeat-pass orbits (Guneriussen et al., 2001). The approach has been demonstrated to provide SWE estimates using ground observations (Leinss et al., 2015), airborne campaigns (Hoppinen et al., 2024) and recently, space-borne SAR (Jorge Ruiz et al., 2024; Oveisgharan et al., 2024). Several upcoming satellite missions will dramatically increase the availability of suitable, low-frequency SAR observations which are optimal for the method; these missions include the NASA-ISRO Synthetic Aperture Radar (NISAR) and the Radar Observing System for Europe at L-band (ROSE-L). Moreover, Hydroterra+, a candidate mission for the European Space Agency’s 12th Earth Explorer opportunity, proposes to use a SAR in geosynchronous orbit to observe the water cycle, including snow cover. The high temporal frequency of SAR imaging enabled by the orbit configuration makes observations of SWE possible even at the applied C band. 
Together with ROSE-L and NISAR, Hydroterra+ presents unique new opportunities for monitoring snow cover parameters, including SWE, making use of advantages presented by different wavelengths (Belinska et al., 2024). In order to study the opportunities presented by multi-frequency SAR observations, as well as the high temporal frequency provided by a mission such as Hydroterra+, the ESA Wide-Band Scatterometer (WBSCAT) ground-based radar system (Werner et al., 2019) has been deployed in Sodankylä, Finland, to observe snow cover signatures for three winter seasons. WBSCAT is a polarimetric scatterometer operating at 1-40 GHz, installed on a pan/tilt positioner which provides the capability to obtain measurements from a range of angles in azimuth and elevation. In Sodankylä, the instrument has been deployed on a platform at a height of 5 m overlooking a boreal wetland (fen). The wetland typically freezes over the winter, presenting a relatively smooth subnivean surface. Snow depth at the site is typically up to 80 cm in midwinter. Radar measurements are supported by instruments collecting ancillary data on snow depth, SWE, snow and ground temperature profiles, and weather conditions. Automated measurements are complemented by weekly manual snow profile observations, which measure the stratigraphy, density and temperature profiles, grain size and grain type, liquid water content, and the specific surface area (SSA) of snow. In addition, a suite of passive microwave radiometers operating at 1.4, 10.65, 18.7, 21, 36.5, 89 and 150 GHz are operated at the site, observing the same area as WBSCAT. 
In this study, we present initial results on the WBSCAT campaign in Sodankylä, presenting an analysis of • radar coherence conservation versus environmental factors (snow accumulation and redistribution, snow melt events) at X-, C-, and L-Bands, analyzing the impact on InSAR SWE retrieval at different frequencies and frequency combinations, • C-band SAR Image quality for long term focusing intervals during variable surface conditions such as snowfall, snow melt and freeze, • the response of radar backscatter at L-to Ka bands to changing physical snow parameters (snow melt events and liquid water content, snow height SWE, changing ground conditions, etc.), • radar signatures against passive microwave observations at different wavelengths. Barnett, T.P., Adam, J.C., Lettenmaier, D.P., 2005. Potential impacts of a warming climate on water availability in snow-dominated regions. Nature 438, 303–309. https://doi.org/10.1038/nature04141 Belinska, K., Fischer, G., Parrella, G., Hajnsek, I., 2024. The Potential of Multifrequency Spaceborne DInSAR Measurements for the Retrieval of Snow Water Equivalent. IEEE J. Sel. Top. Appl. Earth Observations Remote Sensing 17, 2950–2962. https://doi.org/10.1109/JSTARS.2023.3345139 Brown, R., Schuler, D., Bulygina, O., Derksen, C., Luojus, K., Mudryk, L., Wang, L., Yang, D., 2017. Arctic terrestrial snow cover, in: Snow, Water, Ice and Permafrost in the Arctic (SWIPA) 2017. Arctic Monitoring and Assessment Programme, Oslo, Norway. Guneriussen, T., Hogda, K.A., Johnsen, H., Lauknes, I., 2001. InSAR for estimation of changes in snow water equivalent of dry snow. IEEE Trans. Geosci. Remote Sensing 39, 2101–2108. https://doi.org/10.1109/36.957273 Hoppinen, Z., Oveisgharan, S., Marshall, H.-P., Mower, R., Elder, K., Vuyovich, C., 2024. Snow water equivalent retrieval over Idaho – Part 2: Using L-band UAVSAR repeat-pass interferometry. The Cryosphere 18, 575–592. 
https://doi.org/10.5194/tc-18-575-2024 Jorge Ruiz, J., Merkouriadi, I., Lemmetyinen, J., Cohen, J., Kontu, A., Nagler, T., Pulliainen, J., Praks, J., 2024. Comparing InSAR Snow Water Equivalent Retrieval Using ALOS2 With In Situ Observations and SnowModel Over the Boreal Forest Area. IEEE Trans. Geosci. Remote Sensing 62, 1–14. https://doi.org/10.1109/TGRS.2024.3439855 Leinss, S., Wiesmann, A., Lemmetyinen, J., Hajnsek, I., 2015. Snow Water Equivalent of Dry Snow Measured by Differential Interferometry. IEEE J. Sel. Top. Appl. Earth Observations Remote Sensing 8, 3773–3790. https://doi.org/10.1109/JSTARS.2015.2432031 Mankin, J.S., Viviroli, D., Singh, D., Hoekstra, A.Y., Diffenbaugh, N.S., 2015. The potential for snow to supply human water demand in the present and future. Environ. Res. Lett. 10, 114016. https://doi.org/10.1088/1748-9326/10/11/114016 Mudryk, L.R., Derksen, C., Kushner, P.J., Brown, R., 2015. Characterization of Northern Hemisphere Snow Water Equivalent Datasets, 1981–2010. Journal of Climate 28, 8037–8051. https://doi.org/10.1175/JCLI-D-15-0229.1 Oveisgharan, S., Zinke, R., Hoppinen, Z., Marshall, H.P., 2024. Snow water equivalent retrieval over Idaho – Part 1: Using Sentinel-1 repeat-pass interferometry. The Cryosphere 18, 559–574. https://doi.org/10.5194/tc-18-559-2024 Sturm, M., Goldstein, M.A., Parr, C., 2017. Water and life from snow: A trillion dollar science question. Water Resources Research 53, 3534–3544. https://doi.org/10.1002/2017WR020840 Werner, C., Suess, M., Wegmuller, U., Frey, O., Wiesmann, A., 2019. The Esa Wideband Microwave Scatterometer (Wbscat): Design and Implementation, in: IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium. Presented at the IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium, IEEE, Yokohama, Japan, pp. 8339–8342. https://doi.org/10.1109/IGARSS.2019.8900459
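The repeat-pass InSAR SWE principle cited above (Guneriussen et al., 2001) relates the dry-snow interferometric phase difference to the SWE change. A minimal sketch of the commonly quoted approximation (our own illustration, up to sign convention; not mission processing code):

```python
import math

def delta_swe_from_phase(dphi, wavelength, incidence_deg):
    """SWE change (m w.e.) from a repeat-pass interferometric phase
    difference `dphi` (rad), using the dry-snow approximation
    dphi ~= (4*pi/wavelength) * dSWE * (1.59 + theta**2.5),
    with the incidence angle theta in radians (Guneriussen et al., 2001)."""
    theta = math.radians(incidence_deg)
    return dphi * wavelength / (4.0 * math.pi * (1.59 + theta ** 2.5))

# One full C-band fringe (2*pi) at 35 deg incidence and a Sentinel-1-like
# wavelength of 5.55 cm corresponds to roughly 15 mm of SWE change.
print(1000.0 * delta_swe_from_phase(2.0 * math.pi, 0.0555, 35.0))  # mm w.e.
```

The same expression shows why the low-frequency missions matter: at L-band the wavelength is roughly four times longer, so each fringe spans several times more SWE change and phase unwrapping is far more robust for large accumulation events.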

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: 30-years (1991-2021) Snow Water Equivalent Dataset in the Po River District, Italy through EO images, in-situ data and physical modeling

Authors: Matteo Dall'Amico, Stefano Tasin, Federico Di Paolo, Marco Brian, Paolo Leoni, Francesco Tornatore, Giuseppe Formetta, John Mohd Wani, Riccardo Rigon, Gaia Roati
Affiliations: Waterjade Srl, Po River Basin District Authority, Department of Civil, Environmental and Mechanical Engineering - University of Trento, C3A - Center Agriculture Food Environment - University of Trento
Snow is a critical component of the mountain cryosphere, playing a significant role in shaping hydrology and climate dynamics. As an essential interface between the Earth’s surface and the atmosphere, snow influences other cryospheric elements such as glaciers and permafrost. The snowpack functions as a vital water reservoir, accumulating during the winter and gradually releasing water during the melt season, thereby sustaining downstream water demands. However, snow is highly sensitive to climate change, particularly in low- and mid-elevation mountain regions. Changes in snow occurrence, melt timing, and variability can directly affect water availability in snow-fed basins, with significant implications for both ecosystems and human populations. In Europe, most river basins originate in the Alps, often referred to as the “water tower of Europe”. In the Alps, snow is an important cryospheric component, playing an essential role in meeting the agricultural, domestic and industrial water needs of the lowlands. Generally, the amount of water stored in a snowpack is defined in terms of snow water equivalent (SWE), i.e., the equivalent amount of water that would result from melting the entire snowpack. We present a long-term SWE dataset for the Po River District, Italy, spanning 1991 to 2021 at a daily time step and 500 m spatial resolution, partially covering the mountain ranges of the Alps and Apennines. The Po river basin, the largest in Italy, is considered the second most sensitive area in Europe after the Rhone River basin, and it has been exposed to severe drought in recent years. The dataset has been generated using a hybrid modelling approach integrating the physically-based GEOtop model, preprocessing of the meteorological data, and assimilation of in-situ snow measurements and Earth Observation (EO) snow products to enhance the quality of the model estimates.
In particular, EO-retrieved Snow Cover Area (SCA) maps are used to correct the melting rate in the physical model, in order to correctly retrieve the SWE variations, especially during the melting season. A rigorous quality assessment of the dataset has been performed at different control points selected based on reliability, quality, and territorial distribution. The comparison between simulated and observed snow depth across control points shows the accuracy of the dataset in simulating both normal and relatively high snow conditions. Additionally, satellite snow cover maps have been compared with simulated snow depth maps as a function of elevation and aspect. The 2D validation shows accurate values over time and space, expressed in terms of the snowline along the cardinal directions. This dataset, the longest time series of coherent daily SWE cartography in Italy, fills an important gap in the scientific understanding of the hydrology of the area, and can be used for hydrological and climatological purposes such as drought characterization in mid-latitude Mediterranean areas.
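The role of the EO snow-cover maps in constraining modelled melt can be illustrated with a toy degree-day update (a deliberately crude stand-in for the GEOtop-based assimilation; the function name and the melt factor are hypothetical):

```python
def melt_step(swe, t_air, eo_snow_free, ddf=3.0, t_base=0.0):
    """One daily melt update (SWE in mm w.e., temperatures in deg C).
    Melt follows a degree-day law; if the EO snow-cover map reports the
    cell snow-free while the model still holds snow, the residual snow
    is removed, mimicking an SCA-based correction of the melt rate."""
    melt = min(swe, ddf * max(t_air - t_base, 0.0))
    swe -= melt
    if eo_snow_free:
        swe = 0.0      # EO map overrides the model state
    return swe

print(melt_step(50.0, 5.0, eo_snow_free=False))  # 50 - 3*5 = 35.0 mm left
print(melt_step(50.0, 5.0, eo_snow_free=True))   # 0.0, forced by the EO map
```

The operational system corrects the melt rate rather than zeroing the state outright, but the sketch shows why SCA observations are most informative during the ablation season, when model and satellite snowlines can diverge.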

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A.05.01 - POSTER - Using earth observation to assess climate change in cities

The Intergovernmental Panel for Climate Change (IPCC) Sixth Assessment report concluded that "Evidence from urban and rural settlements is unequivocal; climate impacts are felt disproportionately in urban communities, with the most economically and socially marginalised being most affected (high confidence)." (IPCC, WG2, Chapter 6)

In its Seventh Assessment Cycle, the IPCC will produce a Special Report on Climate Change and Cities to further develop the role of climate and its interactions with the urban environment. The report will cover topics that include:
- Biophysical climate changes;
- Impacts and risks, including losses and damages and compounding and cascading aspects;
- Sectoral development, adaptation, mitigation and responses to losses and damages;
- Energy and emissions;
- Governance, policy, institutions, planning and finance; and
- Civil society aspects.

This session calls for abstracts demonstrating how Earth Observation is being used to understand how climate change is impacting cities and how EO can be used to adapt to and mitigate further climate change at the city scale. Abstracts in this session should explicitly link the use of EO data and assess their usefulness for small-scale urban/city information.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: CLIM4cities: from Citizen Science, Machine Learning and Earth Observation towards Urban Climate Services

Authors: Dr. Ana Oliveira, Mark Payne, Manvel Khudinyan, João Paixão, Inês Girão, Rita Cunha, Bruno Marques, Maria Castro, Élio Pereira, Caio Fonteles, Peter Thejll, Irene Livia Kruse, Hjalte Sørup, Erika Hayashi, Rune Zeitzen, Kasper Stener, Chiara Bearzotti, Sebastian
Affiliations: CoLAB +Atlantic, DMI
As climate change prospects point towards the pressing need for local adaptation strategies, exposure to extreme weather events becomes one of the most important aspects in determining our society’s resilience in the future. Globally, we are already experiencing changing patterns of exposure to certain types of extremes (e.g., wildfires in high latitudes, droughts in midlatitudes, flash floods in riverine and coastal areas); and, at the European level, recent historical weather measurements are already showing a changing climate, where heatwaves (HW) are becoming longer, more frequent and intense, while cold waves (CW) show only minor or non-significant changes. This temperature amplitude increase is a major challenge for our highly urbanised and ageing society, as the health and energy sectors are deeply affected by air temperature conditions. At the local level, these conditions are strongly influenced by the energy exchanges between the lower atmosphere and our strongly modified urban surfaces. Indeed, such extremes can lead to significant impacts, such as excess mortality/morbidity and unmet peak electricity demand. To address these challenges, CLIM4cities - a European Space Agency (ESA)-funded project under the call for Artificial Intelligence (AI) Trustworthy Applications for Climate - aims to pioneer the development of Machine Learning (ML) and Artificial Intelligence (AI) models designed to downscale air and land surface temperature predictions in urban areas. This initiative serves as a preliminary step towards the implementation of cost-effective Integrated Urban Climate and Weather components into local Digital Twin Systems. By leveraging crowdsourced data obtained from citizen-owned weather stations, Earth Observation and weather forecasting models, we offer spatio-temporal data fusion models that address the unmet need for a low-cost, efficient and scalable Urban Climate prediction system.
To achieve this, CLIM4cities has tailored its solution to the requirements of local early adopters, who state the need for tools that offer both early-warning weather forecast capabilities and scenario-making capabilities to evaluate climate adaptation measures, namely the impact of blue-green infrastructures on the Urban Heat Island effect. Currently, CLIM4cities has already developed the first version of its coupled ML-based near-surface Air Temperature (henceforth, T2m) and Land Surface Temperature (LST) downscaling models, targeting four metropolitan areas in Denmark, proving the concept’s reliability and scalability to other urban regions. For the LST results, Sentinel-3 LST and Synergy (NDVI) products were used to train several data-driven models, and performance was compared with Landsat 8/9 for unbiased validation purposes. To improve the accuracy and precision of LST downscaling, we combined ML techniques with disaggregation algorithms (DisTrad) to create a Non-Linear DisTrad (henceforth NL-DisTrad) method, in order to capture spatial and vegetation-related temperature patterns. This hybrid approach enhances the downscaling process by enabling the non-linear modelling of relationships between LST and auxiliary variables such as NDVI, while retaining DisTrad’s structured spatial disaggregation. The achieved results show that model performance varies with season: R² was 0.67, 0.51 and 0.56 in summer, spring and autumn, respectively. Concerning the T2m results, based on the evaluation metrics for both the time and space fine-tuning datasets, the Random Forest (RF) model was also the one achieving the best results. In particular, overall performance was compared to that during HW and CW days, and a sensitivity analysis was conducted on the hyperparameters. The best results were achieved with a maximum RF depth of 50, with an overall R² of 0.98, which is the same during HW but reduced to 0.97 during CW events.
In terms of error metrics, the MAE was 0.74 K, 0.63 K and 0.81 K (overall, HW and CW subsets, respectively), denoting the model’s good performance during extreme heat conditions. Together, these results offer not only an improved level of spatial detail, but also enhance the accuracy of local measures of T2m and LST, compared to the lower-resolution inputs. In the next steps, CLIM4cities will demonstrate how this concept can be transformed into a scalable solution for European metropolitan areas, by using Danish case studies to pave the way for a Proof of Concept Application featuring downscaled short-term predictions and climate change scenarios. This Proof of Concept framework builds on previous pilot implementations in Lisbon and Naples, providing a strong foundation for future development as a tool for local authorities to: (i) identify short-term critical areas of the city during heat- and cold-wave events, (ii) test the performance of urban development scenarios in response to climate change and urban spatial planning policies, and (iii) translate essential climate variables into impact indicators relevant to the health and energy sectors. Overall, CLIM4cities contributes to scientific advances and user uptake of Earth Observation data. We are adding to international efforts addressing key urban climate monitoring and local climate change adaptation challenges by bringing Earth Observation and crowdsource-based Machine Learning models closer to user requirements for local Digital Twin-ready operational climate and weather monitoring services.
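The NL-DisTrad idea of fitting a (nonlinear) LST-NDVI relation at coarse resolution and applying it to fine-resolution NDVI can be sketched as follows. This is a strongly simplified illustration with a polynomial standing in for the project's ML model, and the residual-redistribution step of DisTrad is omitted:

```python
import numpy as np

def distrad_sharpen(lst_coarse, ndvi_coarse, ndvi_fine, degree=2):
    """DisTrad-style sharpening sketch: fit LST as a polynomial function
    of NDVI at coarse resolution, then evaluate that relation on the
    fine-resolution NDVI field to predict fine-resolution LST."""
    coeffs = np.polyfit(np.ravel(ndvi_coarse), np.ravel(lst_coarse), degree)
    return np.polyval(coeffs, np.asarray(ndvi_fine, dtype=float))

# Synthetic check: a known LST-NDVI relation is recovered at fine scale.
ndvi_c = np.linspace(0.1, 0.8, 50)
lst_c = 320.0 - 20.0 * ndvi_c           # warmer where vegetation is sparse
lst_f = distrad_sharpen(lst_c, ndvi_c, [0.2, 0.6])
print(lst_f)                            # ~[316, 308] K
```

In the full method, the polynomial is replaced by a trained ML regressor and coarse-scale residuals are added back, which is what lets NL-DisTrad capture relations that a global linear fit would miss.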

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: UpGreen: EO-based Urban Green Assessment, Prediction and Vision

Authors: M.Sc. DAMIAN HRUBAN, M.A. Jan Labohý, Ph.D. MIKULAS MURON, Ph.D. LENKA FOLTYNOVA, M.A. MARTIN VOKRAL
Affiliations: World from Space, ASITIS, Atregia
World from Space is developing UpGreen, a service that accurately assesses, predicts, and proposes urban green infrastructure using advanced Earth Observation and geospatial methods. It is currently being piloted over Copenhagen and Lisbon. A comprehensive understanding of the dynamics of urban greenery is ensured by using multi-sensor, multi-resolution, and multi-temporal approaches. The service comprises three interdependent modules: UpGreen Assessment, UpGreen Prediction, and UpGreen Vision. The UpGreen service is based on open, city-owned and commercial EO and non-EO data. Temporal series of PlanetScope and Landsat 8/9 are processed in a data cube: cloud masking, co-registration, and spectral index computation. The time series are then cleaned using drop-handling and smoothing methods. Incorporated open data include OpenStreetMap, CMIP6 and ERA5, and Global Canopy Height data. City data include the administrative division, RGBI aerial imagery, socio-economic data, elevation models and a tree inventory for validation. UpGreen Assessment uses these data to provide detailed delineation and segmentation of greenery segments and performs allocation of attributes and ecosystem services of urban green spaces. The information gathered per green segment includes, for instance: greenery productivity state and trend, height, biomass, cooling effect, connectivity, accessibility for citizens, carbon sequestration, shaded area cast and others. UpGreen Prediction utilizes advanced trend analyses and arborist-based causal relationships applied to vast amounts of EO and other data to forecast future scenarios for urban green three years ahead and for the 2035 and 2050 horizons. UpGreen Vision provides actionable insights for optimal urban green planning based on the city's preferred ecosystem services targets. The recommendations include suggesting the most effective tree placement distribution and quantities to maximize environmental and socio-economic benefits.
The service is a technical response to domain requirements gathered in the preceding ESA Feasibility Study; in summary, these are (1) holistic understanding and strategic planning of urban green, (2) trend analysis and forecasting of urban green health, and (3) data interoperability for better stakeholder engagement. The UpGreen demonstration pilot is currently being developed within an ESA project: Development and Verification of Urban Analytics (4000143727/24/I-DT). It has been validated against tree inventory ground-truth data collected by arborists (consortium partner Atregia) and the Department of Xylogenesis and Biomass Allocation (Czech Global Change Research Institute). Beyond that, a comparative analysis with 360° ground panorama views from multiple time slices has been successfully performed. The validation concluded that the core metrics, the productivity state and productivity trend indicators, capture the overall productivity/photosynthetic activity of urban greenery, which reflects greenness, physiological age, dimensions, stress and vitality, assessed through spectral phenology metrics derived from satellite data. The prototype, visualised in the user interface on a sample AOI within Copenhagen, can be found under a non-public link: http://wfs-upgreen-website.s3-website.eu-central-1.amazonaws.com/ The business model, go-to-market activities and first partnerships are already set up, and a fully operational commercial product-as-a-service is scheduled for completion after the end of the project. A consortium partner, ASITIS, will be UpGreen's product manager. More information on the ESA project and the designed product: https://business.esa.int/projects/upgreen UpGreen will assist cities in making informed decisions towards sustainable urban development by enhancing ecosystem services, urban resilience, and citizen well-being through efficient nature-based solutions.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: T4 version of intelligent space-borne data-fusion for Smart Cities governance

Authors: Paolo Caporossi, Gerardo Ferraioli, Ing. Stefano Cruciani, Prof. Lorenzo Lipparini, Giovanni Quacquarelli
Affiliations: Titan4 S.r.l., Department of Earth Science, University of Roma Tre
Metropolitan cities are at the forefront of the global shift toward Smart Cities, leveraging cutting-edge technologies to enhance sustainability, resilience, and competitiveness (Kucherov et al., 2017; Josipovic & Viergutz, 2023). Within this context, the Space Economy plays a pivotal role by integrating satellite-derived data with advanced digital technologies to mitigate natural hazards impacting critical infrastructures such as transportation networks (roads and railways), energy grids, and water systems. These systems are increasingly vulnerable to geohazards, including landslides, floods, subsidence, and heatwaves, phenomena exacerbated by the ongoing effects of climate change. This study presents a comprehensive methodology that combines satellite data (from radar, multispectral optical, infrared (IR), and atmospheric sensors) and field observations using advanced technologies such as Artificial Intelligence (AI) and Data Meshing within a unified operational ecosystem. This integrative approach represents a vital step toward the efficient management and sustainable maintenance of urban infrastructures. Radar satellites, such as Sentinel-1, provide millimetric precision in monitoring ground and structural deformations via Interferometric Synthetic Aperture Radar (InSAR), allowing for the early identification of geotechnical and structural risks. Multispectral optical sensors, such as Sentinel-2, deliver critical insights into land cover, permeability levels and vegetation, supporting hydrogeological risk mitigation. Infrared sensors, including Landsat and MODIS, detect thermal anomalies indicative of urban heat islands, water leakage, and energy network irregularities. Atmospheric monitoring satellites like Sentinel-5P offer precise data on pollutants (e.g., NO2, CO, O3), essential for environmental mitigation strategies.
The integration of satellite data with field measurements, including geotechnical surveys, hydrometeorological data, and structural information, further enriches the analytical framework. Additionally, Internet of Things (IoT) devices, equipped with distributed sensors, provide real-time data on atmospheric, hydrological, and infrastructural conditions, significantly enhancing situational awareness. These heterogeneous datasets, increasingly available through open-source platforms at high temporal and spatial resolutions, are integrated in this study within Geographic Information Systems (GIS) platforms. Advanced machine learning algorithms enable the analysis and synthesis of these data, generating predictive models and detailed risk assessments. This research employed custom-developed software, utilizing Python-based scripts to ensure full interoperability across diverse systems and seamless data exchange between devices. A dedicated monitoring dashboard was designed to consolidate sensor data, processing outputs, and risk analyses into an intuitive and actionable interface, facilitating operational planning and decision-making. The proposed approach can support a substantial improvement in the capacity for monitoring and managing metropolitan areas, potentially enabling near real-time assessment of natural hazard impacts on critical infrastructure. By leveraging AI-driven analysis and integrated visualization tools, this approach appears able to provide actionable insights, supporting the prioritization of interventions and the optimization of resource allocation. This framework can also empower urban administrations and infrastructure managers to implement targeted interventions, enhance resource efficiency, and promote resilience and sustainability, aligning with global objectives for sustainable development and climate change mitigation.
The findings highlight the importance of integrating Space Economy resources with digital innovation to advance the resilience and sustainability of Smart Cities.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Study of Erosion in Oil Extraction Fields Based on Interferometric Techniques - The Case of the Ghawar Oil Field (Saudi Arabia)

Authors: Joana Vines
Affiliations: Tecnico Lisboa
The world's largest oil reservoir, Ghawar, stretches across a vast area of about 280,000 square kilometers in Saudi Arabia's Eastern Province. With an estimated 75 billion barrels of recoverable oil reserves, it is an extraordinary source of energy. Since production began in 1951, Ghawar has been a crucial player in the global energy market. Its complex reservoir structure, with multiple oil-bearing layers extending from 1,000 meters to an impressive 6,000 meters, presents both challenges and opportunities for oil exploration and extraction. In the Ghawar Field, waterflooding is the main approach for extracting oil: over a million barrels of water are pumped into the reservoir every day to force the oil towards extraction wells. Since this is an extremely arid, desert area, the water being brought in artificially may be influencing the rate at which erosion acts in place. On the other hand, the permanence of this production field and its workplaces along the reservoir may be strengthening the development of neighboring conurbations. To assess the intensity and significance of the erosion processes in this area, Sentinel-1 imagery will be leveraged. Using the SNAP tool, annual Digital Elevation Models (DEMs) will be derived from Synthetic Aperture Radar (SAR) images acquired between 2015 and 2024 through interferogram generation. This approach will allow for a detailed examination of the land profile evolution, providing critical insights into changes in topography and the progression of erosion over nearly a decade. Using Sentinel-2 imagery, the urban development of Al Hufüf, the closest metropolis to Ghawar, is also being studied. This study seeks to examine the growth dynamics and land use transformations in Al Hufüf using high-resolution multispectral data from the Sentinel-2 satellite.
The study employs remote sensing classification techniques, such as Support Vector Machines, to classify and monitor urban land cover, ensuring precise identification of various urban features such as roads, buildings, vegetation, and barren land. The application of the Normalized Difference Vegetation Index (NDVI) will aid in differentiating vegetated areas from non-vegetated ones, offering valuable insights into the city's green spaces.
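As a brief, hedged illustration of the NDVI computation mentioned above (a generic sketch with synthetic reflectance values, not this study's own code), the index is simply (NIR - Red) / (NIR + Red):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    denom = nir + red
    # Guard against division by zero on empty/dark pixels.
    return np.where(denom == 0.0, 0.0, (nir - red) / np.where(denom == 0.0, 1.0, denom))

# Synthetic Sentinel-2-like reflectances: vegetation shows high NIR, low red.
print(ndvi([0.45, 0.30, 0.10], [0.05, 0.10, 0.09]))  # [0.8, 0.5, ~0.05]
```

Values above roughly 0.3-0.4 are commonly read as vegetated, which is how the index helps separate green spaces from barren land.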

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Predictability of the Summer 2022 Yangtze River Valley Heatwave in Multiple Seasonal Forecast Systems

Authors: Prof. Lijuan Chen
Affiliations: China Meteorological Administration Key Laboratory for Climate Prediction Studies, National Climate Centre,CMA, Collaborative Innovation Center on Forecast and Evaluation of Meteorological Disasters (CIC-FEMD), Nanjing University of Information Science and Technology
Predictability of the Summer 2022 Yangtze River Valley Heatwave in Multiple Seasonal Forecast Systems Jinqing ZUO¹², Jianshuang CAO³⁴, Lijuan CHEN*¹², Yu NIE¹², Daquan ZHANG¹², Adam A SCAIFE⁵⁶, Nick J. DUNSTONE⁵, and Steven C HARDIMAN⁵ 1 China Meteorological Administration Key Laboratory for Climate Prediction Studies, National Climate Centre, Beijing 100081, China 2 Collaborative Innovation Center on Forecast and Evaluation of Meteorological Disasters (CIC-FEMD), Nanjing University of Information Science and Technology, Nanjing 210044, China 3 Nansen-Zhu International Research Centre, Institute of Atmospheric Physics, Chinese Academy of Sciences, Beijing 100029, China 4 University of Chinese Academy of Sciences, Beijing 100049, China 5 Met Office Hadley Centre, Exeter EX1 3PB, United Kingdom 6 University of Exeter, Exeter EX4 4QF, United Kingdom ABSTRACT The Yangtze River Valley (YRV) includes large and medium-sized cities in China, and meteorological disasters in the region have a major impact on economic development and people's lives. In July and August 2022, the region experienced record-breaking heatwaves. The characteristics, causes, and impacts of this extreme event have been widely explored, but its seasonal predictability remains elusive. This study assessed the real-time one-month-lead prediction skill of the summer 2022 YRV heatwaves using 12 operational seasonal forecast systems. Results indicate that most individual forecast systems and their multi-model ensemble (MME) mean exhibited limited skill in predicting the 2022 YRV heatwaves. Notably, after the removal of the linear trend, the predicted 2-m air temperature anomalies were generally negative in the YRV, except for the Met Office GloSea6 system, which captured a moderate warm anomaly.
While the models successfully simulated the influence of La Niña on the East Asian–western North Pacific atmospheric circulation and associated YRV temperature anomalies, only GloSea6 reasonably captured the observed relationship between the YRV heatwaves and an atmospheric teleconnection extending from the North Atlantic to the Eurasian mid-to-high latitudes. Such an atmospheric teleconnection plays a crucial role in intensifying the YRV heatwaves. In contrast, other seasonal forecast systems and the MME predicted a distinctly different atmospheric circulation pattern, particularly over the Eurasian mid-to-high latitudes, and failed to reproduce the observed relationship between the YRV heatwaves and Eurasian mid-to-high latitude atmospheric circulation anomalies. These findings underscore the importance of accurately representing the Eurasian mid-to-high latitude atmospheric teleconnection for successful YRV heatwave prediction. Key words: the summer 2022 YRV heatwaves; real-time prediction skill; operational seasonal forecast systems; Eurasian mid-to-high latitude teleconnection. Article Highlights: (1) Most models predicted cold anomalies after removing the long-term trend for the 2022 record-breaking heatwaves in the YRV. (2) GloSea6 stands out as the only model predicting a moderate warm anomaly after removing the linear warming trend. (3) The underestimated warm anomalies are linked to the deficiency of the models in simulating the relation between YRV heatwaves and the Eurasian teleconnection. *presenter & corresponding author, chenlj@cma.gov.cn

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Projection of Precipitation and Temperature in Major Cities in Pakistan Using Multi-Model Ensembles

Authors: Fahad Shah
Affiliations: Graduate School of Smart Society Practical Sciences, Hiroshima University
This study evaluates future projections of variation in monthly precipitation and average temperature in major cities of Pakistan. Using 16 General Circulation Models (GCMs) from Coupled Model Intercomparison Project Phase 6 (CMIP6), the analysis constructs multi-model ensembles (MMEs) by selecting GCMs that best match observed historical data through an Artificial Neural Network (ANN) based statistical downscaling approach. The performance of these models was assessed using five statistical metrics: Correlation Coefficient, Nash–Sutcliffe Efficiency, Root Mean Squared Error, Kling–Gupta Efficiency, and the Modified Index of Agreement. The results show that MMEs outperform individual GCMs in simulating historical temperature and precipitation trends across the cities. Projections for 2024–2100, based on four Shared Socioeconomic Pathways (SSP1-2.6, SSP2-4.5, SSP3-7.0, and SSP5-8.5), reveal a decline in annual precipitation by 39.22%, 48.79%, 36.27%, and 38.08%, respectively. In terms of temperature, maximum temperature is projected to rise by 5.95% (+1.85°C), 12.79% (+3.97°C), 9.86% (+3.06°C), and 16.22% (+5.04°C), while minimum temperature is projected to decrease by 4.25% (-0.76°C) and 0.74% (-0.13°C) under SSP1-2.6 and SSP2-4.5, respectively. However, under SSP3-7.0 and SSP5-8.5, the results show that minimum temperature is expected to increase by 0.20% (+0.04°C) and 7.26% (+1.30°C), respectively. The greatest potential for precipitation decline is seen in Islamabad, Multan, and Sialkot. At the same time, higher increases in maximum temperature are expected in high-altitude cities like Quetta and Peshawar compared to low-altitude areas. This study provides essential insights to help policymakers and stakeholders develop targeted strategies for addressing the impacts of climate change in cities.
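Two of the skill metrics named above, Root Mean Squared Error and Nash–Sutcliffe Efficiency, can be sketched generically (illustrative code and toy values, not this study's implementation):

```python
import numpy as np

def rmse(obs, sim):
    """Root Mean Squared Error between observed and simulated series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is perfect, 0 means no better than the obs mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2))

obs = [10.0, 12.0, 14.0, 16.0]   # toy observed values
sim = [11.0, 12.0, 13.0, 17.0]   # toy model values
print(rmse(obs, sim), nse(obs, sim))  # ~0.866, 0.85
```

Metrics of this kind are what allow the best-matching GCMs to be ranked and selected for the multi-model ensembles.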

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A.01.03 - POSTER - Fourier Transform Spectroscopy for Atmospheric Measurements

Fourier Transform Spectroscopy (FTS) is a powerful technique for atmospheric observations, allowing the Earth's and atmosphere's thermal radiation to be sampled with high spectral resolution. This spectral range carries profile information on many atmospheric gases (water vapour, carbon dioxide, nitrous oxide, methane, ammonia, nitric acid, ...), but also information on cloud properties (e.g. phase or liquid/ice water path) and aerosol properties (e.g. dust optical depth). Measurements have been performed from satellites (nadir and limb), from the ground, and from airborne platforms for several decades, and have recently come into the foreground in ESA's Earth Explorer (EE) programme with the EE9 FORUM mission and the EE11 candidate CAIRT, both aiming to fly in convoy with the FTS IASI-NG on MetOp-SG. The Infrared Sounder (IRS) will be launched on MTG-S1 in 2025. In addition, new airborne and ground-based instruments have become available with performance and versatility that allow for innovative research applications. This session invites presentations on:
- retrieval algorithms and methods for uncertainty quantification including calibration/validation techniques for existing and future missions,
- new spectrometer developments for field work and satellite applications.
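The core FTS principle the session builds on, namely that the spectrum is recovered from the interferogram by Fourier transform, can be sketched in a few lines (a generic numerical illustration, not any mission's processing chain):

```python
import numpy as np

# Ideal symmetric interferogram of a monochromatic source: a cosine in
# optical path difference (OPD). An FFT recovers the source wavenumber.
n = 8192
max_opd = 1.0                                  # cm; resolution ~ 1/(2*max_opd)
opd = np.linspace(-max_opd, max_opd, n, endpoint=False)
sigma0 = 1000.0                                # source wavenumber, cm^-1
igm = np.cos(2 * np.pi * sigma0 * opd)

spec = np.abs(np.fft.rfft(np.fft.ifftshift(igm)))    # put OPD=0 at index 0
wn = np.fft.rfftfreq(n, d=opd[1] - opd[0])           # wavenumber axis, cm^-1
print(wn[np.argmax(spec)])                           # peak at 1000.0 cm^-1
```

The maximum OPD sets the spectral resolution, which is why high-resolution sounders such as IASI-NG require long interferometer scans.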

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Stratospheric and upper tropospheric measurements of long-lived tracers and photochemically active species of the nitrogen, chlorine, and bromine families with GLORIA-B

Authors: Gerald Wetzel, Sören Johansson, Felix Friedl-Vallon, Michael Höpfner, Jörn Ungermann, Tom Neubert, Valéry Catoire, Cyril Crevoisier, Andreas Engel, Thomas Gulde, Patrick Jacquet, Oliver Kirner, Anne Kleinert, Erik Kretschmer, Dr Johannes Laube, Guido Maucher, Hans Nordmeyer, Christoph Piesch, Peter Preusse, Markus Retzlaff, Tanja Schuck, Wolfgang Woiwode, Martin Riese, Peter Braesicke
Affiliations: Institute of Meteorology and Climate Research Atmospheric Trace Gases and Remote Sensing (IMKASF), Karlsruhe Institute of Technology, Institute of Climate and Energy Systems - Stratosphere (ICE-4), Forschungszentrum Jülich, Central Institute of Engineering, Electronics and Analytics - Electronic Systems (ZEA-2), Forschungszentrum Jülich, Laboratoire de Physique et Chimie de l’Environnement et de l’Espace (LPC2E/CNRS), Université Orléans, Laboratoire de Météorologie Dynamique, IPSL, CNRS, Institute for Atmospheric and Environmental Sciences, Goethe Universität, Scientific Computing Center (SCC), Karlsruhe Institute of Technology, Business Area Research and Development, Deutscher Wetterdienst
The Gimballed Limb Observer for Radiance Imaging of the Atmosphere (GLORIA) is a limb-imaging Fourier-transform spectrometer (iFTS) providing mid-infrared spectra with high spectral sampling (0.0625 cm⁻¹ in the wavenumber range 780-1400 cm⁻¹). GLORIA is a demonstrator for the Changing-Atmosphere Infra-Red Tomography Explorer (CAIRT), one of the remaining two candidates for the ESA Earth Explorer 11 mission. A version of GLORIA dedicated to deployment on aircraft has been successfully flown on seven research campaigns up to 2023, with a further one planned for March 2025. In order to extend the vertical range of GLORIA to observations in the middle stratosphere while still reaching down to the middle troposphere, the instrument was adapted to measurements from stratospheric balloon platforms. GLORIA-B performed its first flight from Kiruna (northern Sweden) in August 2021 and its second flight from Timmins (Ontario, Canada) in August 2022 in the framework of the EU Research Infrastructure HEMERA. The objectives of the GLORIA-B observations for these campaigns have been its technical qualification and the provision of a first imaging hyperspectral limb-emission dataset from 5 to 36 km altitude. Scientific objectives are discussed, which include, amongst many others, the diurnal evolution of photochemically active species belonging to the nitrogen (N₂O₅, NO₂), chlorine (ClONO₂), and bromine (BrONO₂) families and the retrieval of SF₆, an important molecule for determining the mean age of air. In this contribution we demonstrate the performance of GLORIA-B with regard to level-2 data of the flight in August 2021, consisting of retrieved altitude profiles of a variety of trace gases. We will show examples of selected results together with uncertainty estimations and altitude resolution, as well as comparisons of long-lived tracers to accompanying in-situ datasets.
Combined error bars of the instruments involved were calculated in order to determine whether a detected difference between measurements of the instruments is significant or not. In addition, diurnal variations of photochemically active gases are compared to simulations of the chemistry climate model EMAC. Calculations largely reproduce the temporal variations of the species observed by GLORIA-B.
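The combined-error-bar test described above follows the generic rule for independent uncertainties (a hedged sketch of the standard significance test, not the authors' exact code; all values are illustrative):

```python
import math

def significant_difference(x1, s1, x2, s2, k=2.0):
    """For two independent measurements x1 +/- s1 and x2 +/- s2, the difference
    is taken as significant (at ~k-sigma) if it exceeds k * sqrt(s1^2 + s2^2)."""
    return abs(x1 - x2) > k * math.sqrt(s1 ** 2 + s2 ** 2)

# Toy mixing-ratio comparison: 10 pptv apart vs. 30 pptv apart, both +/- 5 pptv.
print(significant_difference(310.0, 5.0, 300.0, 5.0))   # 10 < 2*7.07 -> False
print(significant_difference(330.0, 5.0, 300.0, 5.0))   # 30 > 14.1  -> True
```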

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Observations of dichloromethane-rich air masses transported from the Asian summer monsoon region across the Pacific, Alaska and Canada

Authors: Wolfgang Woiwode, Sören Johansson, Jörn Ungermann, Markus Dick, Felix Friedl-Vallon, Norbert Glatthor, Jens-Uwe Grooß, Thomas Gulde, Michael Höpfner, Jan Kaumanns, Anne Kleinert, Erik Kretschmer, Valentin Lauther, Guido Maucher, Tom Neubert, Hans Nordmeyer, Christof Piesch, Felix Plöger, Peter Preusse, Markus Retzlaff, Sebastian Rhode, Heinz Rongen, Georg Schardt, Björn-Martin Sinnhuber, Johannes Strobel, Franziska Trinkl, Ronja van Luijt, Stefan Versick, Bärbel Vogel, Michael Volk, Gerald Wetzel, Peter Braesicke, Peter Hoor, Martin Riese
Affiliations: Institute of Meteorology and Climate Research - Atmospheric Trace Gases and Remote Sensing (IMK ASF), Karlsruhe Institute of Technology, Institute for Climate and Energy Systems, Stratosphere (ICE-4), Forschungszentrum Jülich, Central Institute of Engineering, Electronics and Analytics - Electronic Systems (ZEA-2), Forschungszentrum Jülich, Institute for Atmospheric and Environmental Research, University of Wuppertal, Institute for Atmospheric Physics, Johannes Gutenberg University
Dichloromethane (CH₂Cl₂) is known to be the most abundant chlorinated very short-lived halogenated substance (VSLS) in the atmosphere and capable of delaying the recovery of the stratospheric ozone layer. Recent studies have shown that CH₂Cl₂ emissions in East Asia have been rising rapidly over the last decades. Here, we present unique 2-dimensional observations of the mesoscale structure of CH₂Cl₂-rich air masses over the Pacific, Alaska and Canada that originated from the Asian summer monsoon (ASM) region. Observations by the infrared limb imager GLORIA (Gimballed Limb Observer for Radiance Imaging of the Atmosphere) aboard the German research aircraft HALO (High Altitude and LOng Range Research Aircraft) during the PHILEAS (Probing High Latitude Export of Air from the Asian Summer Monsoon) campaign document the size and structure of CH₂Cl₂-rich air masses that were transported over the Pacific in August and September 2023. High CH₂Cl₂ mixing ratios exceeding 450 pptv (~700% of the northern hemispheric background) are found far away from their anticipated source. Together with chemistry transport modelling, backward trajectories and in situ observations, the GLORIA observations provide new insights into the long-range transport of CH₂Cl₂-rich air masses from the ASM region in the free troposphere and tropopause region and show indications of mixing with lowermost stratospheric air. The combined results underline the importance of monitoring CH₂Cl₂ emissions. The GLORIA instrument is an airborne demonstrator for the ESA Earth Explorer 11 candidate CAIRT (Changing-Atmosphere Infra-Red Tomography explorer), which is currently under study in Phase A. CAIRT would provide global and continuous observations of a multitude of trace gases, including chlorinated species, and thus be very helpful to investigate important factors associated with stratospheric ozone recovery and atmospheric composition changes in general.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A Versatile Fourier Transform Spectrometer Model for Future Earth Observation Missions

Authors: Mr. Tom Piekarski, Dr. Christophe Buisset, Dr. Anne Kleinert, Dr. Felix Friedl-Vallon, Mr. Arnaud Heliere, Dr. Julian Hofmann, Dr. Ljubiša Babić, Mr. Micael Dias Miranda, Dr. Tobias Guggenmoser, Mr. Daniel Lamarre, Dr. Flavio Mariani, Felice Vanin, Dr. Ben Veihelmann
Affiliations: European Space Agency, Institute of Meteorology and Climate Research, Karlsruhe Institute of Technology
Infrared Fourier Transform Spectrometers (FTS) have been widely used in previous Earth Observation space missions to monitor the abundance of atmospheric gases (e.g., TANSO-FTS, ACE-FTS, MIPAS, IASI) and are planned to be used in several future missions such as MTG-IRS, FORUM or CAIRT. We have developed an FTS model that uses instrumental parameters as inputs, generates interferograms, and reconstructs a scene spectrum with associated noise. This model has been implemented as Python code with the primary objective of providing a tool for engineers and scientists i) to estimate FTS instrument performance in early mission phases, ii) to understand how the instrument design and errors relate to signal quality, and iii) to select the most suitable calibration strategy and interferogram numerical apodization. The model comprises a forward and a backward model. The forward model simulates the interferograms provided by the FTS and estimates the associated noise for three kinds of sources: the cold and hot calibration black bodies and the Earth scene. These interferograms include the contributions of the detector, the thermal background, and instrumental effects such as self-apodization and differential shearing, tilt, and wavefront error. This forward model has been validated through comparison with real measurements from the airborne FTS GLORIA, a joint development of Karlsruhe Institute of Technology and Forschungszentrum Jülich. The forward model has also been used at ESA for the assessment of the Earth Explorer 11 CAIRT radiometric performance. The backward model reconstructs the scene radiance from these interferograms by i) simulating the radiometric calibration, ii) retrieving the spectral radiance, including the interferogram numerical apodization and zero-filling, and iii) propagating the associated noise. The outputs of this model are the reconstructed spectral radiance, gain, and offset with their associated noise.
The FTS forward and backward theoretical models, together with the Python code, have been developed in the frame of a Young Graduate Traineeship at ESA. They have been fully documented, with the aim of being released in 2025-2026 to support future missions' performance assessment.
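The numerical apodization and zero-filling steps mentioned in the backward model can be illustrated generically (a toy sketch, not the ESA model itself; the window choice and values are arbitrary assumptions):

```python
import numpy as np

n = 1024
opd = np.linspace(-0.5, 0.5, n, endpoint=False)   # cm, toy two-sided OPD grid
igm = np.cos(2 * np.pi * 200.0 * opd)             # monochromatic test line

apod = np.hanning(n)     # apodization window: tapers edges, suppresses sidelobes
zf = 4                   # zero-filling factor: 4x finer spectral sampling
spec = np.abs(np.fft.rfft(igm * apod, n * zf))    # FFT zero-pads to n*zf points
wn = np.fft.rfftfreq(n * zf, d=opd[1] - opd[0])   # wavenumber axis, cm^-1
print(wn[np.argmax(spec)])                        # peak near 200 cm^-1
```

Zero-filling only interpolates the spectrum onto a finer grid; the true resolution is still set by the maximum OPD and the apodization window.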

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: New Experimentally Derived Temperature-Dependent Refractive Index of Ice in the Infrared

Authors: Cecilia Taverna, Dr Jean-Blaise Brubach, Dr Marine Verseils, Dr Quentin Libois, Dr Laurent Labonnote, Dr Pascale Roy
Affiliations: Synchrotron Soleil, Laboratoire d’Optique Atmosphérique, Centre National de Recherches Météorologiques
The spectral distribution of the infrared energy emitted by the Earth is of utmost importance for understanding the planet's energy budget, especially under climate change. In this context, ice clouds are among the atmospheric constituents that have the largest impact on the infrared emission of Earth. To accurately estimate their radiative properties, a detailed knowledge of the ice complex refractive index (CRI) is required, as it fundamentally controls the cloud-radiation interactions. However, the ice CRI is still poorly known, especially in the far-infrared (FIR, 100-700 cm⁻¹) and for temperatures relevant for ice clouds (170-270 K), for which the CRI variation is expected to be significant. This lack of data over a critical temperature range is due to experimental issues. Indeed, most previous experimental determinations of the ice CRI used direct condensation of water vapor on a cold plate (around 100 K) to form an ice film, making it impossible to reach temperatures above 195 K due to the sublimation of ice at this level of vacuum (around 10⁻⁶ mbar). To overcome the technical limits of previous studies, we developed a specific cell allowing the formation of ice films from 10 K up to 270 K. The cell consists of a copper body with a central hole in which two diamond windows are placed, separated by a polypropylene spacer; the spacer thickness defines the sample thickness (typically from 0.5 to 50 microns). In this configuration the sample is at ambient pressure during the entire experiment, permitting us to overcome the sublimation temperature limit. Furthermore, the copper body of the cell guarantees good thermal conductivity, and the diamond windows are transparent from the far-infrared up to the visible range. The latter allows interference fringes, produced by shining a UV-visible lamp on the sample, to be used to determine the sample thickness in situ.
This new cell, combined with the synchrotron radiation of the AILES beamline of the SOLEIL synchrotron facility, allows us to obtain transmission spectra across the entire infrared range (20-10000 cm⁻¹) with an optimized signal-to-noise ratio. Furthermore, with our setup we are also able to directly measure the fundamental parameters of the experiment (the thickness and temperature of the ice film). Finally, we implemented a retrieval algorithm for estimating the CRI of ice from these transmission measurements, accounting for the complexity of the multi-layer system used. We were thus able to obtain, for the first time from experimental data, the refractive index of water ice across the full infrared range for temperatures of 150-270 K. In this presentation, we will display the new optical indices over a wide temperature range and will compare them with the data available in the literature.
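The fringe-based thickness determination described above follows the standard etalon relation at normal incidence; a hedged worked example with illustrative numbers (not the authors' measured values):

```python
# Channel (interference) fringes in a thin film of thickness d and refractive
# index n_film repeat in wavenumber with spacing delta_nu = 1 / (2 * n_film * d),
# so the thickness follows as d = 1 / (2 * n_film * delta_nu).
n_film = 1.31            # approximate visible refractive index of ice
delta_nu = 1500.0        # observed fringe spacing, cm^-1 (hypothetical)
d_cm = 1.0 / (2.0 * n_film * delta_nu)
print(d_cm * 1e4)        # thickness in microns, ~2.5 um
```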

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Independent Performance Validation of the Instrument Simulator Model of CAIRT’s End to End Performance Simulator

Authors: Jonathan Flunger, Felix Friedl-Vallon, Alex Hoffmann, Anne Kleinert, Valère Mazeau
Affiliations: EOP Climate Action, Sustainability and Science Department, European Space Agency (ESA/ESTEC), EOP Future Missions and Architecture Department, European Space Agency (ESA/ESTEC), Institute of Meteorology and Climate Research, Karlsruhe Institute of Technology (KIT)
The Changing-Atmosphere Infra-Red Tomography Explorer (CAIRT) is one of two candidate missions from which ESA's Earth Explorer 11 will be selected for implementation in September 2025. CAIRT's overarching science goal is to reveal, resolve, and unravel the complex coupling between composition, circulation, and climate, from the mid-troposphere to the lower thermosphere. To achieve this, CAIRT carries a hyperspectral infrared limb-imaging Fourier Transform Spectrometer (FTS) that will observe Earth's limb simultaneously at altitudes between 5 and 115 kilometres, with unprecedented horizontal and vertical sampling. With this innovative instrument, CAIRT will produce a unique three-dimensional dataset of numerous trace gases, aerosols, and temperature that will greatly improve our understanding of atmospheric gravity waves, circulation and mixing; the coupling with the upper atmosphere; the impacts of solar variability and space weather; and aerosols and pollutants in the upper troposphere and lower stratosphere. The evaluation of the CAIRT mission requirements and of the observation concepts is a crucial step in the early mission phases, and is conducted, among other tools, with the CAIRT end-to-end performance simulator (CEEPS). This simulator is a collection of software modules simulating each part of the observation process, from geophysical scenes to retrieved data products such as trace gas concentrations and temperature. At the heart of CEEPS lies the Instrument Simulator Module (ISM), which simulates the measurement process of the FTS. While the ISM has been verified and partially validated in the frame of CEEPS, an additional independent validation further increases confidence in the quality of the CEEPS assessment. Here, we present the results of a study in which we evaluate, cross-compare, and critically discuss the performance of the CEEPS ISM.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: The CAREVALAB mission to examine the UTLS by 3-D tomography

Authors: Jörn Ungermann, Sören Johansson, Samuele Del Bianco, Michael Höpfner, Peter Preusse, Piera Raspollini, Sebastian Rhode, Björn-Martin Sinnhuber, Gerald Wetzel
Affiliations: Forschungszentrum Jülich GmbH, Karlsruhe Institute of Technology, Istituto di Fisica Applicata “Nello Carrara” (IFAC), Consiglio Nazionale delle Ricerche (CNR)
The CAREVALAB project focuses on aircraft measurements planned within the framework of the Arctic Springtime Chemistry Climate Investigations (ASCCI) campaign in March 2025. Among other in-situ and remote sensing instruments, the GLORIA (Gimballed Limb Observer for Radiance Imaging of the Atmosphere) instrument will be deployed on the German High Altitude and LOng range research aircraft (HALO). GLORIA is a limb-imaging Fourier-transform spectrometer covering the spectral range from 780 to 1400 cm⁻¹ with a spectral sampling of up to 0.0625 cm⁻¹. With its 2-D detector it measures more than 6000 spectra simultaneously. It is mounted in a gimbal, which allows stabilization on an aircraft and full control over the pointing, including the capability to scan horizontally and measure in the nadir direction. Apart from its use in a variety of scientific campaigns, it also serves as a prototype for the Earth Explorer 11 mission proposal CAIRT: over the past decade, we have benefited greatly from operating GLORIA to assess the possible performance of the proposed satellite mission. So far GLORIA has used dedicated circular flight patterns to derive 3-D trace gas volume mixing ratios in a manner similar to computed tomography. This is efficient in terms of flight time, but poses significant mathematical hurdles for the ill-posed inversion process due to the three-dimensionality of the problem and limitations in available measurement angles, as it requires treating the whole 3-D volume and all measurements in a single mathematical optimization problem. The measurement pattern of limb-sounding satellites allows a much simpler retrieval considering only 2-D cross-sections, as demonstrated by operating satellites such as MLS.
Here, we plan to showcase the first measurements by GLORIA replicating the simpler measurement pattern of limb-sounding satellites, in particular the proposed CAIRT satellite, which will allow a 3-D volume to be derived by processing separate 2-D atmospheric slices along the satellite track. This is only feasible by dedicating the full flight time of a HALO measurement flight to acquiring the necessary data. The data are enhanced by nadir-pointing measurements and co-located IASI spectra to allow a synergistic retrieval bypassing the respective shortcomings of limb and nadir sounders. In this contribution, we will show the results of studies using synthetic data and first results from actual measurements acquired during the ASCCI measurement campaign.
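The ill-posed inversion mentioned above is generically handled with regularized least squares; a toy 1-D sketch (illustrative only, with a hypothetical smoothing forward operator; the operational tomographic retrieval is far more involved):

```python
import numpy as np

# Toy Tikhonov-regularized retrieval: solve min ||K x - y||^2 + lam ||L x||^2,
# where K is a smoothing (hence ill-posed) forward operator and L a
# first-difference smoothness constraint.
rng = np.random.default_rng(0)
nx = 40
z = np.linspace(0.0, 1.0, nx)                          # toy vertical grid
K = np.exp(-((z[:, None] - z[None, :]) ** 2) / 0.01)   # broad overlapping kernels
x_true = np.sin(2 * np.pi * z)                         # smooth "true" profile
y = K @ x_true + 0.01 * rng.standard_normal(nx)        # noisy measurements

L = np.diff(np.eye(nx), axis=0)                        # first-difference operator
lam = 1e-2                                             # regularization strength
x_hat = np.linalg.solve(K.T @ K + lam * (L.T @ L), K.T @ y)
print(np.max(np.abs(x_hat - x_true)))                  # reconstruction error stays small
```

The regularization term is what keeps measurement noise from being amplified into unphysical oscillations, at the cost of some smoothing of the retrieved profile.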
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: The project CASIA for exploring the synergy between CAIRT and IASI-NG

Authors: Piera Raspollini, Flavio Barbara, Elisa Castelli, Ugo Cortesi, Francesco De Cosmo, Samuele Del Bianco, Bianca Maria Dinelli, Elisa Fabbri, Andrea Faggi, Marco Gai, Liliana Guidetti, Giuliano Liuzzi, Tiziano Maestri, Guido Masiello, Marco Menarini, Enzo Papandrea, Marco Ridolfi, Luca Sgheri, Sabrina Vittori
Affiliations: Istituto di Fisica Applicata Nello Carrara, Consiglio Nazionale delle Ricerche (IFAC-CNR), Istituto di Scienze dell’Atmosfera e del Clima, Consiglio Nazionale delle Ricerche (ISAC-CNR), Istituto per le Applicazioni del Calcolo, Consiglio Nazionale delle Ricerche (IAC-CNR), Section of Florence, University of Bologna, Department of Physics and Astronomy “Augusto Righi”, University of Basilicata, Department of Engineering, Istituto Nazionale di Ottica, Consiglio Nazionale delle Ricerche (INO-CNR)
The Changing-Atmosphere Infra-Red Tomography Explorer (CAIRT) is one of the two candidates for ESA’s Earth Explorer 11. CAIRT aims to investigate the coupling between circulation and composition in the middle atmosphere and to study their interaction with climate change. For this purpose, a 3-dimensional knowledge of the atmosphere with high spatial resolution is needed. If selected, CAIRT will fly in loose formation with a MetOp-SG satellite carrying IASI-NG together with several other nadir-viewing instruments. The two satellites will fly on the same orbit, dephased by about 27° so that the IASI-NG field of view matches the region of the CAIRT tangent points (for each line of sight of a given acquisition, the tangent point is the point closest to the surface). Both CAIRT and IASI-NG rely on Fourier transform spectroscopy (FTS) and have very similar characteristics in terms of spectral range (718 cm-1 to 2200 cm-1 for CAIRT and 645 cm-1 to 2760 cm-1 for IASI-NG) and resolution (0.4 cm-1 for CAIRT and 0.25 cm-1 for IASI-NG after apodisation). Both instruments exploit imaging detectors: in particular, for the first time from space, CAIRT will exploit a detector that simultaneously measures limb emission spectral radiance in two spatial dimensions, in altitude and horizontally (across-track, with a swath of about 400 km), and will allow very closely spaced (50 km apart) consecutive acquisitions along track. In this way, a horizontal resolution of 50 km both along track and across track, unprecedented for limb measurements, will be possible. IASI-NG will cover an even wider swath (about 2200 km) with a spatial sampling of about 25 km. 
The main difference between the two instruments is the observation geometry: CAIRT sounds the limb of the atmosphere, allowing measurements with high vertical resolution over a vertical extent from 5 km up to 115 km; IASI-NG performs nadir measurements, characterised by higher horizontal resolution but coarser vertical resolution, and provides information on the lower and middle troposphere. Together, CAIRT and IASI-NG can provide information on several trace species from the lowest layers of the atmosphere to the top of the atmosphere, both during day and night. Other advantages of the synergy were demonstrated during CAIRT Phase 0 for ozone and other trace species with the rigorous Complete Data Fusion technique, with the combined products characterised by smaller total error and better spatial resolution. The synergy can also help in studying clouds: CAIRT can provide information on the altitude and thickness of clouds and aerosol plumes. It excels in optically thin conditions but can also deal with all typical cirrus clouds in the upper troposphere, and it can detect volcanic ash plumes. In turn, IASI-NG, for the same scattering layers, can provide information on the total column amount and on optical and microphysical properties. Within this framework, the project CASIA (CAIRT and Synergy with IASI-NG), funded by the Italian Space Agency (ASI), has the objective of preparing a set of tools to study and fully exploit the complementary information of the two instruments. CASIA aims at developing an innovative and validated forward model, fast and accurate, for the simulation of CAIRT and IASI-NG measurements, both in clear sky and in the presence of scattering layers, and for the computation of 2-D (possibly 3-D) analytical derivatives ready to be included in a 2-D (3-D) retrieval of temperature, trace species, and possibly optical and microphysical properties of clouds in the MIR spectral range. 
The activity aims at contributing to the development of the CAIRT mission by consolidating secondary objectives of CAIRT, such as the study of the synergy between limb and nadir measurements also in the presence of clouds. We will present the activity carried out in the frame of this project and its first findings. Acknowledgements: This work is carried out within the ASI-funded project agreement “CASIA” n. 2023-3-HB.0, CUP n. F93C23000430001.
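As a flavor of why fusing limb (CAIRT-like) and nadir (IASI-NG-like) retrievals reduces the total error, consider the simplest possible case: two independent scalar estimates combined by inverse-variance weighting. This is a drastic simplification of the Complete Data Fusion technique mentioned above (which fuses full profiles using averaging kernels and covariance matrices), and the numbers are invented for illustration:

```python
# Inverse-variance fusion of two independent estimates of the same quantity.
# A toy stand-in for Complete Data Fusion; numbers are illustrative only.

def fuse(x1, var1, x2, var2):
    """Return the inverse-variance weighted mean and its variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    x_fused = (w1 * x1 + w2 * x2) / (w1 + w2)
    var_fused = 1.0 / (w1 + w2)
    return x_fused, var_fused

# Hypothetical ozone estimates (arbitrary units) with different error variances
x_limb, var_limb = 10.2, 0.4     # limb-like: smaller error
x_nadir, var_nadir = 9.6, 0.9    # nadir-like: larger error

x_f, var_f = fuse(x_limb, var_limb, x_nadir, var_nadir)
# var_f is smaller than either input variance: the fused product is better
# constrained than each individual retrieval, mirroring the Phase 0 result.
```

The fused estimate always lies between the two inputs and its variance is strictly smaller than either one, which is the scalar analogue of the "smaller total error" reported for the profile-level fusion.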
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: F.02.01 - POSTER - Harnessing the Power of Remote Sensing for Research and Development in Africa

Harnessing the power of remote sensing technology is instrumental in driving research and development initiatives across Africa. Remote sensing, particularly satellite imagery and big data analytics, provides a wealth of information crucial for understanding various aspects of the continent's environment, agriculture, and natural resources. This data-driven approach facilitates evidence-based decision-making in agriculture, land management, and resource conservation. Overall, remote sensing serves as a powerful tool for advancing research and development efforts in Africa, contributing to sustainable growth, environmental stewardship, and improved livelihoods across the continent. In this session, we aim to promote various initiatives fostering collaboration between African colleagues and those from other continents.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: SLIM but Mighty: Transforming Zambia’s Future with EO Solutions

Authors: Tomas Soukup, Tomas Bartalos, Stepan Bubak
Affiliations: GISAT, People In Need
The Sustainable Landscape through Integrated Management (SLIM) initiative, a cornerstone of the Team Europe Initiative (TEI) "Climate Action for Inclusive Green Recovery and Growth in Zambia," exemplifies the transformative potential of Earth Observation (EO) technologies to address the challenges of sustainability, climate change, and resilience. With joint funding contributed by the European Union and the Czech Republic, SLIM operates as a bold collaboration for integrated landscape management, ecosystem restoration, and community-based natural resource governance. Spanning from 2023 to 2027, SLIM aligns closely with Zambia’s 8th National Development Plan (8NDP), Vision 2030, and revised Nationally Determined Contributions (NDCs) under the Paris Agreement on Climate Change. It directly supports the EU-Zambia Forest Partnership, focusing on sustainable forest management, biodiversity conservation, and fire and land-use monitoring. These priorities are embedded within the wider EU “Green Partnership and Investment Programme” framework, addressing agriculture, forestry, biodiversity, water, and climate at their nexus to drive ecological resilience and socio-economic development. At its core, SLIM leverages EO as a key enabler to catalyze change across multiple sectors. As EO technologies provide vital data and insights, enabling precise land and water monitoring, fire prediction and mitigation, drought management, and land-cover change assessment, the ultimate goal is to integrate these EO-driven solutions into Zambia's decision-making processes, empowering local institutions such as the National Remote Sensing Center (NRSC) and the Ministry of Green Economy and Environment to achieve impactful, data-driven resource management.

EO as a Catalyst for Impactful Change
EO plays a pivotal role in the SLIM initiative, offering unique capabilities for data-driven management of Zambia’s natural resources. 
The initiative focuses on leveraging existing EO-based resources from the Copernicus programme and beyond, together with advanced methodologies, including AI and machine learning, to process and analyze EO data, uncovering patterns and trends critical for effective resource management. These approaches are complemented by a focus on integrating EO insights with local data sources and expertise, ensuring relevance and practicality for decision-makers. By demonstrating EO’s value in diverse application areas, SLIM will showcase how space-based data can transform traditional approaches to environmental and disaster management. This holistic use of EO highlights its potential to bridge the gap between scientific research and operational implementation, providing actionable intelligence for policymakers and stakeholders at all levels.

A Multi-Disciplinary Approach
The success of SLIM lies in its team’s multidisciplinary expertise, spanning remote sensing, geospatial analysis, environmental science, and capacity building, with strong involvement of local partners. Led by People in Need (PIN), the Czech-based international humanitarian and development organization, under the stewardship of the Czech Development Agency (CzDA), this collaboration ensures that SLIM benefits from both cutting-edge technology and on-the-ground knowledge, enabling tailored solutions for Zambia’s unique challenges. SLIM’s integration into the Green Nexus framework further amplifies its impact, emphasizing the interconnectedness of water, food, and energy systems; SLIM contributes by enhancing resource efficiency, reducing vulnerabilities, and promoting equitable growth.

Capacity Building and Technology Transfer
A core component of SLIM is its commitment to capacity building and technology transfer. 
Training programs, workshops, and user-centric service co-creation are embedded into the initiative, ensuring that Zambian institutions acquire the skills and knowledge needed to independently manage and sustain EO-based solutions. The initiative emphasizes collaboration with local stakeholders, fostering a sense of ownership and ensuring that developed systems and methodologies are both practical and sustainable. By transferring technology and know-how to Zambian institutions, SLIM strengthens their ability to leverage EO for improved decision-making, contributing to long-term resilience and self-reliance.

Driving Qualitative Change
SLIM is not just about addressing immediate environmental challenges; it aims to foster systemic change in how data and evidence are used to inform policy and practice in Zambia. The initiative’s emphasis on user engagement and co-creation ensures that EO-derived products are actionable and relevant, closing the gap between data availability and utilization. This contributes to a culture of evidence-based decision-making, promoting greater sustainability, resilience, and equity.

Conclusion
The SLIM initiative exemplifies the transformative potential of EO as a tool for sustainable development and resilience building. By combining cutting-edge EO technologies with local expertise and capacity-building efforts, SLIM delivers practical solutions that address Zambia’s environmental challenges while empowering its institutions and communities.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Integrated Use of Multisource Remote Sensing Data for National Scale Agricultural Drought Monitoring in Kenya

Authors: Gohar Ghazaryan, Dr. Maximilian Schwarz, S. Mohammad Mirmazloumi, Harison Kipkulei, Dr Tobias Landmann, Henry Kyalo, Rose Waswa, Tom Dienya
Affiliations: Leibniz Centre for Agricultural Lanscape Research, Remote Sensing Solutions GmbH, International Centre of Insect Physiology and Ecology, Regional Centre for Mapping of Resources for Development, Ministry of Agriculture and Livestock Development, Geography Department, Humboldt-Universität zu Berlin
Drought significantly affects agricultural systems, threatening crop yields, food security, and socio-economic stability. The availability of Earth Observation (EO) data has greatly enhanced drought monitoring by providing near-real-time information on crop conditions. However, monitoring efforts have primarily focused on identifying drought hazards rather than assessing their broader impacts and risks. While MODIS data has been instrumental in drought assessment, its mission is nearing completion, necessitating the integration of other datasets for effective decision-making. Moreover, a comprehensive understanding of drought risk and impact requires context-specific information, such as irrigation practices and cropping systems. Within the EO Africa National Incubator project ADM-Kenya, we co-developed solutions with several actors and stakeholders to create EO-based products assessing drought risk and impacts. Four operational EO-based products were produced at a national scale in Kenya: monthly drought hazard and risk maps, high-resolution crop condition, an irrigated/rainfed farming systems map, and downscaled evapotranspiration (ET). Additionally, a demonstration product for mapping mono- and mixed-cropping systems was generated specifically for Busia County. We selected Sentinel-2 and Sentinel-3 data as the primary sources for drought impact and risk assessment and for deriving agriculturally relevant information, including crop condition and evapotranspiration, as well as information on farming systems, i.e., irrigated/rainfed and mono/mixed cropping. 
Complementary datasets, such as yield statistics, meteorological data, and phenological information, were integrated. Sentinel-2 time series and vegetation indices tracked intra-seasonal changes in croplands, classifying drought-affected areas using random forest with severity thresholds derived from baseline conditions. Furthermore, machine learning and a two-source energy balance model were employed to derive daily ET at 20 m resolution. Hazard and impact data were linked to spatially explicit farming systems, enriched with socioeconomic and environmental information, to support comprehensive drought risk assessments. Active participation from key stakeholders, including the International Centre of Insect Physiology and Ecology (icipe), the Regional Centre for Mapping of Resources for Development (RCMRD), and the Ministry of Agriculture, played a critical role in the co-design and validation of these products. Several rounds of user validation were carried out, in which structured feedback was collected on the accuracy and usability of the outputs. Based on this feedback, the products were improved by changing the specifications and/or implementation steps. This ensured that the tools addressed local needs and enhanced stakeholders' capacity to use EO data for agricultural monitoring and reporting, complemented by further capacity-building efforts carried out by RCMRD. As part of routine reporting, the Ministry of Agriculture is planning to incorporate these datasets into its monthly Food Security and Agricultural Status Bulletin, improving decision-making and policy support. The project outputs also contribute to addressing data gaps in pest-climate interactions, supporting icipe’s climate-smart, pest-resilient push-pull technology. Policy reports were an additional output of the project, focusing on drought impact and risk assessment, as well as irrigation and water use. 
These reports provided actionable recommendations for integrating EO-derived insights into national agricultural policies and strategies. Training sessions and knowledge-sharing initiatives strengthened the capacity of local stakeholders to utilize EO data for agricultural monitoring and decision-making, ensuring the long-term sustainability of the project outputs. These advancements illustrate how EO-based solutions, supported by robust capacity building and co-development, can enhance drought monitoring and risk assessment.
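The random-forest classification step described above can be sketched in a few lines. The sketch below uses synthetic NDVI time series in place of the actual Sentinel-2 features; the dimensions, variable names, and the injected drought signal are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic monthly NDVI time series per cropland pixel; drought-affected
# pixels show suppressed greenness relative to a seasonal baseline.
rng = np.random.default_rng(42)
n_pixels, n_months = 500, 12
labels = rng.integers(0, 2, n_pixels)                 # 1 = drought-affected
baseline = 0.3 + 0.3 * np.sin(np.linspace(0, np.pi, n_months))
ndvi = baseline + rng.normal(0, 0.05, (n_pixels, n_months))
ndvi[labels == 1] -= 0.15                             # injected drought signal

X_train, X_test, y_train, y_test = train_test_split(
    ndvi, labels, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)   # near-perfect on this easy toy signal
```

The operational product additionally uses severity thresholds derived from baseline conditions, Sentinel-3 inputs, and ancillary meteorological data, none of which this toy reproduces.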
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Empowering Africa with Hyperspectral Data: Satellite Integration, Capacity Building, and Collaborative Research for Sustainable Agriculture

Authors: Jakub Dvořák, Petr Boháček
Affiliations: TRL Space
Satellite remote sensing technology is a cornerstone for research and development across Africa, providing critical data for addressing environmental, agricultural, and natural resource challenges. As a satellite integrator, TRL Space is committed to contributing to these efforts through its TRL Space Rwanda branch for local satellite manufacturing and through collaborations with local communities, experts, and international partners. This contribution highlights key initiatives that demonstrate the value of international partnerships in harnessing the potential of remote sensing for sustainable growth across the continent. A central element of TRL Space’s engagement in Africa is capacity building, both in satellite integration and in hyperspectral data analysis. Through initiatives like our upcoming workshop at the Rwanda Institute for Conservation Agriculture (RICA), we aim to empower local stakeholders with the knowledge and tools needed to use remote sensing technology effectively. This workshop, conducted in collaboration with experts from Charles University, will focus on critical skills such as reference data collection, hyperspectral data acquisition, and data analysis using locally acquired UAV datasets. By bridging technical expertise with local knowledge, the program enhances regional capacity for leveraging advanced Earth observation (EO) technologies. In parallel, TRL Space collaborates closely with local partners on research activities, including crop mapping and monitoring in Rwanda. Utilizing data from our hyperspectral satellite TROLL, we provide high-resolution insights into crop patterns at a national scale. With its unique combination of ~4.75 m spatial resolution and 32 adjustable VNIR bands, TROLL enables precision agriculture applications and enhances climate resilience. These efforts address key challenges in food security, sustainable land management, and environmental stewardship. 
The insights gained from these collaborations have the potential to extend beyond Rwanda, offering scalable solutions and shared expertise that can benefit the broader region. By integrating cutting-edge remote sensing technologies with local engagement and international expertise, these initiatives showcase the transformative power of cooperation in driving research and development across Africa.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: High-Resolution AI-Driven Crop Segmentation in Nyeri County, Kenya: Enhancing Agricultural Monitoring Through Deep Learning

Authors: Anna Bartels, Marcus Goebel, Dr. Bartholomew Thiong’o Kuria, Jun.-Prof. Dr. Andreas Rienow
Affiliations: Ruhr-University Bochum, Dedan Kimathi University of Technology
Artificial Intelligence (AI) is advancing the potential of Earth observation and geoinformation data. Deep learning methods are particularly widely used in land use and land cover mapping. They address challenges such as handling complex and high-dimensional input data, capturing spatial and temporal variability, automating feature extraction and classifying land cover classes. Accurate and continuous agricultural land use mapping is essential for sustainable land management, food security, and climate adaptation, particularly in regions where agriculture is a key economic driver. In Nyeri County, Kenya, agriculture drives the local economy and is characterised by heterogeneous, small-scale farming systems that depend heavily on rainfall. Climate variability exacerbates planning uncertainties for farmers, making accurate, continuous monitoring of crop patterns critical for food security and climate adaptation strategies. Additionally, with coffee and tea being valuable export products, political pressure for the traceability of these products is growing. The EU law adopted in 2023 defines clear regulations for the certification of export products that must be complied with, including verification that no deforestation has taken place. The presentation will introduce the implementation of a deep learning approach using high-resolution Jilin-1 satellite imagery to accurately map heterogeneous cash crops and to capture, monitor and analyse the dynamics in land use and crop patterns for effective decision-making. A neural network architecture was created, trained and validated on ground truth data for land use in the Muringato sub-catchment area. The data were sampled during a field campaign in January 2023 and pre-processed to create training data containing crop masks and Jilin-1 satellite imagery patches with a spatial resolution of 0.5 m. 
Among the different models tested, U-Net, with its encoder-decoder architecture and skip connections, enhances the network’s ability to capture local and global features, which is crucial for the heterogeneous landscapes typical of the agricultural structure in Nyeri County. With an overall accuracy of 0.985 and an Intersection over Union of 0.973, the U-Net model generated precise segmentation masks, enabling automated identification of crop fields. These results highlight the model's robustness in heterogeneous landscapes, offering a foundation for real-time agricultural monitoring systems in regions where traditional methods are challenging due to access limitations or financial constraints. In terms of total area (8524 ha), large coffee plantations (346 ha) slightly exceed the smaller and scattered tea fields (329 ha) in the prediction mask for the study area in Nyeri. These results underscore the importance of precise crop mapping in distinguishing between key agricultural products, particularly in regions with diverse farming systems. This automated and scalable approach not only supports real-time agricultural monitoring but also facilitates the automation of workflows for mapping valuable export products and ensuring traceability, which is crucial for complying with evolving certification standards. The automation of the workflow also makes it adaptable for use with other satellite data sources, offering flexibility for various agricultural contexts. Looking ahead, European satellite systems, including ESA’s Earth observation missions, have the potential to enhance this approach further. By providing additional high-resolution satellite data, these systems can support the automation of crop mapping, improve traceability for valuable export products, and assist in meeting evolving certification requirements, such as those set by the EU for sustainable agricultural practices. 
The synergy between AI-driven analysis and European space-based assets offers great promise for advancing agricultural monitoring, improving traceability, and ensuring compliance with international sustainability standards.
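For readers who want to reproduce the reported metrics with their own masks, overall accuracy and Intersection over Union (IoU) for a binary segmentation can be computed as below. The masks are tiny synthetic examples, not the Jilin-1 data; the study reports 0.985 and 0.973, respectively, for its U-Net:

```python
import numpy as np

def overall_accuracy(pred, truth):
    """Fraction of pixels where the predicted class matches the ground truth."""
    return float(np.mean(pred == truth))

def iou(pred, truth, cls=1):
    """Intersection over Union for one class of a segmentation mask."""
    inter = np.logical_and(pred == cls, truth == cls).sum()
    union = np.logical_or(pred == cls, truth == cls).sum()
    return float(inter / union) if union else 1.0

# 3x3 example: one pixel differs between prediction and ground truth
truth = np.array([[0, 1, 1],
                  [0, 1, 1],
                  [0, 0, 1]])
pred = np.array([[0, 1, 1],
                 [0, 1, 0],
                 [0, 0, 1]])

acc = overall_accuracy(pred, truth)   # 8 of 9 pixels agree
jac = iou(pred, truth)                # intersection 4, union 5
```

IoU penalizes missed and spurious crop pixels more sharply than overall accuracy, which is why both are usually reported together for imbalanced land cover classes.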
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Enhancing Pastoral Resilience in Northern Kenya through Integrated Use of Earth Observation and Local Knowledge

Authors: dr.ir. Anton Vrieling, Florian Ellsäßer, Claudia Paris, Luc Boerboom, Malumbo Chipofya, Alex Sayon Lengarite, Simon Piers Simpkin, Sake Godana Duba, Benjamin Loloju, Bob Sammy Munyoki Mwende, George Kinyanjui Ngugi, Clinton Ouma Ogola, Teyie Sharon
Affiliations: University of Twente, Faculty ITC, Mercy Corps Kenya, Save the Elephants, Directorate of Resource Surveys and Remote Sensing, National Drought Management Authority
Residents of northern Kenya's semi-arid rangelands face numerous challenges, including climate variability, land degradation, and resource conflicts. Their livelihoods are largely dependent on livestock, with seasonal herd migration being a main mechanism to ensure sufficient food and water intake for their animals. However, frequent droughts, flooding, erosion, disease outbreaks, and the proliferation of non-palatable invasive species jeopardize the sustainable provision of sufficient levels of ecosystem services to meet the needs of the various pastoral groups, while armed conflict over scarce resources is common. To strengthen the livestock sector and enhance the sustainable management of rangelands in northern Kenya, the Embassy of the Kingdom of the Netherlands in Kenya is funding a 5-year (2024-2028), €15 million project called RANGE (Resilient Approaches in Natural ranGeland Ecosystems). The project targets Isiolo, Marsabit, and Samburu counties and is led by the non-governmental organization Mercy Corps in partnership with a) the Frontier Counties Development Council (FCDC), a regional economic bloc in Kenya composed of county governments, and b) the University of Twente’s Faculty of Geo-Information Science and Earth Observation (ITC). Recognizing that effective decision-making requires high-quality data on rangeland conditions and use, ITC’s contribution will focus on supporting and improving existing Earth observation solutions for obtaining such data, leveraging both in-situ (sensors, surveys) and satellite-based sources. RANGE will also build capacity at county and sub-county level to enhance spatial planning through data collection and analysis. Beyond the consortium partners, we engage in active collaboration with county governments, mandated governmental institutes (e.g. DRSRS, NDMA), local universities, research organizations, and conservancies. 
The RANGE project integrates capacity building with research, supporting six Kenyan PhD candidates (all listed as co-authors) and nine MSc students. Their research will strengthen institutional partnerships and contribute to sustainable development in the region. Below, we outline the research focus of the six PhD candidates, who work on interconnected topics:
1) Exploring scalable technologies for assessing livestock dynamics, with the aim of contributing to enhanced planning of rangeland utilization. In collaboration with candidate 2, the project will aim to establish a LoRaWAN (Long Range Wide Area Network), building on previous efforts by the Northern Rangelands Trust (a membership organization of conservancies). LoRaWAN enables the transmission of small amounts of data over large distances, which will be used for livestock tracking. Additionally, high-resolution imagery from PlanetScope and Sentinel-2 will support spatial and temporal analysis of livestock enclosures, where animals are kept overnight for protection from wildlife and raiders.
2) LoRaWAN can support data transmission from a variety of sensors. A second candidate will focus on establishing a LoRaWAN-enabled sensor network to monitor weather, moisture, and forage conditions, providing better ground data and insights into the links between water availability and the status of rangeland vegetation. Automated photo-interpretation procedures, using data from phenoCams and existing transect surveys, will also be developed. These in-situ sources will be combined with satellite image time series to create more accurate assessments of forage conditions across large areas, supporting drought monitoring and insurance programs.
3) A third candidate will use existing household surveys collected monthly by NDMA to assess how precursor events may lead to impact; for example, how climatic fluctuations (teleconnections) result in meteorological drought, which then affects agricultural and socio-economic conditions. This will involve analyzing the relationship between remote-sensing-derived forage availability and household welfare indicators from the surveys. The insights gained will help design and test an improved drought forecasting system.
4) Understanding how drought affects pastoral resources requires identifying key areas for forage production and the timing of their use. Candidate 4 will map key dry- and wet-season grazing areas and herd migration patterns. This mapping will involve participatory input from communities and elders, and will assess how grazing areas have changed over time due to land tenure or climate shifts. The resulting spatial understanding will enhance satellite monitoring of forage scarcity and drought.
5) Multiple initiatives aim to improve the effectiveness of ecosystem services in northern Kenya’s rangelands, including support for grazing management planning, soil and water conservation, and invasive species control. However, evidence on the effectiveness of these interventions is often lacking. For example, water conservation measures may unintentionally promote the proliferation of invasive species. Candidate 5 will collaborate with local communities to identify rangeland health indicators for assessing intervention success. The candidate aims to scale part of those indicators to remotely sensed data to evaluate the long-term impacts of interventions over large areas. This work will provide recommendations for designing and scaling interventions across broader landscapes.
6) Improved data acquisition and analysis do not guarantee better decision making. Candidate 6 will develop a participatory regional planning tool aimed at promoting sustainable economic investments. Following Kenya’s 2010 Constitution, which decentralized government power and empowered counties, counties are now required to make five-year County Integrated Development Plans.
However, there is significant potential to better utilize spatial data (such as that collected by the other five PhD candidates) to enhance spatial planning at both the county and sub-county levels. While the RANGE project will leave multiple challenges unresolved, such as scaling data collection across all three counties and integrating data effectively into decision-making to design climate-adaptive interventions, we are confident in the partnership model. By jointly executing research and development with multiple Kenyan organizations, the project has the potential to create lasting impact, providing services and insights that can enhance the resilience of pastoralists in the northern rangelands.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: EOCap4Africa – Earth Observation in Africa: Capacity building in the field of remote sensing for the conservation of ecosystems and their services.

Authors: Insa Otte, Prof. Dr. Emily Bennitt, Prof. Dr. Eric Forkuo, Prof. Dr. Jean-Paul Kibambe Lubamba, Dr. Ange Félix Nsanziyera, Dr. Janina Kleemann, Dr. Doris Klein, Dr Martin Wegmann, Michael Thiel
Affiliations: University of Würzburg, Okavango Research Institute, University of Botswana, Kwame Nkrumah University of Science and Technology (KNUST), University of Kinshasa, Institute of Applied Sciences (INES), Martin Luther University Halle-Wittenberg, German Aerospace Center (DLR) - Data Center (DFD)
Remote sensing is an important tool for recording landscape changes and creating a basis for the management of ecosystems and their services. This is particularly relevant for the African continent. Climate change, population growth, pollution and a growing demand for natural resources are leading to rapid landscape changes in many African countries and regions, often to the detriment of natural ecosystems. Wetlands, as one example among various ecosystems, provide valuable services for the local population through the supply of food and drinking water, protect against droughts and floods, and provide habitats for a large number of protected animal and plant species. Remote sensing technologies enable an inventory of ecosystems and thus create a basis for sustainable management, restoration and sustainable use. Despite the immense potential for the use of remote sensing in wetland management in Africa, there is still a need for capacity development to further utilize the available technologies for management and recovery. The aim of the project “EOCap4Africa” is to strengthen the capacities of future conservation managers in the application of information generated by remote sensing for the protection and sustainable use of ecosystems, with a focus on wetlands and their services. We developed a curriculum for a Master's-level remote sensing module in close cooperation with our African partners from the university sector. The idea is to spread knowledge about the potential of remote sensing data via students of relevant courses and to increase its application in the medium term. In addition, the project pursues an approach in which senior and junior scientists and practitioners are integrated into the EO work at the African partners. On the one hand, this is intended to ensure professional excellence in the development of the module; on the other hand, the capacities of young scientists are increased and the exchange of knowledge and experience is promoted. 
In EOCap4Africa we cooperate with four partner institutions in Africa, namely the Kwame Nkrumah University of Science and Technology in Kumasi (Ghana), the University of Kinshasa (DR Congo), the Institute of Applied Science in Ruhengeri (INES; Rwanda) and the University of Botswana in Gaborone/Maun (Botswana).

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Investigating air pollution and climate change on the African continent

Authors: Pieternel Levelt, Prof. dr. Eloise Marais, dr. Helen Worden, dr. Wenfu Tang, dr David Edwards, Henk Eskes, Dr Pepijn Veefkind, dr Steve Brown, dr Collins Gameli Hodoli, dr Allison Hughes, dr Barry Lefer, dr Drobot Sheldon, Associate Research Professor Dan Westervelt
Affiliations: NSF NCAR, KNMI, TU Delft, University College London, NOAA CSL, Department of Built Environment at the University of Environment and Sustainable Development (UESD), Clean Air One Atmosphere, Department of Physics, School of Physical and Mathematical Sciences, College of Basic and Applied Sciences, University of Ghana, NASA HQ, Space & Mission Systems, BAE Systems, Inc., Columbia University
In the next few decades a large increase in population is expected to occur on the African continent, leading to a doubling of the current population, which will reach 2.5 billion by 2050. At the same time, Africa is experiencing substantial economic growth. As a result, air pollution and greenhouse gas emissions will increase considerably, with significant health impacts for people in Africa. In the decades ahead, Africa’s contribution to climate change and air pollution will become increasingly important. The time has come to determine the evolving role of Africa in global environmental change. We are building an Atmospheric Composition Virtual Constellation, as envisioned by the Committee on Earth Observation Satellites (CEOS), by adding to our polar satellites geostationary satellites in the Northern Hemisphere: GEMS over Asia (launched 2022), TEMPO over the USA (launched 2023) and Sentinel-4 over Europe, to be launched in the 2025 timeframe. However, there are currently no geostationary satellites envisioned over Africa and South America, where we expect the largest increase in emissions in the decades to come. At the recent ACVC CEOS meeting, extending the GEO constellation over the Global South was positively received. In this paper the scientific need for geostationary satellite measurements over Africa will be described, partly based on several recent research achievements related to Africa using space observations and modeling approaches, as well as first assessments using GEMS data over Asia and TEMPO data over the USA. Our ambition is to develop an integrated community effort to better characterize air quality and climate-related processes on the African continent.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Forecasting Agricultural Drought Impact in Africa through Machine Learning and Earth Observation

Authors: Koen De Vos, Sarah Gebruers, Jeroen Degerickx, Marian-Daniel Iordache, Jessica Keune, Francesca Di Giuseppe, Hendrik Wouters, Francesco Pereira, Else Swinnen, Koen Van Rossum, Laurent Tits
Affiliations: VITO, European Centre for Medium-Range Weather Forecasts (ECMWF)
Agricultural systems across Africa are becoming increasingly vulnerable to the impacts of climate change, including prolonged and severe droughts or dry spells, which threaten food security, economic stability, and social resilience. Alongside a rising demand for food due to a growing population, agriculture must adapt to increasingly frequent extreme weather conditions, while also minimizing its environmental footprint and ensuring sustainable resource use. The capacity to predict and manage agricultural droughts before they affect crops and livestock is therefore crucial for safeguarding food systems and the livelihoods of those relying on food production for income. In response to these challenges, this study presents a model that combines Earth Observation (EO) with machine learning (ML) techniques to estimate lower-tail anomalies in the Normalized Difference Vegetation Index (NDVI)—a key indicator of vegetation health used to monitor agricultural drought impacts. Focusing on croplands and grasslands in Mali, Mozambique, and Somalia, we developed a zone-based system that integrates near real-time satellite data with meteorological (re-)forecasts into a gradient-boosted autoregressive model. This approach allows for the prediction of NDVI anomalies up to three months in advance, offering valuable lead time for decision-making and resource allocation in drought-prone areas. Our model does this by combining environmental information such as soil moisture, elevation, and soil texture with information on meteorological droughts (e.g., SPI and SPEI), alongside phenological and land cover data, to better represent the expected impact. This integration improves predictive accuracy over conventional near real-time NDVI monitoring, substantially reducing the root mean square error (RMSE) across different time horizons (10 days, 1 month, and 3 months ahead).
These advancements represent a critical step towards transitioning from reactive monitoring of agricultural impact to proactive forecasting systems. By allowing for early assessment of agricultural drought impacts, our study supports the development of informed drought management strategies, helping stakeholders—from policymakers to farmers—make timely interventions. This capability is particularly important for regions reliant on rainfed agriculture, where the consequences of delayed drought responses are often severe. Additionally, the model’s integration of meteorological ensemble forecasts offers uncertainty quantification, further enhancing its value in risk assessment and resource planning. As such, this study contributes to ongoing efforts to build more resilient agricultural systems, aligning with global initiatives such as GEOGLAM and FEWS NET that aim to enhance food security through advanced monitoring and forecasting.
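The gradient-boosted autoregressive idea described above can be sketched in a few lines. The data below are synthetic stand-ins (random SPI, soil moisture and lagged NDVI anomalies), not the study's actual zone-based inputs, and the model configuration is an illustrative assumption:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)

# Synthetic stand-ins for zone-level predictors: a meteorological drought
# index (SPI-like), static soil moisture, and the lagged NDVI anomaly.
n = 500
spi = rng.normal(0, 1, n)
soil_moisture = rng.uniform(0.1, 0.4, n)
ndvi_anom_lag = 0.5 * spi + rng.normal(0, 0.1, n)

# Target: NDVI anomaly one step ahead, driven by its own lag and SPI
# (the autoregressive structure described in the abstract).
target = 0.6 * ndvi_anom_lag + 0.3 * spi + rng.normal(0, 0.05, n)

X = np.column_stack([ndvi_anom_lag, spi, soil_moisture])
train, test = slice(0, 400), slice(400, n)

model = GradientBoostingRegressor(n_estimators=200, max_depth=3, random_state=0)
model.fit(X[train], target[train])

pred = model.predict(X[test])
rmse = mean_squared_error(target[test], pred) ** 0.5
baseline = mean_squared_error(target[test], ndvi_anom_lag[test]) ** 0.5  # persistence
print(f"model RMSE {rmse:.3f} vs persistence {baseline:.3f}")
```

The persistence baseline here plays the role of conventional near real-time NDVI monitoring; comparing the two RMSEs mirrors the skill gain reported in the abstract.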

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Enhancing Sugarcane Stress Detection with Hyperspectral and Thermal Data: Insights from the PRISMA4AFRICA Project

Authors: Dr Roberta Bruno, Dr Raffaele Casa, Dr Francesca Fratarcangeli, Dr Saham Mirzaei, Francesco Palazzo, Dr Simone Pascucci, Dr Stefano Pignatti, Dr Nitesh Poona, Dr Chiara Pratola, Dr Zoltan Szantoi, Dr Alessia Tricomi
Affiliations: e-GEOS S.p.A, Department of Agriculture and Forestry Sciences (DAFNE), University of Tuscia, Institute of Methodologies for Environmental Analysis (IMAA)- Italian National Research Council (CNR), Serco S.P.A., South African Sugarcane Research Institute (SASRI), Science, Applications & Climate Department, European Space Agency (ESA)
The PRISMA4AFRICA project aims to establish a partnership between African and European organizations to advance the adoption and use of Earth Observation (EO) technologies for precision farming and food security. This initiative is designed to address user needs while leveraging the opportunities and challenges offered by recent EO data processing and modelling. In the framework of this project, we develop and disseminate tools based on thermal and hyperspectral EO data to detect plant stress, with a particular focus on stresses impacting sugarcane plantations. Sugarcane is widely cultivated in all the collaborating countries—Gabon, Mozambique, and South Africa—which are represented by AGEOS, INIR, IIAM, and SASRI, the African Early Adopters (AEA). The SASRI team has identified key stress factors affecting sugarcane, including yellow sugarcane aphid infestations and Eldana damage, while INIR is particularly interested in water stress. This collaboration is fundamental for validating the products using in-situ data collected in quasi real-time alongside the hyperspectral acquisitions. To this end, an online training session was organized to share the theory and practice of data collection with African collaborators. Hyperspectral and thermal data, generally less exploited than multispectral data due to their complexity and current limitations in terms of revisit time, nevertheless offer significant opportunities, enabling the retrieval of (a) crop biochemical and biophysical vegetation parameters (e.g., LAI, FAPAR, LCC/CCC and CWC), (b) soil properties (e.g., soil organic carbon - SOC), and (c) evapotranspiration (ET) and the Evaporative Stress Index (ESI).
The goal of the joint activity is to generate stress maps by combining outputs from different processing and modelling chains, using as input PRISMA (ASI Italian mission) and EnMAP (DLR German programme) for the hyperspectral (0.4-2.5 μm) dataset, and ECOSTRESS (ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station, on the ISS) and Landsat-8/9 for the thermal one (8-12 µm). Crop biochemical and biophysical parameter retrieval was achieved through a hybrid approach. The radiative transfer model PROSAIL was used to generate a training dataset encompassing different illumination and geometry configurations, which was then used to train Machine Learning Regression models (tree-based models and Gaussian Process Regression - GPR - models, depending on the target variables). These models have been validated in a different country, achieving promising results: RMSE = 0.38 m²/m², R² = 0.82 for LAI; RMSE = 0.093, R² = 0.805 for FAPAR; RMSE = 0.019 g/cm², R² = 0.77 for CWC; and RMSE = 0.38 µg/cm², R² = 0.695 for chlorophyll. SOC, in turn, was estimated using a 1-Dimensional Convolutional Neural Network (1D-CNN), trained on an extensive global PRISMA dataset combined with SOC values from the ICRAF and KSSL (https://soilspectroscopy.org/) spectral libraries. Transfer learning was subsequently applied to refine the retrieval for the specific areas of interest. A preliminary test of the methodology was conducted in South Africa, yielding an R² value of 0.47. ECOSTRESS L3/L4 standard products were exploited to extract information about ET and water stress. Unfortunately, these data are not always produced because of the lack of ancillary layers required by the ECOSTRESS processing chain for the ET calculation. For this purpose, an ad hoc workflow was set up to derive both albedo and LAI directly from PRISMA images.
To improve the spatial resolution, the Data Mining Sharpener (DMS) algorithm was successfully applied to sharpen the LST products using PRISMA-derived 30 m NDVI. While crop vegetation parameters and soil properties will be validated using in situ data collected by the AEA, the absence of eddy covariance or ET stations within the study areas will limit the evaluation of the retrieved ET/ESI to a qualitative assessment. Instead, this evaluation will involve comparisons with values from the FAO WaPOR portal (https://data.apps.fao.org/wapor) or cross-validation against data from better-instrumented reference sites. Preliminary results clearly show that the 30 m ET products derived by combining PRISMA and ECOSTRESS using the Priestley-Taylor Jet Propulsion Laboratory (PT-JPL) algorithm are of good quality in terms of dynamic range and spatial pattern. The PRISMA-ECOSTRESS ET product shows a high correlation with the ESA-STIC and NASA-PT-JPL products, with RMSEs of 32 and 19 W/m², respectively. To conclude, this study evaluates the potential of thermal and hyperspectral data for detecting stress and damage in crops such as sugarcane. By integrating these advanced technologies, it becomes possible to provide critical insights to enhance the resilience of plantations against various stressors and to contribute to food security efforts. Furthermore, with the upcoming hyperspectral missions such as CHIME (ESA) and SBG (NASA), as well as thermal missions such as LSTM (ESA), SBG-TIR (NASA), and TRISHNA (ISRO), these tools pave the way for the development of an operational monitoring system in the framework of precision farming and food security.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Assessing EO Maturity in Sub-Saharan Africa

Authors: Irena Drakopoulou, Lefteris Mamais, Peter Zeil, Cecilia
Affiliations: Evenflow
Africa has experienced significant economic growth in recent years, along with a marked increase in innovation output. For this growth to be sustained in the face of a host of challenges, informed decisions need to be made. Here EO has a major role to play, informing policy makers and supporting innovators across a wide range of sectors. For all this to materialise, however, it is vital that African countries have a solid understanding of their current capabilities and maturity in terms of EO. The EO Maturity Indicators (EOMI) framework, a proven and structured tool for tracking progress and identifying areas for growth in the EO sector, was utilized as part of the ongoing collaboration between the European Commission’s Directorate-General for International Partnerships (DG INTPA) and the African Union. This framework, previously employed in similar contexts in several countries in Europe, North Africa, the Middle East and South-East Asia, enables a comprehensive evaluation of EO maturity levels. To that end, our research assessed the EO landscape in nine Sub-Saharan African (SSA) countries: South Africa, Nigeria, Kenya, Rwanda, Gabon, Ivory Coast, Tanzania, Namibia, and Botswana. The study’s primary aim was to provide a clear and comprehensive understanding of the space sector’s status in these countries, with the goal of fostering EO-driven innovation and sustainable development. Using the tailored EO Maturity Indicators methodology, the research evaluated three fundamental pillars of the space sector: ecosystem capacity, infrastructure, and policy frameworks. A mixed-methods approach was adopted, combining stakeholder consultations, in-depth desk research, and validation by national experts to ensure context-specific and accurate insights. The findings for each of the indicators revealed a diverse range of EO maturity levels across the region.
South Africa and Nigeria emerged as regional leaders, showcasing strong policy frameworks, active academic ecosystems, and successful international collaborations that have significantly advanced their EO sectors. Other countries, such as Tanzania, Namibia, and Botswana, are at earlier stages of development. While they face several challenges, they also present opportunities for targeted investment and capacity building, which can unlock their future potential to fully leverage EO for national and regional benefit. Despite these disparities, the study identified promising opportunities for growth. Existing regional collaborations, international partnerships, and academic and training programs represent key strengths that can be further scaled. EO applications in critical areas such as agriculture, disaster management, urban planning, and climate monitoring underscore the potential for transformative socio-economic impact. Targeted investment in these applications can drive sustainable development and innovation across the region. To address the challenges, the study offers key recommendations, including enhancing funding mechanisms, fostering cross-border partnerships, and investing in capacity building at institutional and individual levels. Strengthening support for small and medium enterprises (SMEs) and closing gaps in research and innovation are equally critical. Furthermore, improving policy frameworks and infrastructure is essential to developing a resilient and competitive EO sector capable of meeting the diverse needs of Sub-Saharan Africa. This research underscores the strategic importance of Earth Observation in promoting sustainable growth, environmental stewardship, and improved livelihoods across Africa. By fostering regional and international cooperation, the EO sector can catalyze innovation and economic development while addressing global challenges.
The findings of this study provide a roadmap for policymakers, industry leaders, and academic stakeholders to align their efforts and fully realize the transformative potential of EO for the region. In that regard, the study has already informed upcoming investments by the European Commission in the context of EU/Africa space flagship programme, as well as country-specific action plans, such as those under development in Kenya. Finally, as widely recognized by involved stakeholders at country and regional level, the study provided them with an invaluable opportunity to further learn about each other’s activities in a structured way and drive positive developments going forward.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Perspectives on Critical Remote Sensing and Mixed Methods for Development Studies in Africa. Assessing the Land Dynamics of Middle Scale Farms in the Nacala Corridor, Mozambique.

Authors: Ricardo Gellert Paris, Jun.Prof. Dr. Andreas Rienow
Affiliations: Ruhr University Bochum
Remote sensing methods, such as satellite imagery and time-series analysis, have become indispensable in data-poor contexts, for instance in understanding Africa’s infrastructural, agricultural, and socio-spatial dynamics. While these tools are often celebrated for their objectivity and scalability, critical perspectives highlight their potential to reinforce dominant societal metanarratives about modernization and development. Instead, we argue for approaches that enable satellite imagery to contest these dominant discourses, fostering political debate rather than foreclosing it and opening pathways for more inclusive and participatory narratives. Our research investigates the socio-political potential of critical remote sensing applications through their integration with ethnographic and field-based methods, emphasizing the need for context-driven and participatory approaches. The Nacala Development Corridor is the major infrastructural project in Mozambique, connecting mining extraction sites to a newly built port. Alongside the logistics infrastructure, the government and international agencies implemented development projects to foster the agricultural sector, including land titling and technology transfer. In investigating these dynamics, we were confronted with two issues: i) the multiple spatial and temporal scales of territorial changes and ii) conflicting perceptions of autonomy and dependency among peasants and middle-scale farmers, reflecting broader tensions in resource access and control. To bridge the physical and social changes driven by mega infrastructural projects, we utilized remote sensing time series to identify the expansion of agricultural operations and land-use heterogeneity in disputed territories. Specifically, we analyzed 10 years of Sentinel-2 imagery alongside rainfall estimates (CHIRPS) and topographic data (SRTM) to assess land cover changes over time. Georeferenced field data further contextualized these observations.
We conducted a six-month fieldwork residency in collaboration with local research institutions, immersing ourselves in towns and villages along the Nacala Corridor. This immersive approach enabled us to build trust and foster meaningful engagement with local communities. Our methods included in-depth interviews with a diverse range of stakeholders and transect walks through selected study areas, allowing us to traverse the boundaries between commercial and family farming operations. This combination of techniques provided nuanced insights into the socio-spatial dynamics and power relations shaping land use and livelihoods along the Corridor. By intersecting remote sensing data with on-the-ground narratives and experiences, we question not only the spatial dynamics of agricultural intensification but also who has access to the benefits and how they are shared among local actors. Stable production and the mitigation of risks related to weather unpredictability, we argue, rely on access to natural resources and technology. Middle-scale farms meet this condition, for example, achieving up to a 180% increase in median NDVI values after the implementation of mechanized irrigation. However, this access is often mediated by established power structures that favour certain groups over others. By critically assessing the strengths and limitations of combining remote sensing with ethnographic fieldwork, this research reframes remote sensing as more than a neutral tool for data extraction. Instead, it demonstrates its potential as a platform to amplify marginalized voices, challenge inequities, and contribute to more inclusive and sustainable development practices.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: FAO PLAN-T: Advancing Climate Adaptation for Maize Cultivation in Zambia with Innovative Tools and Methodologies for Better Decision-Making

Authors: Dr. Ramiro Marco Figuera, Kimani Bellotto, Melih Uraz, Giulio Genova, Marco Venturini, Alessandro Moser, Marcello Petitta, Dr. Sandra Corsi, Michela Corvino, Zoltan Szantoi
Affiliations: SISTEMA GmbH, AMIGO Climate, FAO, ESA
The FAO PLAN-T project is an initiative aimed at improving climate adaptation strategies for maize cultivation in Zambia by leveraging advanced climate and agronomic data. The project focuses on assessing crop yield potential for nine maize varieties across the country, utilizing high-resolution ERA5-Land climate data and FAO WaPOR to evaluate climate variables critical to maize growth, in combination with soil water retention characteristics, fertility and salinity. With a detailed spatial resolution of 250 m per pixel, the project assesses maize yield on a pixel-by-pixel basis, providing localized insights to farmers and policymakers. The FAO's AquaCrop model is a key component of the project, utilizing precipitation, air temperature, evapotranspiration, soil water retention characteristics, soil fertility, and soil salinity to assess crop performance. For each pixel, AquaCrop examines three climate scenarios—dry, average, and wet—and computes the mean to determine optimal sowing dates and maximum yields for each maize variety. The model generates detailed maps that display expected crop yields and recommended sowing dates, providing guidance on the best planting times to maximize yield potential. An interactive web-based application enables users to select specific locations and dates to assess suitability for planting based on real-time and forecast soil moisture and water retention data. This decision-support tool allows farmers to adjust planting decisions based on current soil and moisture conditions and high-resolution weather forecasts (up to 10 days) from ECMWF, improving resilience to climatic fluctuations. To further support farmers, the project has developed an innovative module that quantifies risks from extreme climate events during maize growth stages. This module evaluates the impact of climate stressors on each phenological phase of maize growth, from planting to the latest growth stage.
Working in near-real time, it enables farmers to predict potential yield losses and implement timely adaptation measures, improving decision-making capabilities in response to climate variability. Moreover, it allows the identification of the prevailing climate baseline (dry, average or wet) throughout the growth season and the detection of any emerging climate scenarios. The FAO PLAN-T project represents a significant advancement in agricultural decision-support tools, providing localized, data-driven insights that empower Zambian farmers to optimize maize yields and build climate resilience.
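The per-pixel scenario logic described above (three climate scenarios evaluated, their mean computed, and the sowing date maximizing mean yield selected) can be sketched as follows; the yield values are random placeholders standing in for AquaCrop output, not real simulations:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical per-pixel yield simulations (t/ha): one row per candidate
# sowing decad, one column per climate scenario, as in the abstract.
sowing_dates = np.arange(10)                 # e.g. 10 candidate sowing decads
scenarios = ["dry", "average", "wet"]
yields = rng.uniform(1.0, 6.0, (sowing_dates.size, len(scenarios)))

# Mean across the three scenarios, then pick the sowing date maximizing it.
mean_yield = yields.mean(axis=1)
best = int(np.argmax(mean_yield))
print(f"best sowing decad: {best}, expected yield {mean_yield[best]:.2f} t/ha")
```

Repeating this selection for every 250 m pixel is what produces the recommended-sowing-date maps the abstract describes.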

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Earth Observation-Based Characterization of Social-Ecological Systems in the Kavango-Zambezi Transfrontier Conservation Area

Authors: Achim Röder, M.Sc. Jari Mahler, M.Sc. Chidinma Akah, M.Sc. Henrike Dierkes, M.Sc. Aaron Nikultsev, JProf. Dr. David Frantz, Prof. Dr. Richard Fynn, Dr. Stephanie Domptail, M.Sc. Sakeus Kadhikwa, Prof. Dr. Jonathan Kamwi, Prof. Dr. Nichola Knox, Prof. Dr. Vincent R. Nyirenda, Dr. Antonio Chipita
Affiliations: Trier University, Environmental Remote Sensing and Geoinformatics; Trier University, Geoinformatics - Spatial Data Science; University of Botswana, Okavango Research Institute; University of Gießen, Institute for Agricultural Policy and Market Research; Namibia University of Science and Technology, Dep. of Geo-Spatial Sciences and Technology; The Copperbelt University, School of Natural Resources; Associacao de Conservacio do Ambiente Desenvolvimento Integrado Rural
The Kavango-Zambezi Transfrontier Conservation Area (KaZa-TFCA), established in 2012, spans five countries and is one of the largest transboundary protected areas worldwide, including iconic national parks such as the Okavango Delta and Kafue National Park. It is a network of protected areas whose goal is to leverage biodiversity conservation while at the same time sustainably managing the Kavango-Zambezi ecosystem and supporting the livelihoods of its 3 million people. The vast majority of these livelihoods depend on small-scale farming, often in highly dynamic setups combining shifting cultivation and horticulture. Market integration is very limited, and more intensive (irrigation) agriculture only plays a role in some parts of Zambia. Conflicts over resources, ongoing human-wildlife conflicts and failing conservation and development objectives jeopardize the effectiveness of KAZA. The SASSCAL-II project ELNAC (Enhanced Livelihoods and Natural Resource Management under Accelerated Climate Change – A Large Landscape Social-Ecological Systems Approach) addresses these issues by bringing together ecological, socio-economic and geoinformatics research with diverse stakeholders to support the implementation of community-based natural resource management (CBNRM) concepts as an efficient means of empowering local communities. It is based on the assumption that the current conservation paradigm often stigmatises local communities as degrading agents and excludes them from managing the natural resources surrounding them. This in turn may compromise conservation outcomes, since disenfranchised communities will resist conservation objectives. While CBNRM is mostly local, one goal of ELNAC's earth observation component is to develop data products for use in diverse applications at different scales. This is essential, since understanding and managing socio-ecological systems requires a robust data foundation.
In this context, earth observation plays a critical role in bridging knowledge gaps by providing consistent, large-scale data that can complement local insights. By integrating advanced remote sensing technologies, ELNAC enhances the ability to monitor ecological and land-use changes, enabling informed decision-making at both community and regional levels. The FORCE framework (Framework for Operational Radiometric Correction for Environmental monitoring) was utilized to establish a Landsat- and Sentinel-2-based datacube for the entire region. After radiometric and topographic processing, all images were organized in a consistent tiling structure, amounting to ~3 million images within ~3,500 30 × 30 km² tiles. These include surface reflectance, cloud and cloud shadow masks and other auxiliary data. The datacube supported the development of different level-3 products, such as land surface phenology metrics (LSP), spectral-temporal metrics (STM) and best-available-pixel composites (BAP), which form the basis for a wide range of analyses. Our analytical framework consists of three components to characterize the social-ecological system of the region: i) vegetation structure as a key resource for livelihoods and wildlife; ii) a conceptual look at the large-scale transformation frontiers and related patterns; and iii) production and fallow cycles in agricultural systems. To map vegetation structure, data from the Global Ecosystem Dynamics Investigation (GEDI) was used in combination with land surface phenology and spectral-temporal metrics derived from Sentinel-2 and extensive field work. Random forest models were then used to produce maps of canopy cover, height and foliage height diversity across the Okavango Delta. Despite GEDI being developed for forest systems, we found the results to also represent well the structures in the flooded grasslands of the Delta region.
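The random-forest structure mapping described above (GEDI-derived reference values regressed on spectral-temporal predictors) can be sketched with synthetic data; the metrics and GEDI-style canopy heights below are hypothetical stand-ins, not ELNAC data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)

# Synthetic stand-ins: six spectral-temporal metrics (e.g. NDVI percentiles)
# as predictors, GEDI-style canopy height (m) as the sparse reference target.
n = 1000
stm = rng.uniform(0, 1, (n, 6))
height = 25 * stm[:, 0] + 10 * stm[:, 1] + rng.normal(0, 2, n)

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(stm[:800], height[:800])          # train where GEDI footprints exist
pred = rf.predict(stm[800:])             # predict wall-to-wall elsewhere
r2 = r2_score(height[800:], pred)
print(f"canopy-height R2 on held-out samples: {r2:.2f}")
```

Training on the sparse GEDI footprints and predicting on the continuous metric grid is what turns point-based lidar samples into wall-to-wall structure maps.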
To characterize agricultural systems, different mapping approaches were implemented based on Landsat time series, since these allow evaluating the time span between 1990 and 2023 (earlier years were discarded due to a shortage of data). The analysis of land use and land cover conversions made use of the frontier metric concept developed for South America (Baumann et al. 2023, Environmental Research Letters), which we adapted to reflect the finer texture of the processes found in the KAZA region. We found six metrics to represent the transformation patterns in the region well: year of deforestation onset, speed, diffusion and activity of deforestation, and land use after deforestation. In contrast to other parts of the world, leapfrogging of the deforestation frontier was not found to play a major role, while regularly progressing frontiers dominated. Improving agricultural practices and mitigating human-wildlife conflicts is one of the most important objectives of KAZA, necessitating monitoring of the effectiveness of implemented measures and suggestions for improved resource use. Thus, besides mapping and characterizing agriculture-related deforestation, it is of equal importance to better understand the dynamics within agricultural systems. Using dry-season STMs, we mapped major land use and land cover categories for the 1990 to 2023 period, and then applied the LandTrendr time series algorithm (Kennedy et al. 2010, Remote Sensing of Environment) to the cropland probabilities to smooth and segment the time series. These segments were then linked to the major production phases in small-scale agricultural systems, and - complemented by extensive surveys carried out in communities in Namibia, Zambia and Angola - we found much higher dynamics than traditionally assumed in many areas. These manifest in shorter and more frequent cropping-fallow cycles on existing fields and the gradual development of new cropping areas.
It is particularly noteworthy that often these also extend into areas assigned to other uses in regional land use zoning schemes, casting doubt on the effectiveness of regional land use planning.
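The segmentation of cropland probabilities described above can be illustrated with a heavily simplified, single-breakpoint analogue of LandTrendr-style temporal segmentation; the synthetic series and the one-vertex fitting rule are illustrative assumptions only, not the actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic annual cropland-probability series (1990-2023): a cultivation
# phase followed by abandonment to fallow, echoing the cropping-fallow
# cycles discussed in the abstract.
years = np.arange(1990, 2024)
prob = np.where(years < 2008, 0.8, 0.2) + rng.normal(0, 0.05, years.size)

def best_breakpoint(y):
    """Single-vertex segmentation: choose the split minimizing the summed
    within-segment SSE of two constant fits (a toy LandTrendr-like step)."""
    errors = []
    for b in range(2, y.size - 2):
        sse = ((y[:b] - y[:b].mean()) ** 2).sum() + ((y[b:] - y[b:].mean()) ** 2).sum()
        errors.append((sse, b))
    return min(errors)[1]

b = best_breakpoint(prob)
print(f"detected transition year: {years[b]}")
```

The real algorithm fits multiple vertices and linear (not constant) segments, which is what lets it recover repeated short cropping-fallow cycles rather than a single transition.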

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Satellite observations for supporting air quality monitoring in East Africa

Authors: Anu-Maija Sundström, Pie Celestin Hakizimana, Deborah Nibagwire, Dessydery Mngao, Katja Lovén, Henrik Virta, Iolanda Ialongo, Seppo Hassinen
Affiliations: Finnish Meteorological Institute, Rwanda Environment Management Authority, Tanzanian Meteorological Authority
Significant advancements in space-based atmospheric composition monitoring have created new opportunities to utilize satellite data in various societal applications, such as supporting air quality monitoring or assessing the impacts of air pollution on public health. To fully leverage the potential of satellite observations, active collaboration between the scientific community and stakeholders is essential. The role of satellite observations in supporting air quality monitoring is especially valuable in Africa, where rapidly growing cities face increasing air pollution levels but ground-based air quality measurements are often very limited or unavailable. The Finnish Meteorological Institute’s project FINKERAT, funded by the Ministry for Foreign Affairs of Finland, aims to increase East African societies’ preparedness for extreme weather events and to improve air quality monitoring in Kenya, Rwanda and Tanzania. Satellite observations play a key role in this project in supporting the assessment of air quality in each country, by providing information on various air quality related parameters, especially over those areas where ground-based observations are not available. Satellite observations of aerosols, fires, and trace gases provide valuable information on emission hotspots, seasonal pollutant variations, and long-term trends over East Africa. Special focus has been placed on aerosol observations, as the main pollutant in the area is often particulate matter. In this work, the main outcomes of the long-term satellite observation analysis are presented, together with how these observations can be used to support air quality monitoring. Capacity building is also an essential part of the FINKERAT project, and several hands-on training sessions on satellite data analysis have been organized by the FMI in Kigali, Nairobi, and Dar es Salaam as well as in Helsinki.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A.08.01 - POSTER - Advances in Swath Altimetry

The NASA and CNES Surface Water and Ocean Topography (SWOT) mission, launched in December 2022, is the first in-orbit demonstration of a swath altimeter. The SWOT mission has revealed the capability of swath altimeters to measure ocean and inland water topography in an unprecedented manner. The onboard Ka-band Radar Interferometer (KaRIn) observes wide-swath sea surface height (SSH) with a sub-centimetre error. It is already unveiling the small mesoscale ocean circulation that is missing from current satellite altimetry. SWOT has already carried out a satellite calibration and validation (Cal/Val) campaign including ground truth and airborne campaigns.
ESA’s Sentinel-3 Next Generation Topography (S3NGT) mission is being designed as a pair of two large spacecraft carrying nadir-looking synthetic aperture radar (SAR) altimeters and across-track interferometers, enabling a total swath of 120 km, in addition to a three-beam radiometer for wet tropospheric correction across the swath and a highly performant POD and AOCS suite.
With a tentative launch date of 2032, the S3NGT mission will provide enhanced continuity to the altimetry component of the current Sentinel-3 constellation, with open ocean, coastal zones, hydrology, sea ice and land ice, all as primary objectives of the mission.
This session is dedicated to the presentation of advances in swath altimetry (including airborne campaigns) and the application of swath altimetry to the primary objectives of the mission, i.e. open ocean and coastal processes observation, hydrology, sea ice and land ice. We also invite submissions for investigations that extend beyond these primary objectives, such as the analysis of ocean wave spectra, internal waves, geostrophic currents, and air-sea interaction phenomena within swath altimeter data.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: On the assessment of swath altimetry spectral requirements: lessons learned from the SWOT Cal/Val phase

Authors: Francesco Nencioli, Matthias Raynal, Clément Ubelmann, Emeline Cadier, Pierre Prandi, Gerald Dibarboure
Affiliations: Collecte Localisation Satellites, Centre National d'Etudes Spatiales, Datlas
The launch of the Surface Water and Ocean Topography (SWOT) mission on 16 December 2022 opened the era of wide-swath altimetry. One of the main objectives of SWOT's new Ka-band Radar Interferometer (KaRIn) is to resolve two-dimensional sea surface height signals over a swath of 120 km and down to 15-30 km wavelengths, well below those that can be observed from the current nadir altimeter constellation. Reaching this objective required a major paradigm shift in terms of mission error requirements which, for the first time, were defined in spectral form. Cal/Val activities performed during the first two years of the SWOT mission proved that assessing such requirements is a particularly challenging task, since it demands simultaneous and co-located independent SSH observations over up to 1000 km along the SWOT swath. SWOT Cal/Val activities included various approaches based on comparisons with in-situ measurements (e.g. a dedicated mooring array and airborne lidar) as well as with remote sensing observations. Here we will mostly focus on the comparison with SWOT nadir and Sentinel-3 SRAL sea surface observations. Despite the coarser resolution and the higher noise levels of those along-track nadir observations, our comparison showed very good overall KaRIn performance: specifically, minimum resolvable scales reduced tenfold and error magnitudes below the measured signal at all scales. Furthermore, SWOT observations seem to indicate that below 100 km ocean processes are more energetic than what could be inferred from traditional nadir altimeters, implying that mission requirements, as currently defined, are likely too stringent at those scales. Despite the good results, these first analyses evidenced non-negligible limitations associated with each approach. The largest source of uncertainty of each approach comes from the lack of reference measurements that are exactly simultaneous and co-located with the SWOT ones.
Because of that, natural ocean variability (spatial and/or temporal) must be accounted for when estimating SWOT error spectra. Estimating this contribution is non-trivial, especially down to the small scales observed by SWOT. Observations from the initial “fast-repeating” phase of the mission (1-day repeat orbit) proved to be extremely important to assess the natural ocean variability for the satellite-based approaches. The daily-repeating observations were also extremely valuable for the in-situ-based approaches, since they allowed the error spectrum to be reconstructed by averaging several realizations of otherwise noisy individual spectra. Overall, evaluating the error spectra at scales smaller than 100 km remained a challenging task for satellite-based approaches. Those scales are poorly resolved by nadir observations and, although partially resolved by SAR altimetry, are characterized by fast decorrelation scales in both space and time, making them particularly elusive to methods relying on spectral differences. Results from in-situ experiments represent an important complementary source of information at those scales, even though the short duration and localized spatial extent of field campaigns limit the frequency range, resolution and accuracy at which the error spectra can be estimated. All these lessons learned are particularly relevant in the perspective of future swath altimetry missions (such as Sentinel-3 Next Generation Topography) and should be taken into consideration when defining future mission spectral requirements and how they are assessed.
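For readers reproducing this kind of diagnostic, the averaging strategy described above (reconstructing an error spectrum from several noisy single-pass spectra) can be sketched in a few lines. This is an illustrative simplification, not the actual Cal/Val processing; the function name `error_psd`, the normalisation choices, and the assumption that consecutive 1-day repeats observe a nearly frozen ocean are all ours.

```python
import numpy as np

def error_psd(ssh_passes, dx_km):
    """Estimate an along-track error power spectral density (PSD) by
    averaging windowed periodograms of differences between repeated,
    co-located SSH passes (array of shape n_passes x n_points).
    Assumes ocean change between consecutive repeats is negligible,
    so pass-to-pass differences are dominated by measurement error."""
    diffs = np.diff(ssh_passes, axis=0)       # consecutive-pass differences
    n = diffs.shape[1]
    window = np.hanning(n)
    norm = (window ** 2).sum()
    psds = []
    for d in diffs:
        spec = np.fft.rfft((d - d.mean()) * window)
        # differencing two independent error realizations doubles the
        # error variance, hence the factor 1/2
        p = np.abs(spec) ** 2 * dx_km / norm / 2.0
        p[1:-1] *= 2.0                        # one-sided spectrum
        psds.append(p)
    freqs = np.fft.rfftfreq(n, d=dx_km)       # cycles per km
    return freqs, np.mean(psds, axis=0)
```

Averaging over many pass differences reduces the variance of any single noisy periodogram, which is exactly why the fast-repeating phase was so valuable for this exercise.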
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A CNN-Based Approach for Improving SWOT-Derived Sea Level Observations Using Drifter Velocities

Authors: Sarah Asdar, Bruno Buongiorno
Affiliations: CNR - Istituto di Scienze Marine
Satellite altimetry has fundamentally transformed our understanding of ocean dynamics by providing extensive coverage of sea surface height (SSH) data. The launch of the Surface Water and Ocean Topography (SWOT) mission on December 16, 2022, marked a significant milestone, offering unprecedented spatial resolution of sea level anomalies (SLA). With the ability to resolve features down to scales of 15–20 km, SWOT captures fine-scale ocean processes, including internal waves and tides, which can dominate the signal at these smaller scales. However, at smaller scales, the geostrophic approximation becomes less valid as other dynamics become more dominant (e.g., nonlinear advection and ageostrophic motions), posing challenges for deriving geostrophic velocities, which assume a balance between the Coriolis force and pressure gradients, a condition typically applicable to large-scale flows. SWOT data are further impacted by instrumental noise and processing artefacts, which disproportionately affect smaller spatial scales, as well as by the aliasing of high-frequency signals, such as tides and inertial oscillations. These factors necessitate robust filtering techniques to isolate low-frequency geostrophic flows. Moreover, deriving geostrophic velocities requires taking spatial derivatives of SSH, a process that amplifies high-frequency noise and underscores the need for effective smoothing strategies to reduce this amplification and ensure reliable velocity estimates. To address these limitations and leverage advancements in machine learning, we developed a convolutional neural network (CNN)-based filtering technique to enhance the accuracy of satellite-derived sea level data. CNNs, known for their efficacy in capturing complex spatial patterns, form the backbone of our methodology.
Our primary goal is to reduce the error between geostrophic velocities calculated from SWOT SLA and in-situ velocity measurements from drifters (from the Global Drifter Program), thereby generating refined sea level maps as outputs. A key innovation of our approach lies in the custom loss function developed for the CNN model, explicitly tailored to minimize velocity discrepancies. By integrating drifter data to constrain the satellite-derived velocities, our approach ensures that the resulting sea level fields are more representative of actual oceanic conditions. This strategy not only improves SWOT-derived observations but also addresses longstanding challenges in remote sensing-based oceanography. By combining satellite capabilities with advanced machine learning techniques, we present a powerful framework for improving SWOT-derived observations globally. This work provides a clear example of how machine learning can address critical challenges in oceanography, advancing our capacity to monitor and understand global ocean circulation dynamics.
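For context, the geostrophic velocities referred to above follow from the balance between the Coriolis force and the pressure gradient: u = -(g/f) ∂η/∂y, v = (g/f) ∂η/∂x. A minimal finite-difference sketch on an f-plane with a regular grid follows; the function name and grid conventions are our assumptions, not the authors' code.

```python
import numpy as np

G = 9.81           # gravitational acceleration, m/s^2
OMEGA = 7.2921e-5  # Earth rotation rate, rad/s

def geostrophic_velocity(ssh, dx, dy, lat_deg):
    """Geostrophic velocities (u, v) in m/s from a 2D SSH grid (metres).
    Axis 0 is y (northward), axis 1 is x (eastward); dx, dy are grid
    spacings in metres; lat_deg is the central latitude (f-plane).
    Implements u = -(g/f) dSSH/dy and v = (g/f) dSSH/dx."""
    f = 2.0 * OMEGA * np.sin(np.deg2rad(lat_deg))  # Coriolis parameter
    deta_dy, deta_dx = np.gradient(ssh, dy, dx)    # centred differences
    u = -(G / f) * deta_dy
    v = (G / f) * deta_dx
    return u, v
```

Because the derivatives amplify grid-scale noise, in practice the SSH field would be smoothed or filtered (as the abstract's CNN does) before this step.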
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Exploring the Capabilities of SWOT KaRIn for Monitoring Lake Ice and Snow Depth

Authors: Jaya Sree Mugunthan, Claude Duguay, Dr Benjamin M Jones, Justin Murfitt, Elena Zakharova
Affiliations: H2O Geomatics, University of Waterloo, Hydro-EO, University of Alaska Fairbanks, EOLA
Lakes are a vital component of the Earth’s hydrological and climate systems. As highly sensitive indicators of climate change, lakes are classified by the Global Climate Observing System (GCOS) as an essential climate variable (ECV), with lake ice cover (LIC) and lake ice thickness (LIT) being two of its thematic products. In northern high-latitude regions, where lakes cover a significant portion of the landscape, the presence/absence of LIC and its thickness influence local/regional weather patterns, climate dynamics, hydrological processes, permafrost conditions, transport between northern communities, recreational activities, and tourism. Given their importance, accurate and frequent monitoring of LIC and LIT is critical. However, there has been a significant decline in field measurements of lake ice and overlying snow properties over recent decades. This reduction underscores the need for alternative monitoring approaches. Additionally, there has been a long-standing need for retrieving snow properties overlying lake ice, namely snow depth and snow mass (the product of snow depth and snow density), to improve the simulation of LIT from lake models used in standalone mode or as lake parameterization schemes in numerical weather forecasting and climate models. Despite advancements in the retrieval of LIC and LIT from optical and microwave (Ku- to L-band) satellite remote sensing data, including radar altimetry, the sensitivity of Ka-band observations to snow-covered lake ice remains largely unexplored. With the high-resolution wide-swath altimetry measurements provided by the Surface Water and Ocean Topography (SWOT) mission’s novel Ka-band Radar Interferometer (KaRIn), the above-mentioned knowledge gap could be addressed. This study builds on our prior work where we demonstrated the sensitivity of SWOT KaRIn signals to lake ice and overlying snow during the Cal/Val period, focusing on Teshekpuk Lake in Alaska. 
This period, however, was limited to late ice growth and break-up phases. In the current study, we extend the temporal scope to include both the Cal/Val and Scientific Phases, thereby capturing the complete ice phenology cycle—from initial freeze-up to the transition to ice-free conditions. Besides Teshekpuk Lake, this study also investigates the Dettah Ice Road, a critical winter transportation route connecting communities in Canada’s Northwest Territories. By exploring KaRIn-derived parameters including height and backscatter, we further investigate spatio-temporal patterns observed during the ice phenology period with a focus on snow accumulation and underlying surface ice properties. To better understand and support the KaRIn results, we use complementary satellite, meteorological station and field campaign data. The outcomes of this study will benefit researchers working on the estimation of lake water level, LIC, and LIT from radar altimetry and numerical lake models. Keywords: SWOT, lakes, lake ice, snow, wide-swath altimetry
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: SWOT-KaRIn Level-3 and Level-4 Algorithms and Products Overview

Authors: Cécile Anadon, Anaëlle Treboutte, Robin Chevrier, Antoine Delepoulle, Clément Ubelmann, Maxime Ballarotta, Marie-Isabelle Pujol, Gérald Dibarboure
Affiliations: Collecte Localisation Satellites (CLS), Datlas, Centre National d'Etudes Spatiales (CNES)
The DUACS system (Data Unification and Altimeter Combination System) produces, as part of the CNES/SALP project, the Copernicus Marine Service and the Copernicus Climate Change Service, high-quality multi-mission altimetry Sea Level products for oceanographic applications, climate forecasting centers, and the geophysics and biology communities. These products consist of directly usable and easy-to-manipulate Level-3 (L3; along-track cross-calibrated SSHA) and Level-4 products (L4; multiple sensors merged as maps or time series). The Level-3 algorithms used for nadir altimeters have been extended to handle SWOT’s unique swath-altimeter data: upgrades with state-of-the-art Level-2 corrections and models from the research community, a data-driven and statistical approach to the removal of spurious and suspicious pixels, a multi-satellite calibration process that leverages the strengths of the pre-existing nadir altimeter constellation, and a noise-mitigation algorithm based on a convolutional neural network. The objective of this presentation is to present the unique features of the Level-3 algorithms and datasets and the regular changes made twice a year through reprocessing.
The changes introduced by version 2 of the L3 products, published in December 2024/January 2025, are as follows:
- Geophysical standards changes:
  - Mean Sea Surface model 2024
  - Internal tides model HRET14
  - Quick fix of the SSB/SSHA offset in polar transitions
  - Addition of a 5 cm offset on MDT and ADT to be consistent with other L3 products
- Coverage improved:
  - Eclipse data gaps retrieved with good quality
  - Polar and coastal regions
- Cross-calibration improved, especially for coastal areas and polar seas
- Coastline and distance to coast improved
- Addition of surface classification (ice/leads) in the editing flag
- Addition of new variables:
  - Unfiltered geostrophic velocities
  - Internal tide model
  - Cross-track distance

2D topography images from SWOT have been added to nadir altimeter data inside mapping algorithms (MIOST, 4DvarNET, 4DvarQG) to produce Level-4 products. The wide-swath data provided by the SWOT mission help to reduce mapping errors, mainly in energetic ocean currents, to better position oceanic structures (eddies, fronts…) and to achieve finer resolution in maps.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: SWOT's contribution to the study of coastal ocean circulation, and more specifically the North Current (NW Mediterranean Sea)

Authors: Léna Tolu, Florence Birol, Claude Estournel, Fabien Léger, Mathilde Cancet, Rosemary Morrow
Affiliations: CNRS-LEGOS, Université de Toulouse
The monitoring of ocean currents is a key component in many coastal applications, ranging from biogeochemical resources to marine pollution or search and rescue. During the last three decades, satellite altimetry has played an essential role in the understanding and monitoring of ocean currents at global scale. But its use is still limited in coastal areas due to poorer data quality as we approach the coast, and a spatio-temporal data resolution considered coarse relative to the scales of coastal dynamical features. However, many recent studies addressing the different issues related to the derivation and exploitation of altimeter-derived coastal current velocities have shown that they efficiently complement coastal velocity fields derived from in-situ data (e.g., hydrographic observations, surface drifters and moored or ship-based acoustic Doppler velocities) or from shore‐based HF radars. Indeed, one of the major advantages of this measurement technique is to provide long time series (i.e. > 30 years) of spatially and temporally homogeneous information about the circulation and to be available at near-global scale. The coastal altimetry data quality problem can be partially overcome thanks to dedicated processing with adequate corrections. Additionally, merging data from multiple missions has been shown to improve the spatial and temporal resolution. However, few data sets including coastal processing and several altimetry missions exist. The SWOT mission represents the beginning of a new class of altimeters. Associated with substantial improvements in terms of spatial resolution (including in 2D, while all other altimetry missions provide 1D information) and data accuracy, it could considerably change the situation in terms of coastal applications. In this study, we assess and quantify the ability of SWOT to observe coastal currents compared with conventional nadir missions on a case study: the Northern Current (NW Mediterranean Sea).
In particular, we take advantage of the 1-day repeat orbit during the Fast Sampling Phase as a prototype to explore what such temporal resolution can bring to coastal oceanography.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: An enhanced Mean Sea Surface model developed by combining SWOT KaRIn and nadir altimetry data

Authors: Rémy Charayron, Philippe Schaeffer, Maxime Ballarotta, Antoine Delepoulle, Alice Laloue, Marie-Isabelle Pujol, Gerald Dibarboure
Affiliations: Collecte Localisation Satellites, Centre National d'Etudes Spatiales
The data from the Ka-band Radar Interferometer (KaRIn) instrument on the Surface Water and Ocean Topography (SWOT) mission is expected to mark a significant breakthrough in our understanding of the oceans. SWOT KaRIn offers two key advantages. First, it provides observations with unprecedented precision, enabling better resolution of small-scale ocean features. Second, it delivers two-dimensional observations, allowing ocean features to be viewed in their entirety, unlike traditional one-dimensional nadir altimeter data, which only offer cross-sectional views. In particular, SWOT KaRIn data is expected to help in the development of an enhanced Mean Sea Surface (MSS) model, which is essential for improving the precision of Sea Level Anomaly (SLA) measurements. This study introduces a novel MSS model derived from the integration of SWOT KaRIn data and 30 years of nadir altimetry observations. By leveraging the unparalleled spatial resolution of SWOT with the long-term temporal coverage of nadir altimetry, the new MSS model aims to provide a more detailed and comprehensive representation of mean sea surface topography. The process uses a gridded draft MSS to capture large-scale content, refining it with two kinds of innovations applied selectively based on wavelength. The first approach takes advantage of the SWOT KaRIn science phase mean profile, while the second relies on the static component of the Sea Surface Height (SSH) signal obtained through the Multiscale Inversion of Ocean Surface Topography (MIOST) mapping method. Qualitatively, compared to the state-of-the-art MSS Hybrid 2023, the new MSS reveals previously undetected seamounts and significantly reduces geodetic residuals in SWOT KaRIn science SLA signals. Quantitatively, on SWOT KaRIn science data, the new MSS reduces the integrated low-mesoscale SLA power density spectrum by 8.44% and the integrated low-mesoscale MSS error power density spectrum by 66.67%, compared to the MSS Hybrid 2023. 
Additionally, it reduces local SLA variance by up to 30% over geodetic structures. These improvements have also been validated using independent data from nadir altimeters and on SWOT KaRIn’s calibration and validation phase data.
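The variance-reduction diagnostic quoted above (e.g. local SLA variance reduced by up to 30% over geodetic structures) can be illustrated with a minimal sketch. `sla_variance_reduction` is a hypothetical helper written for illustration only, not the authors' processing chain.

```python
import numpy as np

def sla_variance_reduction(ssh, mss_old, mss_new):
    """Percent reduction in SLA variance when switching MSS models.
    SLA = SSH - MSS; a better MSS leaves less geodetic residual
    (uncorrected mean-surface error) in the resulting SLA field.
    NaNs (e.g. edited pixels) are ignored."""
    var_old = np.nanvar(ssh - mss_old)
    var_new = np.nanvar(ssh - mss_new)
    return 100.0 * (var_old - var_new) / var_old
```

The same comparison can be done scale-by-scale by integrating power spectral densities over a wavelength band instead of taking a bulk variance, which is how the 8.44% and 66.67% figures in the abstract are framed.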
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Imaging and altimetric multi-mission synergy, including SWOT, Sentinel-6 and Sentinel-2, for reservoir monitoring: applications to the Grand lacs de Seine reservoirs (France)

Authors: Sabrine Amzil, Carlos Yanez, Thomas Ledauphin, Maxime Azzoni, Nicolas Picot, Emilie Mangold, Claire Pottier, Jérome Maxant, Herve Yesou
Affiliations: ICube Sertit, Centre National d’Etudes Spatiales
Reservoirs are key tools in the management of water resources. They provide a means of reducing the effects of inter-seasonal and inter-annual flow fluctuations, thereby facilitating water supply, flood control, power generation, recreation and other water uses. For more than 20 years, satellite radar altimetry has been an effective technique for monitoring variations in the elevation of continental surface waters, such as inland seas, lakes and reservoirs, rivers and, more recently, wetlands. This paper presents a case study on the capabilities of current imaging and altimetry satellites, including the breakthrough mission SWOT and the recent Sentinel-6, to monitor water surfaces, heights and variations in water stocks. It also highlights the contribution of multi-sensor synergy. The demonstration of this potential is ongoing over the largest French reservoir, the Lac du Der in Champagne (NE France). At 48 km2, the Lac du Der-Chantecoq, also known as the Marne reservoir, is the largest man-made lake in mainland France. Together with the Orient, Amance and Auzon lakes, it is part of the system of large Seine lakes intended to protect Paris from flooding. These reservoirs are fully controlled, and information regarding water heights and the remaining water volume in the reservoirs was provided by the Grand lacs de Seine authorities in charge of their management. Water is drawn from the Marne from November/December to June, filling the reservoir. From July to October, water is released to support the flow of the rivers. As a result, the water surface area changes considerably throughout the year, from about forty square kilometers during the high-water period to less than ten square kilometers during the very low-water period. It is interesting to note that the Lac du Der comprises three sub-basins separated by dykes: the central basin, the largest in terms of retained volume (185 million m3); the northern basin (9 million m3); and the southern basin (7 million m3).
The work presented focuses on the central basin, which exhibits the most interesting dynamics for this type of study. This reservoir is under the track of several altimetry satellites, in particular SWOT and Sentinel-6. In the case of the SWOT data, the reservoir is theoretically located in the No Acquisition diamond area, but the signal from this large water body is clearly visible in the SWOT L2_HR_PIXC products and the Lake SP product, and the SWOT nadir track passes over the lake. The analysis of the heights measured by SWOT sensors was carried out both on the PIXC PGC0/PIC0 point cloud products (class 4) and on the Lake SP Prior vector products over a period of more than one year, from July 2023 to mid-September 2024. However, it should be noted that vector data are not systematically available, which means that there are gaps in the time series of Lake SP Prior products. Both types of products reproduce the hydrological evolution of the central reservoir well, with a decrease in water levels from September to December, then an increase in levels with water storage and a maximum reached during the summer. Between the summers of 2023 and 2024, the level variations are just over 8 m. To begin with, a few outliers were removed from the time series. The SWOT water level values were then compared with in-situ data. A bias of about twenty centimeters was observed between the in-situ data and the SWOT data. The origin of this bias is not fully understood; it could be instrumental, such as an extreme position on the track, or due to inaccurate levelling of the in-situ stations, and further investigations are in progress. Nevertheless, the accuracy/quality of SWOT measurements is very good, with an RMSE of 0.09 m and 0.30 m at one sigma. The Lac du Der has also been tracked by satellites from the Jason series, including the most recent, Sentinel-6, since March 2021, allowing its water levels to be derived at a ten-day frequency.
Sentinel-6 data was first processed using FFSAR to achieve fine resolution of the radargram in the along-track direction, and then a retracker specially designed for this focused data was applied to estimate the water height. Comparison of the in-situ water level data and the Sentinel-6 heights shows a constant 60 cm offset between the two data sets; once this bias is corrected, the consistency of the data is impressive, with an RMS of 0.02 m (median and one sigma at 0.03 m) based on the analysis of over 14 months of data. It is interesting to compare these values with those obtained during previous work on the same Lac du Der site using Jason-3 data, for which the comparison with in-situ data presented an RMSE of 0.36 m. Furthermore, a more detailed analysis of the Sentinel-6 data over time has revealed a very specific feature compared with previous years, when the curve had a relatively smooth convex shape. This saw-like appearance is not an instrumental anomaly; it corresponds to a very specific management regime in 2024, with a close alternation of water release/impoundment, which was crucial in the context of the Paris Olympic Games, for which the management of the Seine was an essential parameter. It was only thanks to the high temporal revisit of the Sentinel-6 mission, i.e. 10 days, that it was possible to show that water stocks were managed differently in the summer of 2024. When comparing the height series derived from SWOT and Sentinel-6, the finer temporal granularity of Sentinel-6 is immediately visible, thanks to a revisit every ten days. As a result, the effects of the management of the Der reservoir during the summer of 2024 are only observable in the Sentinel-6 series, and these effects are smoothed out in the SWOT series. The second part of this work deals with the analysis of surface areas as observed from SWOT data.
The first step is to validate the PIXC classifications and compare the surfaces obtained from SWOT with those derived from the Sentinel-2 (10 m) time series. The first results obtained indicate a slight overestimation of water surfaces based on SWOT data, mainly during the low-water period, with confusion between open water and the muddy and wet edges of the lake. The next steps will focus on combining these surface and altimetric data to generate a surface/height hypsometric curve for the lake, and also to monitor variations in the volume of this reservoir. The values derived from the Sentinel-2/Sentinel-6 satellite solution will then be compared with the DREAL estimates. The results obtained illustrate the strong current capabilities of altimetric satellite data such as SWOT and Sentinel-6, combined or not with optical data from Sentinel-2, to access water surfaces, heights and volumes over relatively small reservoirs.
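Once a surface/height hypsometric curve A(h) of the kind described above is available, volume variations follow by integrating area over height, dV = ∫ A(h) dh. The sketch below assumes a piecewise-linear curve; the function name and the numbers in the usage note are illustrative only, not results from the study.

```python
import numpy as np

def volume_change(heights_m, areas_m2, h0, h1):
    """Volume change (m3) between water levels h0 and h1, obtained by
    trapezoidal integration of a hypsometric curve A(h) given as
    sampled (height, area) pairs."""
    h = np.linspace(h0, h1, 200)
    a = np.interp(h, heights_m, areas_m2)  # piecewise-linear A(h)
    # trapezoidal rule: sum of mean area per step times step height
    return float(np.sum(0.5 * (a[1:] + a[:-1]) * np.diff(h)))
```

As a sanity check on the arithmetic, a level rise of 8 m over a curve going linearly from 10 to 40 km2 yields a mean area of 25 km2 times 8 m, i.e. 2e8 m3, the same order as the 185 million m3 capacity quoted for the central basin.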
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: First Quality Data Assessment of SWOT Products Over the Gironde Estuary

Authors: Lucie Caubet, Florent Lyard, Nadia Ayoub, Robin Chevrier
Affiliations: LEGOS laboratory, CNRS/UPS/CNES/IRD, CLS
Estuaries form the last sections of rivers before reaching the ocean. These environments are therefore influenced both by hydrological processes, mainly river discharge, and by ocean processes, namely tides along with storm surges and waves. Understanding their complex dynamics remains of prime importance, as estuaries often combine special ecosystems to be protected with highly urbanised zones. Nevertheless, as physical estuarine processes occur on a wide range of temporal and spatial scales, their study requires a large number of measurements to improve our ability to capture most of them. So far, in-situ data, field campaigns and numerical modelling approaches have mainly been used to achieve this. Very few estuarine studies have been conducted through satellite data analysis, either because of a lack of temporal resolution or because of the poor quality of altimetry satellite observations in coastal regions. However, the new SWOT (Surface Water and Ocean Topography) satellite mission, launched in December 2022, which carries an innovative altimetry radar combining SAR (Synthetic Aperture Radar) and interferometry techniques, is the first altimetry mission to serve both dedicated hydrology and oceanography purposes. As a matter of fact, SWOT provides 2D topography measurements over two 50 km swaths on both sides of the nadir track, instead of the one-dimensional along-track sampling of conventional altimetric missions. Therefore, by providing 2D snapshots with high resolution, SWOT represents an incredible opportunity to study both the longitudinal and transverse variability of water level in estuaries. It also represents a unique dataset to test and calibrate numerical models. But, as a first step, it remains of prime importance to verify the quality and the relevance of the existing products for the oceans (LR products) and for continental waters (HR products), especially as none of these products were specifically designed for estuaries.
We focus on the Gironde estuary, for which different types of datasets (i.e. tide gauges and a numerical model) are available and facilitate this validation process. We chose to start with Level-2 LR-unsmoothed ocean products at 250 m posting, as ocean products are easier to handle compared to HR products and as the spatial resolution remains acceptable in the lower reaches of estuaries compared to the 2 km resolution ocean products. The objective of our work is to evaluate the usability of this product in the Gironde estuary, to assess the accuracy of the sea level data and some of the corrections, and to propose good practices on the use of these products. Our method relies on tide gauge observations as well as on numerical simulations with the 2D data-assimilated T-UGOm model. For this purpose, it is important to provide this first SWOT error budget for all SWOT pixels that are relevant for further physical analysis (i.e. water pixels). This first requires editing out spurious SWOT data (i.e. both land pixels and pixels contaminated by land). Using the SWOT sea surface height quality flag proved to be too restrictive to achieve this. The ancillary surface classification flag is not appropriate in estuaries either, because it corresponds to a static land/water mask, whereas the actual land/water boundary changes according to the phase of the tide. We thus propose two methods to compute a dynamic land/water mask specific to each cycle and pass. One is based on a thresholding of the backscatter radar response (i.e. sigma0), while the other relies on the SWOT grid distortion anomaly. In addition to editing the data, these dynamic masks demonstrate some potential for identifying intertidal zones. Secondly, by comparing the resulting edited SWOT data to tide gauges, we found that the SWOT deviation from in-situ measurements of total water levels (i.e. sea surface height including tides, waves and dynamic atmosphere effect) is of the order of ten cm all along the estuary.
It should be noted that the SWOT cross-calibration has to be applied to avoid deviations above one meter. This is consistent with the comparison against the T-UGOm model, for which the SWOT deviation is of the same order over the whole swath. Moreover, the comparison with the model enables spatial quantification of SWOT disagreements, and we finally detect some potential errors in the model that could not have been revealed by tide gauge comparisons alone.
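As an illustration of the sigma0-thresholding idea mentioned above: over calm water, near-nadir Ka-band backscatter is typically much brighter than over land, so a per-pass threshold can separate the two classes. The sketch below uses an Otsu-style automatic threshold, which is purely our assumption for illustration; the actual thresholding used in the study may differ.

```python
import numpy as np

def dynamic_water_mask(sigma0_db, threshold_db=None):
    """Per-pass water mask from backscatter (sigma0, in dB).
    Pixels above the threshold are flagged as water. If no threshold
    is given, one is chosen automatically by maximizing the
    between-class variance of the sigma0 histogram (Otsu's method),
    assuming a roughly bimodal land/water distribution."""
    x = sigma0_db[np.isfinite(sigma0_db)]
    if threshold_db is None:
        hist, edges = np.histogram(x, bins=128)
        centers = 0.5 * (edges[:-1] + edges[1:])
        p = hist / hist.sum()
        w0 = np.cumsum(p)                  # class-0 (land) weight
        w1 = 1.0 - w0                      # class-1 (water) weight
        m = np.cumsum(p * centers)         # cumulative first moment
        mt = m[-1]                         # total mean
        with np.errstate(divide="ignore", invalid="ignore"):
            between = (mt * w0 - m) ** 2 / (w0 * w1)
        threshold_db = centers[np.nanargmax(between)]
    return sigma0_db > threshold_db
```

Recomputing the threshold for every cycle and pass is what makes the mask "dynamic": the land/water split adapts to the tidal stage instead of relying on a static a priori mask.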

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: SWOT Lake Processing and Products

Authors: Claire Pottier, Dr Mathilde De Fleury, Manon Delhoume, Dr Jean-François Crétaux, Dr Roger Fjørtoft, Dr Damien Desroches, Lucie Labat-Allée
Affiliations: CNES, CS Group
The SWOT altimetry mission [1, 2] was launched in December 2022 and, since the end of March 2023, has provided a product specific to lakes [3], globally and repeatedly, with two or more observations per 21-day orbit cycle. It is computed from the pixel cloud [4], which provides height, corrections and uncertainties for pixels classified as water and for pixels in a buffer zone around these water bodies, as well as in systematically included areas (defined by an a priori water mask), for each water feature observed by SWOT and not assigned to a regular river. The lake product consists of polygon shapefiles, delineating the lake boundary and providing the area and average height of each observed lake. A Prior Lake Database (PLD) [5] makes it possible to link the SWOT observations to known lakes and monitor them over time. The first two steps of the lake processing are crucial in shaping the lake object [6]. The first is an accurate selection of pixels from the pixel cloud. This includes the removal of pixels related to rivers: these are mainly defined in the Prior River Database [7], but some remain outside of it and are currently handled in the lake processing. This selection also depends heavily on the quality flags of the pixels. As an example, “specular ringing” pixels, i.e. pixels for which the interferogram quality is degraded due to range point-target-response side-lobe ringing from a bright target near nadir, introduced large errors in water surface elevation as well as in area. They are now discarded if they lie outside a thresholded prior water probability mask, but kept otherwise, to better reflect the actual extent of the lake feature. The second step is to identify all separate water regions in the water mask previously obtained. After a simple separation of each water region, an additional segmentation based on height is performed to handle lakes that are layovered in radar geometry.
To separate such mixed lakes with different heights, the Otsu method [8] is used to perform automatic height histogram thresholding. This segmentation is not straightforward because the height of some pixels must be discarded due to their classification (dark water or low-coherence pixels) or quality flags, yet these pixels are part of the observation of the lake and must therefore be kept to compute the lake extent. Moreover, the lake processing now uses the Lake-TopoCat dataset [9] to improve the assignment of detected water objects to PLD lakes: this dataset provides, for each prior lake, polygons that take into account hydrological constraints and topography, while the previous dataset was based on distances between lakes. A first global performance assessment was presented at the SWOT Science Validation Meeting in June 2024, with accuracies meeting or close to the Science Requirements [10]. Since then, the algorithms and auxiliary data used for operational processing have been further improved. We present these evolutions here, and their impact on performance, focusing mainly on the estimation of lake water surface elevation. Some options for future improvements are also addressed.
References:
[1] L.-L. Fu, D. Alsdorf, R. Morrow, E. Rodriguez, and N. Mognard, “SWOT: The surface water and ocean topography mission: Wide-swath altimetric elevation on Earth,” Jet Propulsion Laboratory, Nat. Aeronautics Space Administ., Washington, D.C., USA, JPL Publication 12-05, 2012.
[2] M. Durand, L. Fu, D. P. Lettenmaier, D. E. Alsdorf, E. Rodriguez, and D. Esteban-Fernandez, “The surface water and ocean topography mission: Observing terrestrial surface water and oceanic submesoscale eddies,” Proc. IEEE, vol. 98, no. 5, pp. 766–779, May 2010.
[3] Centre National d’Etudes Spatiales, “SWOT Level 2 KaRIn high rate lake single pass vector science data product (L2_HR_LakeSP),” SWOT-TN-CDM-0674-CNES, Toulouse, France, 2024.
[4] Jet Propulsion Laboratory, “SWOT Level 2 KaRIn high rate water mask pixel cloud product (L2_HR_PIXC),” JPL D-56411, Pasadena, CA, 2024.
[5] J. Wang, C. Pottier, C. Cazals, M. Battude, Y. Sheng, C. Song, Md S. Sikder, X. Yang, L. Ke, M. Gosset, R. Reis, A. Oliveira, M. Grippa, F. Girard, G. Allen, S. Biancamaria, L. Smith, J.-F. Crétaux, T. Pavelsky, “The Surface Water and Ocean Topography Mission (SWOT) Prior Lake Database (PLD): Lake mask and operational auxiliaries,” Water Resources Research, in review.
[6] Centre National d’Etudes Spatiales, “Algorithm Theoretical Basis Document: L2_HR_LakeSP Level 2 Processing,” SWOT-NT-CDM-1753-CNES, Toulouse, France, 2024.
[7] E. H. Altenau, T. M. Pavelsky, M. T. Durand, X. Yang, R. P. d. M. Frasson and L. Bendezu, “The Surface Water and Ocean Topography (SWOT) mission River Database (SWORD): A global river network for satellite data products,” Water Resources Research, vol. WRCS25408, 2021.
[8] N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Trans. Sys. Man. Cyber., vol. 9, no. 1, pp. 62–66, 1979.
[9] S. Sikder, J. Wang, G. H. Allen, Y. Sheng, D. Yamazaki, C. Song, M. Ding, J.-F. Crétaux and T. M. Pavelsky, “Lake-TopoCat: A global lake drainage topology and catchment,” Earth System Science Data Discussion, 2023. [Online]. Available: https://essd.copernicus.org/preprints/essd-2022-433/essd-2022-433.pdf.
[10] Jet Propulsion Laboratory, “Surface water and ocean topography mission (SWOT): Science requirements document,” JPL D-61923, Rev. B, SWOT NASA/JPL Project, Pasadena, CA, 2018.
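The Otsu height segmentation described above can be sketched in a few lines. This is a generic textbook implementation [8] applied to a synthetic bimodal height sample, not the operational LakeSP code; the bin count and the two-lake heights are illustrative.

```python
# Generic Otsu threshold on a height histogram: pick the height that maximizes
# the between-class variance of the two resulting groups.

def otsu_threshold(values, nbins=32):
    """Return the height separating the two dominant modes of `values`."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / nbins or 1.0
    hist = [0] * nbins
    for v in values:
        hist[min(int((v - lo) / width), nbins - 1)] += 1
    centers = [lo + (i + 0.5) * width for i in range(nbins)]
    total = len(values)
    sum_all = sum(c * h for c, h in zip(centers, hist))
    best_t, best_var, w0, sum0 = centers[0], -1.0, 0, 0.0
    for i in range(nbins - 1):
        w0 += hist[i]
        sum0 += centers[i] * hist[i]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if var > best_var:
            best_var, best_t = var, centers[i] + 0.5 * width  # upper bin edge
    return best_t

# Two layovered "lakes" at ~10.2 m and ~12.2 m: the threshold lands in the gap.
heights = [10.0 + 0.01 * k for k in range(50)] + [12.0 + 0.01 * k for k in range(50)]
split = otsu_threshold(heights)
```

In the real processing the subtlety noted above remains: pixels whose heights are excluded from the histogram (dark water, low coherence) must still be assigned to one of the resulting regions to preserve the lake extent.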

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Toward Comprehensive Understanding of Air-Sea Interactions Under Tropical Cyclones: On the Importance of High Resolution 2D Sea Surface Height measurements

Authors: Bertrand Chapron, Clément Combot, Alexis Mouche, Dr. Nicolas Reul
Affiliations: Hokkaido University, Ifremer
Cold wakes are distinctive footprints of the air-sea interactions occurring during the passage of moving Tropical Cyclones (TCs), with intense near-inertial waves dispersing through the ocean column. Strong shear currents at the base of the mixed layer can reach 1–3 m/s and penetrate deeply into the thermocline to erode the initial stratification, quite systematically leaving persistent sea surface anomalies: cooling, chlorophyll bloom and salinity rise. At depth, isopycnal displacements leave thermocline ridges that strengthen the injection of subsurface anomalies, leading to measurable sea level depressions. Both barotropic (column-integrated current) and baroclinic modes contribute to sea surface height anomalies (SSHA), but the latter is largely dominant in open ocean conditions. While sea surface temperature anomalies (SSTA) have been extensively documented, SSHAs remain somewhat overlooked. In the wakes of TCs, baroclinic signatures mostly range around 10–20 cm and peak at 40 cm. Deeper anomalies correspond to the barotropic response. These measurable signatures are directly linked to the inner-core TC dynamics and the ocean stratification. Moreover, TC SSHAs are persistent enough to be easily monitored by the current fleet of altimeter instruments, a capability largely augmented by SWOT's recent swath-altimetric enhancements. Indeed, the SWOT instrument can provide unique 2D maps of TC wakes. This eases the analysis and the automation of the SSHA extraction method, enabling more precise TC wake SSHA monitoring. Importantly, the measured SSHA dynamics integrates the air/sea interactions during the TC passage into a single observable metric. SSHA mostly encodes the cyclonic wind forcing and the interior ocean state, providing new means to better analyze and understand air-sea interactions under TCs.
In this presentation, we shall emphasize SWOT's new capabilities, especially during its fast-sampling phase, to uniquely provide high-resolution spatio-temporal 2D SSHA imprints generated in the aftermath of TCs on time scales of weeks. The 2D SWOT SSHA measurements more precisely evidence the SSHA depression in the center of the storm wake, balanced by SSHAs of opposite sign outside of the wake. Moreover, the propagation of the SSHA depressions can be jointly compared to SSTAs, and also to sea surface salinity anomalies. All signatures are generally found to propagate westward at a speed depending on the TC latitude, suggesting that the resulting TC disturbances will transport all upper-ocean material properties (SST, SSS).

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A Variational method for reconstructing and separating Balanced Motions and Internal Tide from wide-swath Altimetric Sea Surface Height Observations

Authors: Valentin Bellemin-Laponnaz, Florian Le Guillou, Clément Ubelmann, Pr. Éric Blayo, Dr. Emmanuel Cosme
Affiliations: Institut des Géosciences de l'Environnement - UGA/CNRS/IRD/INRAE, Datlas, Laboratoire Jean Kuntzmann - UGA/CNRS/INRIA
Mapping Sea Surface Height (SSH) from satellite altimetry is crucial for numerous scientific and operational applications. At the fine scales observed by wide-swath altimeters, SSH variations are primarily driven by two types of dynamics: nearly geostrophic balanced motions and the wavy motion of the internal tide. These two processes influence ocean dynamics in different ways and their contributions to SSH variations must be separated for applications. While this separation is now standard practice with high-frequency outputs of numerical simulations, it remains an unresolved challenge for SSH maps derived from satellite observations, which are sparse in both space and time. This study introduces an innovative method to separate balanced motions and internal tide components in SSH altimetric observations, including wide-swath altimetry. The method is based on a data assimilation system combining two models: a quasi-geostrophic model for the balanced motions and a linear shallow-water model for the internal tide. The inversion is performed using a weak-constraint four-dimensional variational (4DVar) approach, with two different sets of control parameters adapted to each regime. A major expected benefit of this approach is its potential to capture the non-stationary part of the internal tide component. The method produces hourly SSH and surface velocity fields for both components over a specified domain. The study focuses on the North Pacific Ocean, a region characterized by strong mesoscale and sub-mesoscale activity, including the two dynamics of interest. First, Observing System Simulation Experiments (OSSEs) were conducted over 20°×20° domains surrounding the SWOT crossovers. These experiments included both conventional nadir and wide-swath SSH measurements, interpolated from the LLC4320 MITgcm simulation. The performance of the mapping algorithm was evaluated by comparing its outputs with the MITgcm reference fields. 
In a subsequent step, the method is planned to be applied to real SWOT SSH measurements, with the Californian Current System 1-day phase crossover serving as a case study. As this work forms part of a PhD project, the most recent results will be presented at the conference.
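As a toy illustration of the separation problem (not the weak-constraint 4DVar system described above): with a known internal-tide frequency and a dense, uniform hourly record, the harmonic component can simply be projected out, leaving a slowly varying residual as the "balanced" part. All values below are synthetic.

```python
import math

# Toy separation of "balanced" and internal-tide SSH by harmonic projection at
# a single known frequency (here M2); the real method inverts two dynamical
# models jointly and handles sparse, non-stationary observations.

M2_PERIOD_H = 12.4206012  # M2 internal-tide period in hours

def separate(ssh, dt_h=1.0, period_h=M2_PERIOD_H):
    """Return (balanced_mean, tide_series) from an hourly SSH record."""
    n = len(ssh)
    w = 2.0 * math.pi / period_h
    c = [math.cos(w * k * dt_h) for k in range(n)]
    s = [math.sin(w * k * dt_h) for k in range(n)]
    a = 2.0 / n * sum(y * ci for y, ci in zip(ssh, c))   # in-phase amplitude
    b = 2.0 / n * sum(y * si for y, si in zip(ssh, s))   # quadrature amplitude
    tide = [a * ci + b * si for ci, si in zip(c, s)]
    balanced = sum(y - t for y, t in zip(ssh, tide)) / n  # residual mean
    return balanced, tide

# Synthetic record: 30 cm of "balanced" SSH plus a 10 cm internal tide.
ssh = [0.30 + 0.10 * math.cos(2.0 * math.pi * k / M2_PERIOD_H) for k in range(1000)]
balanced, tide = separate(ssh)
```

The hard part, which motivates the variational approach above, is that real SSH samples are sparse in space and time and the internal tide is partly non-stationary, so a fixed-frequency projection like this one is no longer sufficient.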

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Kilometer and Sub-kilometer Scale Precipitation Observations by the SWOT Ka-band Radar Interferometer: Detection and Precipitation Rate Retrieval Using Artificial Intelligence Approaches.

Authors: Bruno Picard, Colin Aurélien, Romain Husson, Gerald Dibarboure
Affiliations: Fluctus Sas, CLS, CNES
Satellite altimetry missions have measured sea surface height (SSH) at a global scale and with increasing accuracy since 1992. The attenuation of the microwave radar pulse depends on the moist-air refractivity index, which is strongly impacted by precipitation. A new step forward was made with the launch, at the end of 2022, of the Surface Water and Ocean Topography (SWOT) mission. Thanks to the technological breakthrough enabled by the Ka-band Radar Interferometer (KaRIn), the Ka-band backscattering coefficient is now available on a two-dimensional grid spanning 70 km on either side of the nadir track, with two grid cell resolutions available (250 m and 2 km). As a matter of comparison, the swath of KaRIn is smaller than that of the Ka-band precipitation radar (KaPR) of the Global Precipitation Measurement mission (250 km), without its range-slicing capability (250/500 m for the KaPR) but with a much finer spatial resolution (5 km for the KaPR). We will present results on the impact of precipitation on the SWOT mission. Defining a method to estimate the attenuation from the radar backscatter coefficient, we will quantify the threshold above which the SSH measurements are no longer valid, and characterize the occurrences of non-valid observations, statistically and geographically. Then, we will present a first approach to retrieve precipitation rates from SWOT observations. Building on similar work on Sentinel-1 (Colin et al. 2024, submitted), a convolutional neural network for the regression of precipitation rate is trained on a large dataset of SWOT observations collocated with ground observations from NEXRAD, the weather radar network operated in the USA. This model is trained in a multi-objective framework to mitigate the discrepancies between both types of sensors.
The model is constrained to ensure that the mean and maximum precipitation rates both match the ground truth, and it contains an adversarial loss to provide an implicit prior on the ground-truth distribution. The precipitation rate is then used to find the threshold above which the observation of SSH is no longer valid. This presentation will highlight how the experience gained from the first year of the SWOT mission could benefit future missions based on the same concept, as well as the broader scientific community focusing on precipitation.
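The validity-flagging logic can be sketched as follows. The attenuation proxy (drop of observed sigma0 below a rain-free reference) and the 3 dB cutoff are illustrative assumptions, not the mission's validated threshold.

```python
# Illustrative rain flagging for SSH validity: estimate two-way attenuation as
# the drop of observed sigma0 below a rain-free reference, and flag pixels
# where it exceeds a (placeholder) cutoff.

def flag_invalid_ssh(sigma0_obs_db, sigma0_ref_db, max_attenuation_db=3.0):
    """Return True where estimated rain attenuation makes the SSH unusable."""
    return [ref - obs > max_attenuation_db
            for obs, ref in zip(sigma0_obs_db, sigma0_ref_db)]

# Three pixels: clear, heavy rain (4.5 dB drop), light drizzle (0.1 dB drop).
flags = flag_invalid_ssh([12.0, 7.5, 11.9], [12.0, 12.0, 12.0])
```

In the study the cutoff itself is derived from the data, by relating the estimated attenuation (and, later, the CNN-retrieved precipitation rate) to the degradation of the SSH measurement.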

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Desaliasing of tides and tidal currents using wide-swath altimetry

Authors: Perrine Abjean, Loren Carrere, Florent Lyard, Gerald Dibarboure
Affiliations: CLS, LEGOS, CNES
The accuracy of tidal models has improved greatly during the last 25 years, but some tidal errors remain, mainly in shelf seas and in polar regions, where the availability of new databases is still valuable for the development of future tide models. In this context, and knowing that tides and tidal currents are a predominant signal in shallow and shelf regions with critical applications and societal interest, this study analyzes the interest of new satellite missions for the observation of tidal signals. The present analysis evaluates the potential of various wide-swath satellites for desaliasing tidal signals. We will consider the orbit of the Odysea mission for tidal currents, and the orbits of SWOT and the future S3NG for tidal elevations. The analysis is based on an OSSE experiment using the IBI36 regional simulation of the North-East Atlantic Ocean (provided by Mercator Ocean), which allows taking into account the tidal signal as well as other oceanic variability that can prevent a proper tide estimation from satellite measurements due to crossed aliasing issues. The topography missions studied have different characteristics: SWOT is a single-satellite, non-sun-synchronous mission, while the future S3NG mission will be a two-satellite constellation but with a sun-synchronous orbit. It is well known that sun-synchronous nadir missions do not properly sample the tidal signal: they are characterized by bad aliasing frequencies for most tidal waves, and some solar waves are not even observable with these orbits (such as the S1 and S2 waves). The local multiple sampling allowed by the wide swaths of those missions and, in the case of S3NG, the two-satellite constellation, makes it possible to break these aliasing issues and allows a more accurate observation of tides.
Regarding the Odysea mission, the aim of the study is to quantify the proportion of tidal currents that will be observed, taking into account the multiple local sampling allowed by the wide swath. The final objective is to assess the interest of Odysea's tidal current measurements for the tidal community and ultimately to envision the assimilation of these data in ocean tidal models.
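The aliasing argument above can be checked with the standard alias-period formula. The function below is a back-of-the-envelope sketch: it assumes one sample per exact repeat cycle, ignoring the within-swath multiple sampling that the study exploits to break the aliasing.

```python
import math

# Alias period of a tidal constituent sampled once per repeat cycle: the tidal
# frequency is folded into [0, 1/(2T)], where T is the repeat period in days.

def alias_period_days(tide_period_h, repeat_days):
    """Alias period (days) of a tidal constituent under repeat-cycle sampling."""
    f = 24.0 * repeat_days / tide_period_h        # tidal cycles per revisit
    folded = abs(f - round(f)) / repeat_days      # folded frequency, cycles/day
    return math.inf if folded == 0 else 1.0 / folded

m2_alias = alias_period_days(12.4206012, 21.0)   # M2 under a 21-day revisit
s2_alias = alias_period_days(12.0, 21.0)         # S2: integer cycles -> DC
```

With a 21-day revisit the solar semi-diurnal S2 completes an integer number of cycles between samples and folds to zero frequency, i.e. it is unobservable from a single sample per cycle, which is the mechanism behind the sun-synchronous sampling issue noted above.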

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Calibration of the SWOT systematic errors: current performances and limitations.

Authors: Matthias Raynal, Benjamin Flamant, Pierre Prandi, Etienne Jussieu, Emeline Cadier, Clément Ubelmann, Gerald Dibarboure
Affiliations: CNES, CLS, DATLAS
The launch of the Surface Water and Ocean Topography (SWOT) mission in December 2022 represented a major breakthrough in satellite altimetry. Its Ka-band Radar Interferometer (KaRIn) provides, for the first time, two-dimensional images with unprecedented resolution and precision. SWOT will provide the first global view of freshwater bodies to monitor water resources and better characterize the water cycle. Over the ocean, it complements the existing measurements from the nadir constellation by observing small-scale topography structures (down to 15 km), which are important contributors to air-sea interactions and ocean vertical mixing. However, in comparison with nadir altimetry, the KaRIn measurements also contain new sources of error which contaminate the topography measurements at spatial wavelengths above a thousand kilometers. These are referred to as the systematic errors. For oceanographers interested in SWOT measurements, they are of secondary importance as they do not impact the small-scale topography measured. For hydrology applications, however, they significantly contaminate the measurements of water body height and slope. In the SWOT ground processing center, this correction is ensured by the crossover calibration (XCAL) algorithm. It relies on the analysis of uncalibrated Sea Surface Height (SSH) differences at KaRIn crossovers, plus a comparison with the SWOT nadir measurement. The objective here is to present the results of the Level-2 XCAL calibration assessment over both land and ocean surfaces and to characterize the residual errors after calibration. To achieve this, several studies have been conducted, for example the comparison of KaRIn topography measured over land with a Digital Elevation Model (DEM) and the definition of a virtual continent over the Pacific Ocean.
The results obtained are compared with the mission requirements defined for hydrology and discussed to highlight their complex variability, driven among other factors by orbital parameters (such as the beta angle) and by the quality of KaRIn topography measurements over the ocean (and thus the performance of the geophysical models and corrections used to calculate it). We finally discuss how this calibration method can be improved in the near future.
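The crossover analysis can be caricatured as fitting a constant bias plus a roll-like cross-track tilt to the uncalibrated SSH differences at a crossover. This is an illustrative least-squares sketch on synthetic numbers, not the operational XCAL algorithm.

```python
# Illustrative fit of the dominant systematic-error shape at a KaRIn crossover:
# dSSH(x) ~ bias + tilt * x, where x is cross-track distance. The tilt mimics
# an uncorrected roll; the bias, a phase-like offset.

def fit_cross_track_tilt(xt_km, dssh_m):
    """Return (tilt in m/km, bias in m) from SSH differences across the swath."""
    n = len(xt_km)
    mx = sum(xt_km) / n
    my = sum(dssh_m) / n
    sxx = sum((x - mx) ** 2 for x in xt_km)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xt_km, dssh_m))
    tilt = sxy / sxx
    bias = my - tilt * mx
    return tilt, bias

# Synthetic crossover: 2 cm bias plus a 1 mm/km roll-like tilt.
xt = [-60.0, -40.0, -20.0, 20.0, 40.0, 60.0]
dssh = [0.02 + 0.001 * x for x in xt]
tilt, bias = fit_cross_track_tilt(xt, dssh)
```

A 1 mm/km tilt already amounts to a 12 cm height error across the 120 km swath, which is negligible for short-wavelength ocean topography but large for water-body height and slope, consistent with the hydrology-driven motivation above.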

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A new chapter in satellite altimetry: monitoring small lakes and coastal zones with SWOT HR PIXC data

Authors: Simon Jakob Köhn, Karina Nielsen
Affiliations: DTU Space
The Surface Water and Ocean Topography (SWOT) mission is the first satellite mission to provide 2D spatially distributed elevation measurements on a 21-day orbit. At higher latitudes such as in Denmark, even the smallest lakes are typically covered by an average of 3.5 different passes within one orbital cycle, resulting in revisit times of ca. 9 days. Small lakes play a vital role in the global freshwater cycle and are thus important to understand, given the rapidly accelerating climate change affecting worldwide freshwater dynamics and the increased demand for freshwater from growing populations. Using SWOT’s unprecedented abundance of water surface elevation (WSE) measurements, we validate a method of deriving high-accuracy, robust WSE time series from the SWOT HR PIXC 2.0 data on 40 gauged Danish lakes of between 0.25 and 40 km² surface area. Furthermore, we explore the minimum lake size required for SWOT to correctly measure the WSE. With sufficient aggregation of individual point measurements, we find that spatial undulations do not deteriorate the WSE time series accuracy with respect to gauges. We quantify the WSE time-series performance using two summary measures, the root mean square error (RMSE) and the Pearson correlation coefficient (PCC). We find a median RMSE of 5.76 cm and a PCC of 0.93 using all 40 lakes and 13 months of SWOT data. Besides the enormous potential to observe even the smallest of lakes, SWOT is the first satellite to effectively bridge the gap between inland water and the ocean. Its 2D elevation measurements show spatial WSE variations in complex coastal areas that can be used to constrain hydrological models and boost flood prevention measures. Limfjorden is a large fjord/estuary in Denmark, stretching 180 km, with a tidal signal of up to 30 cm. It features multiple side arms and encompasses the island of Mors. Wind patterns and local land boundaries, such as chokepoints, largely influence the spatial WSE characteristics of Limfjorden.
SWOT, for the first time, enables us to observe them. Significant water build-up is especially observable at chokepoints and at interfaces with the open ocean under appropriate conditions. WSE levels along two pseudo-centerlines (north and south of Mors) are validated using gauge data corresponding to each respective SWOT acquisition time. Our analysis shows that water levels remain consistent and converge to the same value whether traveling north or south of Mors. Particularly interesting are three SWOT acquisitions before and after the October 2023 Baltic Sea storm, allowing us to investigate its build-up and aftermath. Furthermore, we investigate how SWOT can observe the WSE in Øresund, particularly in and around Copenhagen harbor. We can spatially observe a tidal surge passing through Øresund with a WSE gradient of ca. 1.5 m. In Copenhagen, we observe an abrupt drop in WSE instead of the gradual decline we see around the island of Amager, which shields Copenhagen. The abrupt drop is caused by a tide lock installed in the southern part of Copenhagen. Harbors in Denmark, and worldwide, are exposed to an ever-increasing flood risk, necessitating mitigation work and better flood and tide modeling. Our example proves that SWOT can observe small, complex areas such as harbors with high spatial fidelity. This new data can constrain flood models and help governments and local communities enact better mitigation measures.
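The two summary scores used in the validation above are standard; here is a minimal sketch applied to a toy gauge vs. lake-aggregated SWOT WSE pair (all values made up for illustration).

```python
import math

# RMSE and Pearson correlation between a gauge series and an aggregated SWOT
# WSE series, the two summary measures quoted in the abstract.

def rmse(a, b):
    """Root mean square error between two equal-length series."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

gauge_wse = [10.00, 10.05, 10.12, 10.08, 10.01]  # meters, toy gauge series
swot_wse = [10.02, 10.04, 10.15, 10.06, 10.03]   # meters, toy SWOT series
```

In the study, each SWOT sample in these series is itself an aggregate of many PIXC point measurements over one lake and pass, which is what suppresses the spatial undulations mentioned above.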

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Is Ultrawide-Swath Precise 2D Altimetry Possible using Multiple GNSS-R Satellites in Flight Formation?

Authors: Estel Cardellach, Yiqing Wan, William Hill, Nicolo Bernardini, Martin Unwin, Camille Pirat
Affiliations: Institute of Space Sciences (ICE-CSIC, IEEC), Surrey Satellite Technology Ltd. (SSTL), European Space Agency (ESA)
GNSS reflectometry (GNSS-R) is an opportunistic technique that exploits the signals transmitted by navigation satellites (GNSS constellations) at L-band (~0.2 m wavelength) to remotely sense different variables of the Earth's surface in a cost-effective way (e.g., only the receiving chain is deployed) from small satellites (demonstrated from platforms as small as 3-unit CubeSats). One of the strong points of this remote sensing approach is its multi-static capability: from a single receiver one can potentially collect information from as many reflection points as there are GNSS satellites in view, covering different regions simultaneously. However, being opportunistic signals not designed for remote sensing of the Earth, some of their features are suboptimal (e.g., available power and bandwidth). The forward-scattering geometry of the observations induces what is called the delay-Doppler ambiguity, that is, the impossibility of mapping the received power (at given delay and Doppler frequencies) to a unique point/zone on the surface. In order to investigate whether a small swarm of six GNSS-R satellites could break these ambiguities and thus provide geo-located 2D information, the HydroSwarm mission concept was proposed to ESA in response to the OSIP ‘The Preparation Campaign on CubeSat swarm mission concepts’ Call in February 2023. A short contract (ESA CN 4000142425/23/NL/AS/ov, Oct 2023 - Mar 2024) initiated early studies, including the identification of feasible formation-flight orbit configurations and signal processing approaches, the optimal performance of which was tested with the impulse-response approach. GLITTER is a Horizon Europe MSCA Doctoral Network project which kicked off in March 2024 and will continue studying the potential of swarms of GNSS-R satellites over four years. Should this concept prove feasible and show sufficient performance, it would be very attractive due to its unprecedented broad-swath capabilities.
If each of the transmitter-surface-swarm links could generate 2D information across a ~100 km wide-swath area, the joint swath considering all the simultaneously visible GNSS satellites would cover on the order of ~1000 km, with few gaps and some overlapping regions (good for redundancy). The HydroSwarm study identified a technique to break the ambiguity and generate 2D information, and also proposed a precise-altimetry processing approach. Simple simulation scenarios confirmed that this altimetric technique could optimally yield cm-level altimetric precision at 1 km spatial resolution, over wide-swath zones with surface height variations (anomaly) of the order of 10 cm. The studies are now continuing under the GLITTER project, to add elements not considered during the initial simulations, such as instrumental effects (clock synchronization, antenna phase centre and phase patterns) and other scattering-related effects (coherence issues) that will certainly degrade the optimal performance. The technique will be presented, showing the results from its initial simple simulations and the progress towards adding more realistic effects, while identifying the most critical aspects to be considered.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Examining ice breakup on Arctic rivers using SWOT’s high-resolution altimetry

Authors: Linda Christoffersen, Prof. Louise Sandberg Sørensen, Prof. Peter Bauer-Gottwein, Dr. Karina Nielsen
Affiliations: Technical University Of Denmark, DTU Space, University of Copenhagen
The primary goal of this research is to investigate the capabilities of the Surface Water and Ocean Topography (SWOT) mission in observing Arctic rivers during the critical ice breakup and melt season, combined with optical satellite imagery. Arctic rivers, typically ice-covered for much of the year, undergo a rapid and complex breakup process in spring. This process significantly alters the water surface elevation (WSE) and flow patterns, influencing hydrological and cryospheric processes in these regions. This research aims to provide a deeper understanding of SWOT's capabilities to monitor ice breakup and advance our ability to monitor and predict these changes. Using SWOT’s high-resolution interferometric synthetic aperture radar (InSAR) data, this study explores the potential of SWOT to measure ice and water surface elevations in Arctic rivers. SWOT's spatial resolution of approximately 10 meters allows for precise measurements of ice and water surface elevation, enabling the monitoring of ice dynamics during the breakup season. The level of detail that SWOT can provide is not achievable with conventional satellite radar altimetry. In this research, we investigate how the breakup of ice within a single channel of a braided river system affects the flow dynamics and water levels in neighbouring channels. By analyzing both ice and water surface elevations over time, we track the evolution of the WSE during the ice breakup period. SWOT’s high-resolution data enables us to observe the complex changes in ice cover, from fully intact to partially broken and ice-free conditions, and how these stages impact the flow dynamics within the river system. This work contributes to a better understanding of the hydrological and cryospheric processes at play during the ice breakup season. By tracking the changes in both ice and water elevations on the Lena River, this research enhances our ability to model the flow dynamics of Arctic rivers in a changing climate.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Global assessment of SWOT performance at the small scale via synergy with surface chlorophyll observations

Authors: Aurélien Deniau, Francesco Nencioli, Pierre Prandi, Dr Maxime Ballarotta, Matthias Raynal
Affiliations: CLS, CNES
Altimetry entered a new era with the launch of the Surface Water and Ocean Topography (SWOT) mission in December 2022. With a wide swath of 120 km, the Ka-band Radar Interferometer (KaRIn) provides, for the first time, 2D observations of sea surface elevation with a resolution of the order of the km scale. This allows for the observation of small-scale ocean processes (< 100 km) currently not resolved by nadir altimeter products, either single-satellite along-track or multi-satellite gridded fields, such as the ones generated by the DUACS production system. Currently, one of the main challenges with SWOT observations is to assess the nature of the small-scale features detected by KaRIn and to prove the benefits of two-dimensional measurements compared to standard along-track ones. Specifically, it remains unclear how much of the observed sea level signal at scales below 100 km can be effectively associated with surface ocean currents. To address this challenge, we combined SWOT L3 2 km sea level observations collected during the “Science” phase (21-day repeat orbit) with remote sensing observations of surface ocean tracers. The hypothesis underlying this approach is that the two fields are tightly related: assuming geostrophic balance, SSH is directly associated with surface velocities; in turn (to first order), these currents regulate the geographical distribution of the surface tracers. The tracer included in this study is chlorophyll concentration, retrieved from the CMEMS GlobColour multi-satellite product. As this product has a resolution of the order of the km scale, chlorophyll concentration has the capability to resolve sea surface structures down to scales analogous to those observable by SWOT. A comparison between DUACS SSH and surface chlorophyll was also included in the analysis and used as a reference.
Our analysis investigates the spatial distribution of the correlation coefficient between SSH (both SWOT and DUACS) and chlorophyll concentration over segments of 120 km along the SWOT swath. As the initial results revealed that the distribution of this correlation was primarily driven by the large-scale meridional gradient, the altimetry and chlorophyll fields were band-pass filtered to retain only the scales between 100 and 15 km relevant to the study. Our results show that the SSH/CHL correlations obtained for the unfiltered products have overall similar values and geographical patterns for both the DUACS and SWOT products. The strongest negative correlations are found over the upwelling regions and western boundary currents, while strong positive ones occur in the subtropical bands of the southern Indian and Pacific oceans. The same comparison performed with the band-pass filtered fields shows strongly degraded performance for the DUACS product, while the SWOT product maintains larger correlation coefficients and a similar geographical distribution. This is a clear indication that SWOT observations perform better than the DUACS product at capturing the small-scale circulation at scales between 100 and 15 km over most of the global ocean. A notable exception is the equatorial band, where the DUACS product shows stronger correlations. Detailed analysis of the SSH fields indicated that, in this band, the small scales observed by SWOT are predominantly ageostrophic signals (e.g. internal waves) which are not associated with surface currents and, consequently, do not affect the distribution of the surface CHL field.
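The band-pass step can be sketched as a difference of two moving averages; the window lengths below assume a 2 km along-track posting and are illustrative only, not the filter actually used in the study.

```python
# Crude along-track band-pass as a difference of moving averages: a short
# window suppresses scales below ~15 km, a long window estimates the >~100 km
# trend, and their difference retains the band in between.

def moving_average(x, w):
    """Centered boxcar average (window shrinks near the edges)."""
    half = w // 2
    out = []
    for i in range(len(x)):
        seg = x[max(0, i - half):i + half + 1]
        out.append(sum(seg) / len(seg))
    return out

def band_pass(x, short_w=7, long_w=51):
    """Retain along-track scales between roughly short_w and long_w samples."""
    return [a - b for a, b in zip(moving_average(x, short_w),
                                  moving_average(x, long_w))]

# A pure large-scale ramp is removed almost entirely away from the edges,
# which is the point: the large-scale meridional gradient no longer dominates.
signal = [0.1 * i for i in range(200)]
filtered = band_pass(signal)
```

After filtering both SSH and chlorophyll this way, the correlation computed per 120 km segment reflects only the 15–100 km band of interest.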
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: SWOT KaRIN Level-3 Calibration Algorithm and Updates

Authors: Cécile Anadon, Antoine Delepoulle, Clément Ubelmann, Marie-Isabelle Pujol, Gérald Dibarboure
Affiliations: Collecte Localisation Satellites (CLS), Datlas, Centre National d'Etudes Spatiales (CNES)
Sea Surface Height Anomaly (SSHA) images provided by KaRIn can be biased or skewed by a few centimeters to tens of centimeters. The main source of these errors is an uncorrected satellite roll angle, which explains why the errors in KaRIn images mainly take the form of a linear variation with cross-track distance. There are various other error sources, such as interferometric phase biases or thermo-elastic distortions of the instrument baseline and antennas. To mitigate these topography distortions, a calibration mechanism is applied. Two variants of KaRIn calibration have been developed: the SWOT mono-mission or Level-2 algorithm, and the multi-mission or Level-3 algorithm. As the names imply, the former is used in the SWOT ground segment and L2 products, whereas the latter is specific to Level-3 processors. The L2 algorithm was primarily designed to meet hydrology requirements, and it is considered optional over ocean since it is not necessary to meet SWOT's ocean requirements from 15 to 1000 km. This algorithm is based on SWOT data only, because a ground segment cannot depend on external satellites. In contrast, the Level-3 algorithm was designed to leverage better algorithms and external satellites: not only Sentinel-6, the so-called climate reference altimeter, but also all other altimeters in operation (Sentinel-3A/3B, HY-2B/C, SARAL, CryoSat-2). The L3 correction is generally more robust and stable than the Level-2 variant thanks to the thousands of daily multi-mission crossover segments provided by the constellation. The Level-3 calibration algorithm is regularly updated to resolve failure cases and to improve calibration quality. For version 1 of the Level-3 product, published in July 2024, static and orbit corrections were added to the calibration for geodesy applications.
The version 2 updates, published in December 2024/January 2025, improve the editing applied before the calibration so that only valid pixels are used, improve coverage by using eclipse data in the calibration, and reduce the number of degrees of freedom to focus on systematic errors rather than on geophysical error residuals.
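As a rough illustration of the roll-induced error described above, the sketch below fits and removes a per-line linear cross-track signal by least squares. This is a toy model only: the operational L2/L3 algorithms constrain the estimate with nadir altimetry and multi-mission crossovers, whereas this naive fit would also absorb real topography.

```python
import numpy as np

def fit_linear_crosstrack_error(ssha, xtrack_km):
    """Estimate a per-line linear cross-track signal (bias + roll-like
    tilt) by least squares and subtract it.
    ssha: (n_lines, n_pixels) swath; xtrack_km: (n_pixels,) distances."""
    A = np.column_stack([np.ones_like(xtrack_km), xtrack_km])
    coefs, *_ = np.linalg.lstsq(A, ssha.T, rcond=None)   # (2, n_lines)
    error = (A @ coefs).T
    return ssha - error, coefs

rng = np.random.default_rng(0)
x = np.linspace(10, 60, 100)                   # cross-track distance (km)
truth = 0.02 * rng.standard_normal((50, 100))  # cm-level "ocean" signal
roll = np.outer(0.001 * rng.standard_normal(50), x)  # per-line tilt error
corrected, coefs = fit_linear_crosstrack_error(truth + roll, x)
print(np.std(corrected - truth), np.std(roll))
```

Because the simulated tilt lies exactly in the span of the fitted basis, the residual error after correction is much smaller than the injected roll signal.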
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: The spatial organization of Sargassum aggregations by ocean frontal dynamics: insights from SWOT data

Authors: Pierre-Etienne Brilouet, Julien Jouanno
Affiliations: CNRS - LEGOS, Université de Toulouse, LEGOS (CNES/CNRS/IRD/UT3)
Unprecedented massive landings of Sargassum floating algae have been observed since 2011 off the coasts of the Lesser Antilles, Central America, Brazil and West Africa, with tremendous negative environmental and socioeconomic impacts. Satellite remote sensing is essential for observing, understanding and forecasting the extent of Sargassum blooms in the Atlantic. Currently, Sargassum detection by remote sensing is mainly based on ocean color indices which rely on the difference in optical properties between Sargassum and the surrounding waters. At the Tropical Atlantic basin scale, the observability of Sargassum is assessed with the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Ocean and Land Colour Instrument (OLCI) sensors, with spatial resolutions of 1 km and 300 m, respectively. However, these optical data can only be acquired during the day and are deeply affected by clouds and their shadows, which are prevalent over the region of interest. In this context, we propose an innovative approach based on Synthetic Aperture Radar (SAR) images to improve the detection of Sargassum rafts and better understand the structuring of the mats in the open ocean. Indeed, SAR provides high-resolution observations of the ocean surface during day and night, regardless of weather conditions and cloud cover. The SAR backscatter signal captures the sea surface roughness signature of the Sargassum mats, most probably because they inhibit the small waves at the ocean surface. The capacity of SAR to detect Sargassum is verified using a multi-sensor approach, through comparison with ocean-color-based Sargassum detections. In this study, the emphasis is on the recent Surface Water and Ocean Topography (SWOT) mission. Indeed, the SWOT mission is a breakthrough in radar remote sensing, as the onboard sensors provide, over the same wide swath, the sea surface height (SSH) and the backscatter signal at high resolution (250 m).
This provides an invaluable framework for investigating the spatial organization of Sargassum mats and the associated frontal ocean dynamics. Once our Sargassum identification algorithm was validated, we focused on selected case studies in order to improve basic knowledge of the processes driving Sargassum transport and aggregation, especially the relative contributions of oceanic frontal dynamics and winds in shaping the Sargassum mats at the ocean surface.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: SWOT hydraulic visibility on a densely instrumented reach of the Rhine canal: accurate flow lines and wave propagation signature

Authors: Thomas Ledauphin, Pierre-André Garambois, Kevin Larnier, Charlotte Emery, Maxime Azzoni, Nicolas Picot, Sabrine Amzil, Jérome Maxant, Roger Fjortof, Herve Yesou
Affiliations: ICube Sertit, INRAE, HYdroMatters, CS Group, CNES
The Surface Water and Ocean Topography (SWOT) satellite mission, a joint endeavor between NASA and the French space agency CNES, is set to revolutionize our understanding of Earth’s water cycle by providing unprecedented data over continental water bodies and oceans. Launched in 2022, SWOT measures the elevation of water bodies with exceptional accuracy and resolution, offering a comprehensive view of surface water dynamics across the globe at scales never before achieved from space. SWOT data, mostly based on the fine PIXC product acquired during the Cal/Val orbit (i.e. 101 days from March to July 2023), were exploited in order to analyze the achievable accuracy of hydraulic visibility on a densely instrumented river, and to evaluate the possibility to observe, recognize and characterize hydraulic waves in normal and extreme hydrological contexts. This analysis was carried out over 180 km of the Rhine River at the French-German border. This area is particularly interesting as it comprises two parallel hydraulic objects. The western one, the canalized Rhine, presents a succession of 10 hydropower dams, i.e. a succession of gently sloping basins, and returns to a free river course after the Iffezheim dam. On the eastern side, the Old Rhine, a by-passed segment, flows in more natural conditions, beginning with a first free segment of 50 km, followed by largely anthropized segments of about ten km flowing at an altitude about ten meters lower. The difference in gradient is recovered by a series of metric weirs. At the level of a hydroelectric dam, the north-south offset is 12-14 meters, with a lateral east-west offset of 8-10 m.
To analyze and validate SWOT-derived parameters, i.e. water height and slope values, a database was assembled containing water elevations from 44 water level gauges operated by French and German agencies (VNF, DREAL, WSV, LUBW, EDF …) over the 180 km long river, providing WSE time series at a 15 min time step, all levelled in EGM2008, which is the SWOT reference. Several stand-alone stations and additional gauges were also installed, as well as a few limnimetric scales (OECS). Drone flights were also carried out to measure the water surface elevation and slopes. This very rich dataset provides a relatively rare and spatially dense reference measurement of the WSE profile along the Rhine that is of great interest for analyzing SWOT data. In addition, as the SWORD river database (V12 to V16) presented an inaccurate centerline that did not respect the true river morphology, a customized river database, with a more realistic description of the Rhine's parallel courses and of the position of structures (dams, locks, weirs), was set up and used to reprocess the SWOT HR L2 River products using the open-source RiverObs tool. Precise Water Surface Elevation (WSE) profiles of the river are then obtained at a daily temporal resolution over the Cal/Val period, clearly revealing the channelized Rhine profile with its successive dams, and compared with the numerous available in situ time series. Slope profiles are simply computed by downstream finite difference. Both PIXC and node WSE products are first illustrated in terms of longitudinal profiles. This shows, at a given date, a PIXC snapshot clearly depicting the channelized Rhine WSE profile, closely matching in situ WSE at the gauging stations well distributed over the various reaches. The figure also displays the hydraulic visibility in terms of temporal variation, over the 103 consecutive days of the Cal/Val period, of the longitudinal WS elevation and slope profiles Z(x) and S(x).
These profiles obtained from the node products represent relatively accurate hydraulic information at high spatial resolution; nevertheless, the slope profile variability at high spatial frequency might be attributable to measurement noise, since smoother variations are expected for the gradually varied open channel flows observed. Remarkably, SWOT provides quite fine visibility of temporal variations of WSE at each station. Local WSE measurements are relatively accurate, with a median WSE error of -0.03 m, a standard deviation of 0.42 m, and a 1-sigma (i.e. 68th percentile of absolute errors) of 0.1 m. This is a remarkable accuracy for node data obtained by spatial averaging of the PIXC over a relatively small spatial polygon with the RiverObs algorithm, while the SWOT science requirement on WSE error is 10 cm for data spatially averaged over 1 km². For the node-scale product, i.e. resulting from a spatial aggregation of the PIXC, the median WSE error is -0.07 m, with a standard deviation of 0.24 m and a 1-sigma of 0.12 m. For the average WSE at reach scale, the errors are even lower, with a median WSE error of -0.06 m, a standard deviation of 0.43 m, and a 1-sigma (68th percentile of absolute errors) of 0.10 m, sufficient to identify hydraulic phenomena with relatively good confidence. Remarkably, two hydraulic propagation phenomena are visible in SWOT snapshots over the studied period, and this is corroborated by the close fit with the available in situ elevation data within the reach of interest. First, the signature of a flood hydrograph propagating from upstream to downstream is clearly visible in the WS profiles, with a local intumescence characterized by locally higher slopes. Second, oscillations in the WSE profile are likely due to wave propagation from downstream to upstream following operations on the dam downstream of the reach.
This is corroborated by temporal variations of downstream water levels and also by the engineers in charge of hydraulic structures on the Rhine. This highlights the remarkable capability of SWOT to depict fine signatures associated with hydraulic propagation. This impressive hydraulic visibility of WS deformations with SWOT also enables the identification of surface signatures triggered by natural hydraulic controls, as illustrated with the ”Old Rhine”, whose WSE profile in low-flow conditions shows meaningful spatial variability, with the main slope breaks clearly corresponding to the signatures of the main hydraulic controls associated with morphological variability of the main channel (riffles, contractions, sandbars and ledges), while flatter WS zones correspond to pools. These results obtained over the Rhine Tier 1 Cal/Val site confirm the quality of SWOT measurements in terms of absolute water level. We also observe that the SWOT signal allows the identification of changes in the river profile over time, as a function of flow and river level (slope, flood waves, ...), as well as series of small riffles about one meter high, successive pools, alluvial deposits, sandbanks, and recharge areas. The consolidation of the results has already started using data from the Science phase over this area and over narrow rivers (between 30 and 60 meters wide, such as the Meurthe, Moselle and Ill rivers in France), which confirm the quality of SWOT measurements. Soon, we will continue the comparison with an HR lidar topo-bathymetric DEM and an annual analysis of the SWOT profiles to see if changes in the river surface profile can reflect changes in the riverbed topography. All these very promising results highlight the potential of SWOT data for calibrating large-scale hydraulic models on rivers in the near future.
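The error metrics quoted above (median error, standard deviation, and a "1-sigma" defined as the 68th percentile of absolute errors) and the downstream finite-difference slope are straightforward to reproduce; the node spacing, reach slope and noise level below are hypothetical:

```python
import numpy as np

def wse_error_stats(swot_wse, insitu_wse):
    """Median error, standard deviation, and '1-sigma' taken as the
    68th percentile of absolute errors, as defined in the abstract."""
    err = np.asarray(swot_wse) - np.asarray(insitu_wse)
    return {
        "median_m": float(np.median(err)),
        "std_m": float(np.std(err)),
        "sigma68_m": float(np.percentile(np.abs(err), 68)),
    }

def downstream_slope(wse_m, chainage_m):
    """Slope profile by downstream finite difference (m/m)."""
    return np.diff(wse_m) / np.diff(chainage_m)

# Hypothetical node profile: 200 nodes spaced 200 m along the river
s = np.arange(200) * 200.0
insitu = 150.0 - 1e-4 * s      # gently sloping reach (10 cm per km)
swot = insitu + np.random.default_rng(1).normal(-0.03, 0.1, s.size)
stats = wse_error_stats(swot, insitu)
print(stats)
print(downstream_slope(insitu, s)[:3])
```

On noisy node data, the finite-difference slope amplifies measurement noise, which is consistent with the high-frequency slope variability noted above.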
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Ocean tides at the interface of inland and coastal waters from wide-swath satellite altimetry

Authors: Michael Hart-Davis, Richard Ray, Daniel Scherer, Christian Schwatke, Tamlin Pavelsky, Denise Dettmering
Affiliations: DGFI-TUM, Geodesy and Geophysics Laboratory, NASA Goddard Space Flight Center, University of North Carolina
The land-sea interaction of water is a complex system that is crucial for a wide range of biogeochemical phenomena, ranging from compound flooding to feeding patterns to pollution distributions. Ocean tides are a natural phenomenon that plays a major role in the dynamics of water in the land-ocean continuum. Studying ocean tides from satellite altimetry has traditionally been difficult in coastal regions, mainly due to the complexity of tides in these regions, the limited spatial coverage of in-situ and satellite observations, and land contamination of the satellite radar returns. Significant efforts have been made by modellers to better resolve the ocean tides closer to the coast by employing enhanced algorithms for coastal altimetry and leveraging more accurate bathymetry products. In late 2022, the launch of the Surface Water and Ocean Topography (SWOT) satellite introduced the Ka-band Radar Interferometer (KaRIn), marking a significant leap beyond traditional altimetry by offering high-resolution, two-dimensional sea surface measurements. This presentation demonstrates the use of these wide-swath data for tidal research at unprecedented spatial scales within complex coastal environments. The validation of the results from SWOT is encouraging, as errors with respect to gauges are reduced compared to both global and regional models. The Cal/Val phase of SWOT has also provided the opportunity to evaluate nonlinear tidal effects. The high-resolution products from SWOT are also useful for tidal research within the land-ocean continuum, particularly in estuaries, rivers and fjords. Exploiting the SWOT inland products, we derive empirical estimates of ocean tides in estuarine and river systems, allowing us to calculate the extent of tidal influence within inland waters, which is crucial for a variety of biogeochemical processes and compound flood predictions.
These results demonstrate further opportunities to use wide-swath measurements to advance the understanding of tidal dynamics in these critical areas. Furthermore, the incorporation of current and future wide-swath data into tide models will be crucial to advance the tidal corrections in the coastal region.
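Empirical tide estimation from water-level series of the kind described above is typically a harmonic least-squares fit. The sketch below fits M2 and S2 to a synthetic record; the constituent list, record length and noise level are illustrative assumptions, not the study's actual configuration:

```python
import numpy as np

# Standard constituent frequencies in cycles per hour
CONSTITUENTS_CPH = {"M2": 1 / 12.4206012, "S2": 1 / 12.0}

def fit_tides(t_hours, wl, constituents=CONSTITUENTS_CPH):
    """Least-squares fit of a mean level plus cos/sin pairs per
    constituent; returns amplitude and phase for each constituent."""
    cols = [np.ones_like(t_hours)]
    for f in constituents.values():
        w = 2 * np.pi * f * t_hours
        cols += [np.cos(w), np.sin(w)]
    A = np.column_stack(cols)
    x, *_ = np.linalg.lstsq(A, wl, rcond=None)
    out = {"Z0": float(x[0])}
    for i, name in enumerate(constituents):
        c, s = x[1 + 2 * i], x[2 + 2 * i]
        out[name] = {"amp_m": float(np.hypot(c, s)),
                     "phase_deg": float(np.degrees(np.arctan2(s, c)))}
    return out

# Synthetic 60-day record at 30-min sampling with 5 cm noise
t = np.arange(0, 60 * 24, 0.5)
wl = (1.2 * np.cos(2 * np.pi * t / 12.4206012 - 0.7)
      + 0.3 * np.cos(2 * np.pi * t / 12.0))
sol = fit_tides(t, wl + np.random.default_rng(2).normal(0, 0.05, t.size))
print(sol["M2"]["amp_m"], sol["S2"]["amp_m"])
```

A 60-day record comfortably separates M2 from S2 (their beat period is about 14.8 days); shorter SWOT-sampled series require care with constituent selection and aliasing.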
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Sentinel-3 Next Generation Topography Mission Performance and Uncertainty Assessment (S3NGT-MPUA)

Authors: Noemie Lalau, Thomas Vaujour, Michaël Ablain, Clément Ubelmann, Lucile Gaultier, Fabrice Collard, Nicolas Taburet, Julien Renou, Maxime Vayre, Emma Woolliams, Sajedeh Behnia, Frédéric Nouguier, François Boy, Louise Yu, Alejandro Egido, Craig Donlon, Robert Cullen
Affiliations: Magellium, DATLAS, ODL, CLS, NPL, Ifremer, CNES, ESA-ESTEC
The Sentinel-3 Next Generation Topography (S3NGT) mission is designed to ensure the continuity of the existing Copernicus Sentinel-3 nadir-altimeter measurements from 2030 to 2050 while also improving measurement capabilities and performance. This mission consists of two large spacecraft equipped with an across-track interferometric swath altimeter (SAOOH), a synthetic aperture radar (SAR) nadir altimeter (Poseidon-5, POS-5), a multi-channel microwave radiometer, and a precise orbit determination suite. The SAOOH instrument builds upon the swath altimetry advances pioneered by the Surface Water and Ocean Topography (SWOT) mission, launched in December 2022 with the KaRIn instrument. However, SAOOH differs from KaRIn in several key aspects, including a shorter baseline (3 m instead of 10 m) and a different antenna signal-to-noise ratio (SNR). Given these differences, it is essential to carefully assess the performance of the S3NGT mission before its launch to ensure its success. The ESA-supported S3NGT preliminary mission performance and uncertainty assessment (S3NGT-MPUA) study is ongoing, and significant progress has been made since its initiation in May 2023. Here, we outline the key achievements for each objective of this project. The first objective of the study is to conduct a preliminary assessment of the performance of S3NGT Level-2 products before the mission's launch. This assessment focuses on ocean surfaces and inland waters. To achieve this, we first developed a strategy to generate S3NGT-like data. For ocean surfaces, we used inflight data from the SWOT mission (Level-3 products) to create a first dataset and an Ocean General Circulation Model (OGCM) to simulate S3NGT-like data for a second dataset. We introduced S3NGT-specific instrumental uncertainties into the OGCM data and the SWOT inflight Level-3 data using a scientific simulator developed by ODL.
These two complementary approaches provide lower and upper bounds of S3NGT performance, respectively. Additionally, we evaluated the behavior of swath measurements under high sea state conditions by degrading SWOT Level-1B data with instrumental characteristics similar to those of SAOOH. For inland waters, we developed a similar strategy to generate S3NGT-like data from SWOT L2 Pixel Cloud products, incorporating specific S3NGT uncertainties. This activity also included defining key metrics to describe the mission’s performance for several variables, including sea surface height, sea state, systematic errors, and inland water surface elevation. We evaluated these metrics with our generated S3NGT-like data and compared the results to the S3NGT Mission Requirements Document (MRD). This approach provides valuable insights into the expected capabilities of S3NGT products and highlights areas where the mission design can be refined to meet operational requirements. The second study objective is to develop a comprehensive uncertainty model and budget following established metrological principles. The first part entails a metrological assessment of the S3NGT mission, while the second part focuses on verifying and validating the S3NGT uncertainty budget. This has involved creating a clear metrological traceability diagram of the swath altimeter and using it to identify (and later quantify) individual sources of uncertainty. The third study objective is to evaluate options for in-orbit calibration of the S3NG-TOPO mission. This includes methods such as using orbit crossovers to correct for known systematic errors in water elevation over the ocean and inland water bodies. The evaluation must consider the different latencies of the S3NG-TOPO products. Indeed, given the stricter latency requirements for the S3NG-TOPO mission compared to SWOT, the number of orbit crossovers available for cross-calibration measurements is limited.
The final study objective is to assess the uncertainty in cross-calibrating S3NG-TOPO with the current S3 constellation and reference mission (S6) using established or innovative methods. The primary goal is to ensure the continuity of S3NGT measurements, including an effective cross-calibration between the current S3 constellation and the future S3NG-TOPO constellation, particularly for the nadir altimeter system incorporating the microwave radiometer (MWR).
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: New Insights into Cryosphere Applications of the Surface Water and Ocean Topography (SWOT) Mission

Authors: Mohammed Dabboor
Affiliations: Science And Technology Branch, Environment And Climate Change Canada, Government of Canada
Monitoring sea ice is critical for advancing our understanding of climate change, maintaining polar ecosystems, and ensuring safe navigation in the Arctic and Antarctic regions. Satellite remote sensing technologies play a pivotal role in sea ice monitoring by providing consistent and reliable observations. These technologies offer detailed information on ice type, thickness, extent, and movement, enabling scientists to track changes over time and identify emerging trends. Such data are essential for improving the accuracy of climate models, supporting ecosystem studies, and enhancing the safety and efficiency of maritime operations in these challenging environments. By offering a comprehensive view of sea ice dynamics, satellite remote sensing significantly contributes to informed decision-making and policy development for polar and global sustainability. The NASA/CNES international Surface Water and Ocean Topography (SWOT) satellite mission, launched on December 16, 2022, was initially designed to support ocean and hydrology applications. Its primary sensor, a Ka-band near-nadir radar system, provides across-track interferometric measurements for two swaths on either side of the satellite's nadir. Beyond its primary objectives, the SWOT mission has demonstrated promising potential for applications in polar and Nordic environments. Its high-resolution data could enhance our understanding of ice dynamics, particularly in the Canadian Arctic. By integrating SWOT's data with existing satellite imagery, such as from conventional radar systems, it is possible to achieve a more comprehensive understanding of sea ice conditions. This integration could support strategic and informed decision-making for Arctic management, navigation, and ecosystem protection, addressing both regional and global challenges. This presentation offers a preliminary assessment of the feasibility of using data products from the SWOT satellite for sea ice analysis in the Arctic. 
We provide both qualitative and quantitative evaluations of SWOT measurements for ice detection, including an analysis of scattering profiles from the Ka-band radar across various ice types. Additionally, we incorporate an analysis of coincident imagery from the SWOT satellite and the RADARSAT Constellation Mission (RCM). This combined approach leverages SWOT's high-resolution data and RCM's advanced radar imaging capabilities to enhance the detection and characterization of sea ice. By integrating these complementary datasets, we aim to improve our understanding of ice dynamics, contributing to more robust monitoring and modeling of Arctic ice conditions. Preliminary results from coincident SWOT and RCM observations over the Beaufort Sea demonstrate the promising capability of SWOT for detecting multiyear ice floes and leads.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Development of an integrated method to validate SWATH altimetry over inland water: A new approach from SWOT Cal/Val first results

Authors: Valentin Fouqueau, Gabrielle Cognot, Eliot Lesnard-Evangelista, Jean-Christophe Poisson, Nicolas Picot, François Boy, Roger Fjortoft, Laurent Froideval, Christophe Connessa
Affiliations: Vortex-io, CNES, CNRS-INSU
For many years now, satellite altimetry has been increasingly used to monitor inland waters all over the globe. The SWOT mission represents a technological breakthrough compared to all previous altimetry missions, especially for hydrology. Inland water elevation measurements are no longer taken at specific points corresponding to intersections between the satellite ground track and rivers, but across entire river segments. This technology paves the way for a significant increase in altimetric measurements of inland waters, enabling unprecedented new applications of altimetry data. The SWOT mission has demonstrated the full potential of this measurement technology, as evidenced by the decision to equip the next-generation Sentinel-3 mission of the Copernicus program with swath altimeters. The validation of swath altimetry data presents new challenges. Advances in satellite measurement technology necessitate a novel approach to in-situ data collection for swath altimetry validation. Traditionally, validating nadir altimetry required punctual in-situ measurements, which were directly compared to satellite data. More advanced processing methods, such as those developed in the St3TART project, have further refined nadir altimetry validation by combining fixed and moving sensor measurements. For swath altimetry, however, the process requires the acquisition of longitudinal water surface height (WSH) profiles that are temporally and spatially collocated with the satellite overpass. This can be achieved through field campaigns deploying moving sensors, such as the vorteX-io altimeter, to capture WSH along these profiles, or other types of moving sensors (CalNaGeo, Cyclopée, airborne LiDAR, etc.). Such an approach was successfully employed during the SWOT Cal/Val campaign. However, the main limitation of these methods is the logistical demand, as field teams must be mobilized for each swath altimeter pass to capture the necessary longitudinal profiles.
Moreover, the acquisition times of such in-situ means are not directly comparable to satellite measurements. In the framework of the SWOT Calibration and Validation phase, vorteX-io has developed a method to validate swath altimetry measurements over extended river segments. This approach relies on robust in-situ instrumentation using two sensors developed by the vorteX-io team. The core concept is to reconstruct the river's longitudinal profile over a long section for each swath altimeter overpass. This reconstruction combines simultaneous measurements from fixed micro-stations with data from moving sensors, using a historical database of longitudinal profiles. This database, created prior to the Cal/Val period, captures river topography across a wide range of water levels to ensure comprehensive coverage. In this presentation, we will detail the instrumentation required to construct the combined profiles, outline the method, and discuss the results obtained from the existing super sites. The first case study focuses on Marmande, a Cal/Val super site used for both SWOT and Sentinel-3 Cal/Val. Additionally, we will present the goals of extending this approach to other rivers in various regions to further validate the swath altimetry validation methodology. This effort to develop a network of super sites is being carried out in collaboration with CNES as part of the S3NG-T project.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Monitoring the Arctic Ocean with SWOT - A comparison with conventional altimeter measurements in the ice-covered ocean

Authors: Felix Müller, Denise Dettmering, Florian Seitz
Affiliations: Technical University of Munich (DGFI-TUM)
In the last decades, satellite altimetry has become a significant data source for monitoring the polar sea level and ocean currents. Since the early nineties, satellite altimetry has been used for monitoring Arctic Ocean sea surface heights (SSH), the declining sea ice cover and the changing ocean circulation, with continuously improved observation techniques and increased accuracy. While classical pulse-limited altimeters were used at the beginning (e.g. on ERS-2 and Envisat), the era of Delay-Doppler altimeters began in 2010 with the ESA Earth Explorer mission CryoSat-2, which opened new possibilities, e.g., in lead detection or freeboard calculation. A game changer is the use of swath altimetry, available since December 2022 from the Ka-band radar interferometer observations of SWOT, also over ice-covered oceans. With SWOT it is now possible to retrieve SSHs over a 2D swath and not only along a 1D pass. Thanks to its various datasets, it is possible to capture pixel-based height information at a spatial resolution of 250 metres in the Arctic peripheral seas up to a latitude of 78°N. Although SWOT does not cover the central Arctic Ocean, it nevertheless opens up new possibilities for the detection of leads (i.e., elongated water openings within the sea ice) to gain new insights into the ocean’s dynamic topography and, further, into the Arctic Ocean circulation. This contribution utilises the SWOT Level-2 LR Unsmoothed and Expert ocean datasets to investigate the lead detection and sea surface height determination capabilities of SWOT. The work, embedded in the SWOT Science Team within the SMAPS project, aims to gain a first impression of the extent to which SWOT is suitable for polar ocean applications. Therefore, SWOT observations will be compared pointwise with conventional Ka-band altimetry (i.e.
SARAL) and Delay-Doppler Ku-band altimetry from CryoSat-2 and Sentinel-3 (Sea-Ice Thematic Product), as well as with lidar altimetry from NASA's ICESat-2, in terms of computed sea level anomalies (SLA), sigma0 (i.e. backscatter) and water surface (i.e. leads, polynyas) detections. The first step is to find crossover locations between SWOT and the other missions. For this task, both SWOT's Cal/Val (1-day repeat) and Science (21-day repeat) phases are considered. In order to have the same sea ice conditions during the overflights, a maximum acquisition time difference of 30 minutes is set. Despite these temporal constraints, a sufficient number of suitable comparisons can be identified, for example about 80 for ICESat-2 and about 600 for SARAL during the Cal/Val phase. Only crossover locations with enough valid nadir observations are kept, to enable the creation of reliable statistics. In the next step, pointwise comparisons are performed for all crossover locations. This includes signal and data analyses by means of RMSE or correlation computations between SWOT swath data and SLA observed by contemporaneous satellite altimetry missions (i.e., SARAL, CryoSat-2, Sentinel-3A/B, ICESat-2, and SWOT nadir). For this purpose, the same atmospheric and geophysical corrections are applied to the SWOT KaRIn observations before they are interpolated to the conventional altimeter observation locations. Moreover, analyses are carried out with regard to backscatter and the results of unsupervised nadir altimeter surface type classifications (Müller et al., 2017). It will be analysed, for example, to what extent the different missions detect and reproduce open water features such as leads of different sizes.
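The crossover screening and agreement metrics described above can be sketched as follows; the epochs and SLA values are synthetic, and the function names are our own:

```python
import numpy as np

def match_crossovers(t_swot, t_nadir, max_dt_min=30.0):
    """Keep crossover pairs whose acquisition times differ by at most
    max_dt_min minutes, so both missions see the same sea-ice state.
    t_* are arrays of crossover epochs in minutes."""
    dt = np.abs(np.asarray(t_swot) - np.asarray(t_nadir))
    return np.flatnonzero(dt <= max_dt_min)

def sla_agreement(sla_swot, sla_nadir):
    """RMSE and Pearson correlation between collocated SLA samples."""
    d = np.asarray(sla_swot) - np.asarray(sla_nadir)
    rmse = float(np.sqrt(np.mean(d ** 2)))
    corr = float(np.corrcoef(sla_swot, sla_nadir)[0, 1])
    return rmse, corr

# Hypothetical crossover epochs (minutes) for SWOT and a nadir altimeter
t_swot = np.array([0.0, 100.0, 250.0, 400.0])
t_sar = np.array([20.0, 180.0, 260.0, 600.0])
keep = match_crossovers(t_swot, t_sar)
print(keep)   # only the pairs within the 30-minute window survive
```

In practice the SWOT swath values would first be interpolated to the nadir observation locations, with identical geophysical corrections applied to both, before `sla_agreement` is computed.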
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Performance of the Surface Water and Ocean Topography (SWOT) Mission for Monitoring Small Lakes in West Africa

Authors: Félix Girard, Laurent Kergoat, Ibrahim Mainassara, Maxime Wubda, Hedwige Nikièma, Amadou Abdourhamane Touré, Julien Renou, Maxime Vayre, Nicolas Taburet, Nicolas Picot, Manuela Grippa
Affiliations: Géosciences Environnement Toulouse (GET), Collecte Localisation Satellites (CLS), HydroSciences Montpellier (HSM), Université Joseph Ki-Zerbo, Université Abdou Moumouni, Centre National d’Etudes Spatiales (CNES)
In West Africa, the volume variability and hydrological functioning of the region’s thousands of lakes are poorly monitored. The recently launched Surface Water and Ocean Topography (SWOT) mission, carrying a wide-swath Ka-band radar interferometer, offers new opportunities for large-scale monitoring of lake water resources and overcomes the spatial coverage limitations of nadir altimeters. Here, we evaluate the performance of two SWOT data products, namely the PIXel Cloud (PIXC) and the Lake Single Pass (LakeSP), over sixteen small and medium-sized lakes in the Central Sahel. Excellent agreement of elevation with in-situ measurements is found for both products, with 1-sigma errors (68th percentile of absolute errors) between 0.06 and 0.11 m, consistent with the mission science requirements. When compared to Sentinel-3 elevation, the PIXC product shows better results than LakeSP, with 1-sigma differences of 0.16 m and 0.32 m, respectively. SWOT LakeSP surface area estimates show large variability and a general overestimation, with a median bias of 17.2% compared to Sentinel-2 measurements. SWOT pixel classification errors, related to bright land contamination or dark water due to low signal return, are found to affect both elevation and area estimates, especially for LakeSP. Restrictive spatial filtering, combined with the use of the appropriate quality flags included in the SWOT PIXC product, makes it possible to mitigate the classification errors and produce robust water surface elevation estimates. Elevation-area relationships derived from combined SWOT PIXC and Sentinel-2 data compare well with in-situ relationships (RMSEs below 0.28 m), highlighting the capabilities of SWOT for monitoring lake volume changes once complemented by external water masks. Finally, the SWOT PIXC product is used to derive seasonal water level amplitude estimates for more than 600 lakes of various sizes, 80% of which are relatively small (< 1 km²).
With 25% of the study lakes showing water depletion greater than the average evaporation-induced water loss, these results provide unprecedented large-scale information on lake water use and also highlight the spatial coverage potential of SWOT.
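The 1-sigma error metric used above (the 68th percentile of absolute errors) can be sketched as follows; the nearest-rank percentile convention is an assumption for illustration, and the authors' exact estimator may differ:

```python
import math

def one_sigma_error(swot_wse, gauge_wse):
    """1-sigma error as defined in the abstract: the 68th percentile
    of absolute differences between SWOT and in-situ water surface
    elevations (m), using a nearest-rank percentile."""
    abs_err = sorted(abs(s - g) for s, g in zip(swot_wse, gauge_wse))
    k = max(0, math.ceil(0.68 * len(abs_err)) - 1)
    return abs_err[k]
```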
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Using SWOT Data to Assess the Impact of Ocean Tides and Sea Level Change on Upstream Rivers and Estuaries

Authors: Robert Steven Nerem, Toby Minear, Martin Kolster, Eduard Heijkoop
Affiliations: University of Colorado
The upstream-most backwater effects of ocean tides on inland rivers and estuaries are presently not well known and pose an additional threat under sea level rise. As sea levels are projected to rise by as much as ~1 m by the year 2100, tides will propagate farther upstream, extending into freshwater waterbodies and wetland areas and adding to flooding threats far from the sea. In addition, even small increases in salinity can have serious impacts on groundwater, wetlands, agriculture and the human populations living in these areas. With SWOT, we can, for the first time, observe the extent to which tidal backwater effects influence upstream estuaries and rivers. Many factors shape this effect, including the height of the high tide on a particular day, the slope of the geoid, potential storm surge, and the shape of the estuary and its connectivity to nearby wetlands, among others. We are investigating the use of SWOT data to map the present-day spatial extent of tidal influence on upstream surface waters, as well as the spatially distributed mean water surface elevation and tidal range (the difference between low and high tides) from the coast through the estuary and river system. These three key variables can then be used to project the impacts of sea level rise on these inland surface water regions. We will show initial results from this study and discuss plans for future research.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Assessing SWOT satellite performance against tide gauge observations in the Western Mediterranean Sea

Authors: Diego Vega Gimenez, Antonio Sánchez Román, Laura Gómez Navarro, Angel Amores, Ananda Pascual
Affiliations: IMEDEA (UIB-CSIC)
Over the past three decades, radar altimetry has revolutionized sea level monitoring globally and regionally. However, traditional altimeters face challenges near the coast, particularly within 20 km of the shoreline, where land contamination and complex geophysical conditions degrade data quality. This gap has limited the understanding of coastal sea level dynamics, crucial for assessing hazards, tracking climate-driven trends, and improving coastal resilience. The Western Mediterranean Sea, with its intricate bathymetry and energetic mesoscale features, provides a valuable testbed for evaluating advancements in altimetry. The Surface Water and Ocean Topography (SWOT) mission, developed through international collaboration, employs the innovative Ka-band Radar Interferometer (KaRIn) to produce high-resolution, two-dimensional sea surface height (SSH) maps. This breakthrough enables the resolution of coastal mesoscale and sub-mesoscale phenomena, offering new opportunities to monitor sea level variability close to shore. This study utilizes SWOT data from its intensive 90-day Calibration/Validation (Cal/Val) phase (April–July 2023) to validate Level-3 Sea Level Anomalies (SLA) against observations from 21 tide gauges distributed along the Western Mediterranean coasts. These tide gauge records, provided by the Copernicus Marine Service, were adjusted to remove atmospheric contributions from pressure and wind, isolating the sea level signal for direct comparison. Results reveal strong correlations and low Root Mean Square Differences (RMSD) between SWOT-derived SLAs and tide gauge data, demonstrating the mission’s capability to capture coastal sea level anomalies with high precision. 
SWOT’s swath-based observation method overcomes the spatial limitations of traditional nadir altimetry, extending valid measurements closer to shore and improving the resolution of small-scale coastal processes shaped by bathymetry, shoreline configuration, and regional atmospheric dynamics. This research highlights the significant advancements made possible by SWOT in coastal altimetry. The mission’s ability to provide high-resolution SSH measurements near the coast bridges critical observational gaps, offering valuable insights into coastal variability and mesoscale processes. By validating SWOT’s performance in the Western Mediterranean, this study underscores its pivotal role in advancing coastal monitoring and informing strategies for managing the impacts of sea level rise and climate change.
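The pressure part of the atmospheric adjustment applied to tide gauge records is typically an inverse-barometer correction; the sketch below illustrates that idea with nominal constants (the actual Copernicus Marine Service processing, which also removes wind effects, may differ):

```python
RHO_SEA = 1025.0   # kg m^-3, nominal seawater density (assumption)
G = 9.81           # m s^-2
P_REF = 1013.25    # hPa, reference atmospheric pressure

def inverse_barometer(p_hpa):
    """Static inverse-barometer sea level response (m) to local
    atmospheric pressure: roughly -1 cm of sea level per +1 hPa
    of pressure above the reference."""
    return -(p_hpa - P_REF) * 100.0 / (RHO_SEA * G)

def remove_pressure_signal(sla_m, p_hpa):
    """Subtract the pressure-driven signal from a tide gauge SLA
    record so it can be compared directly with altimetry."""
    return [s - inverse_barometer(p) for s, p in zip(sla_m, p_hpa)]
```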
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Flood event analysis based on SWOT PIXC products: the May 2024 Sarre (northeastern France) and October 2024 Valencia Province (Spain) flood events

Authors: Sabrine Amzil, Jérome Maxant, Pierre Andre Garambois, Kevin Larnier, Alessandro Caretto, Maxime Azzoni, Nicolas Picot, Herve Yesou
Affiliations: ICube Sertit, INRAE, HydroMatters, CNES
For several decades now, Earth observation data has been integrated into services that support relief efforts following natural or man-made disasters, such as the International Charter on Space and Major Disasters or the European Commission's Copernicus Emergency Management Service (EMS) Rapid Mapping. In the case of floods, these data can be used to map the extent of the flooding and to characterize the areas affected. The resulting maps can support decision-making at the highest levels and help make the best possible use of human and material resources. However, current Sentinel-1/2 data, or higher-resolution imagery such as Pleiades Neo, CSK or TSX, do not allow the hazard to be characterized in its entirety. In fact, of the key variables defining a flood hazard, only the flood extent and the duration of submersion are directly accessible from Earth observation data. What is missing are the important parameters of water height and current speed, which determine how dangerous an event will be, bearing in mind, for example, that a water height of 50 cm combined with a current of 0.5 m/s represents a great danger to people, and that a car starts to float at a depth of 30 cm. The recent, innovative SWOT mission could go some way towards filling this gap by providing information about water levels. This is what we propose to evaluate, obviously bearing in mind the scientific nature of the mission, which is far removed from the expectations of an operational mission in terms of revisit, responsiveness and access to NRT data. As a reminder, the Surface Water and Ocean Topography (SWOT) satellite mission, a joint endeavor between NASA and the French space agency CNES, is set to revolutionize our understanding of Earth's water cycle by providing unprecedented data over continental water bodies and oceans. Launched in December 2022, it was designed to monitor watercourses over 100 m wide, and water bodies with a surface area greater than 6.25 ha, on a global scale.
SWOT measures the elevation of water bodies with exceptional accuracy and resolution, offering a comprehensive view of surface water dynamics across the globe at scales never before achieved from space. Depending on the location within the swath and on crossing orbits over a given area, the revisit during a 21-day cycle can reach up to 4 observations at temperate latitudes. This gives the opportunity to catch some flood events. Thus, a few events of various extents have been identified and the corresponding SWOT data collected, together with data acquired under normal hydrological conditions. For each site, exogenous information has been gathered where available, such as water level time series from gauge stations, HR DEMs, quasi-synchronous SAR and/or optical imagery, and, in a few cases, EMS Rapid Mapping products. Results from two events of different typology are presented. The first corresponds to the cross-border Sarre River flood event of May 2024, for which the Copernicus EMS was triggered over both France and Germany (EMSR722 and EMSR733). The second corresponds to the dramatic flood event affecting the Valencia province (Spain) in October 2024 (EMSR773 and Charter activation 924), particularly over the coastal plain of the Albufera. The analysis of the SWOT data was carried out on the PIXC product, particularly exploiting classes 3 and 4 of the pixel cloud, corresponding respectively to water-near-land and open water. In the first stage, the distribution and location of the PIXC classes were analyzed and compared with the quasi-synchronous images and the EMSR delineation products. In a second stage, the altitude of the water bodies (water surface elevation, WSE) was compared with the in-situ gauge values. Finally, an analysis of fine topographic data, i.e. lidar DEMs, was carried out to analyze the highest water altitudes observed by SWOT and to assess the heights of submersion.
Although the SWOT system, as mentioned above, was developed for a global approach focusing on rivers over 100 m wide, the detection and recognition of water bodies means that, as has already been shown on small rivers, overflows can be observed down to a total width of around fifty meters. Below this width, it becomes difficult to recognize a flooded area, or the flooded area appears discontinuous. The discontinuity of the flooded zone over small areas can be linked to several factors. Firstly, the riparian vegetation bordering the watercourse limits observation capabilities. There may also be low-return (dark water) phenomena on very smooth water. The larger spreading of water over the surface is well observable. In terms of WSE, the PIXC, from the L2_HR_PIXC product, presents very coherent altitude information, as does the derived water depth. We will soon continue the comparison with results from simplified 2D hydrodynamic modeling, as well as with those derived from the forecasting models used by flood warning services.
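The class-based extraction of flood water from the pixel cloud, followed by a robust WSE estimate, can be sketched as follows; minimal tuples stand in for the real PIXC records, and the median estimator is an illustrative choice rather than the authors' exact method:

```python
# Classification convention per the abstract:
# 3 = water-near-land, 4 = open water.
WATER_CLASSES = {3, 4}

def flood_water_pixels(pixc):
    """Keep only the water pixels from a hypothetical minimal pixel
    cloud of (classification, wse_m) tuples."""
    return [p for p in pixc if p[0] in WATER_CLASSES]

def median_wse(pixels):
    """Median water surface elevation of the retained water pixels,
    a robust per-water-body height estimate."""
    h = sorted(p[1] for p in pixels)
    n, mid = len(h), len(h) // 2
    return h[mid] if n % 2 else 0.5 * (h[mid - 1] + h[mid])
```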
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Combining S6 FFSAR and SWOT Data to Achieve Near Ground-Accurate Water Extent and Level Measurements for Terrestrial Water Storage Targets From Spaceborne Measurements

Authors: Salvatore Savastano, Ollie Holmes, Adrià Gómez Olivé, Ferran Gibert, Maria José Escorihuela
Affiliations: isardSAT Ltd., isardSAT S.L.
Freshwater resources provide the backbone of civilisation, and their careful management partly led to the rise of some of the most prominent civilisations in history, making the management of this vital resource critical to continuing prosperity. However, the influence of climate change is increasing the complexity of this management; while the total amount of precipitation across a region has remained relatively constant, the stability of rainfall has changed with intense downpours and droughts in different areas [1]. Therefore, managing freshwater resources is becoming a regional problem rather than a historically local problem. While networks of in-situ gauges exist, their installation in every terrestrial water storage location is impractical, especially in remote areas and less-developed countries, hindering comprehensive regional analysis. However, the combination of Sentinel-6 (S6) data processed through the Fully Focused Synthetic Aperture Radar (FFSAR) processor and Surface Water and Ocean Topography (SWOT) data can offer near ground-accurate data for entire regions, regardless of cloud cover, and with sufficiently short revisit times for capturing the dynamic changes in water bodies over time. This not only enhances scientific data for quantifying the impact of climate change on terrestrial water storage but also provides a regional analysis of freshwater resources for governments to optimise their management, potentially allowing for the capability of allocating resources from areas of surplus to areas of scarcity. This study investigates multiple targets across the Ebro basin, comparing each mission's water level measurements to in-situ gauges provided by SAIH Ebro and water extent measurements to optical data. Due to the different measurement approaches, the results have differing degrees of accuracy and sources. On the one hand, S6 carries a nadir-looking altimeter, with an across-track coverage of approximately 0-10 km on either side of the nadir. 
Data from S6 were collected at the L1A stage and processed through the FFSAR ground processor and subsequent algorithms developed by isardSAT. The FFSAR processor provides a significant advantage over previous processors, as phase information can align multiple along-track measurements to the same along-track cell, increasing the resolution up to the theoretical limit of 0.5 m while simultaneously reducing noise power and enhancing the signal-to-clutter ratio. On the other hand, SWOT is a side-looking SAR mission with an across-track coverage of approximately 5-60 km on either side of the nadir. SWOT data were collected from the L2 HR PIXC products provided by JPL/CNES [3]. While the PIXC product provides water level measurements, water extent was extracted and validated against optical data and the LakeSP results from the CNES team [4] for a customised analysis. The nadir look angle and direct time measurement approach of the S6 FFSAR data provide superior height measurements over SWOT's interferometric phase measurements. While SWOT does provide increased coverage, using Sentinel-3 and CryoSat-2 data (both capable of undergoing FFSAR processing) together with S6 data should provide sufficient coverage. However, SWOT's off-nadir look angle and dual-channel power measurements provide superior water extent measurements with significantly increased coverage. This is a result of the stringent requirements for S6 FFSAR extent measurements, which require targets to be sufficiently off-nadir to preserve across-track resolution and prevent iso-range ambiguities while remaining close enough to nadir to preserve adequate signal-to-clutter ratios, severely limiting coverage. Furthermore, Sentinel-3 and CryoSat-2 produce distorted water extent results due to aliasing issues arising from their closed-burst transmission mode.
Utilising altimeter data with FFSAR processing for water level measurements improves accuracy by approximately 5-10 times over SWOT data with sufficient coverage when using multiple missions. Similarly, utilising SWOT data for water extent measurements improves accuracy by approximately 100 times over S6-FFSAR data with superior coverage. Therefore, the synergism of altimeter data with FFSAR processing and SWOT data can provide near ground-accurate water extent and level measurements for most terrestrial water targets. [1] Richard P Allan. Amplified seasonal range in precipitation minus evaporation. Environmental Research Letters, Volume 18, Number 9. August 2023, DOI: 10.1088/1748-9326/acea36 [2] Adrià Gómez Olivé, Ferran Gibert, Albert Garcia-Mondéjar, Charlie McKeown, Malcolm MacMillan, Michele Scagliola. “Inland Water Extent Measurements for the CRISTAL Mission.” 30 Years of Progress in Radar Altimetry Symposium. 2-7 September 2024 | Montpellier, France. [3] Brent Williams. JPL/CNES. PIXC Validation, SWOT Validation Meeting, Chapel Hill, NC. June, 2024. [4] Claire Pottier, Roger Fjortoft. CNES. Lake Product Validation, SWOT Validation Meeting, Chapel Hill, NC. June, 2024.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Long swells and extreme storms: SWOT level 3 wave spectra for the calibration of climate extremes

Authors: Fabrice Ardhuin, Taina Postec, Guillaume Dodet, Beatriz Molero, Adrien Nigou
Affiliations: LOPS, CLS
As ocean altimetry is pushed to higher and higher resolution, the SWOT Low Rate ocean products, posted at 250 m resolution, resolve many processes that contribute to the surface elevation, including wind-generated waves with wavelengths longer than 500 m. A preliminary analysis (Ardhuin et al., Geophys. Res. Lett., 2024, https://doi.org/10.1029/2024GL109658) has revealed that SWOT is capable of measuring these long swells even at significant wave heights as low as 3 cm, a capability unique among open-ocean measurements (the noise floor of in-situ drifting buoys is close to 10 cm). As a result, the long swells that radiate from the most extreme storms are clearly visible in SWOT data, providing a unique opportunity for validating our understanding of extreme storm evolution and for calibrating models and other measurement records. For this purpose, CNES has supported the design and production of dedicated SWOT Level 3 wind-wave products that contain estimates of spectra for waves longer than 500 m. In our analysis we particularly focus on storm Bolaven in October 2023, the most severe storm of that year, with wave heights exceeding 20 m in the North Pacific according to numerical models and radiated swells with periods up to 26 s (a wavelength of 1200 m), and on storm Rosemary (June 2023). We particularly analyze the swells that propagate directly from the storms to the SWOT measurement locations, and the swells reflected off the coasts of Central and South America in the case of Bolaven. The spatial pattern of swell radiated from the storm is generally broader than predicted by models using exact non-linear interactions, suggesting some unknown scattering effects. The far-field propagation across ocean basins also reveals deficiencies in the usual numerical propagation schemes and possible biases in swell dissipation estimates.
At present, swells too close to the storm are apparently too steep to be properly imaged by KaRIn, and some correction of the swell amplitude will be needed. Finally, once a dissipation model is adjusted, we can link the far-field energy radiated from the storm to the storm intensity. This relationship is used to rank the intensity of the most severe ocean storms. A similar approach could be applied to the wide global coverage of seismic stations using primary microseisms (Aster et al., Nature Communications, 2023, https://doi.org/10.1038/s41467-023-42673-w) once their amplitude has been calibrated to a swell amplitude using SWOT L3 spectral data.
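For context (these relations are standard ocean-wave physics, not taken from the abstract), the conversion between swell period and wavelength, and the speed at which swell energy travels, follow the deep-water dispersion relations:

```latex
% Deep-water surface gravity waves of period T
L = \frac{g T^2}{2\pi}, \qquad c_g = \frac{g T}{4\pi}
```

With g ≈ 9.81 m/s², a 20 s swell has a wavelength of roughly 625 m and its energy travels at about 15.6 m/s; because longer-period swells travel faster, the arrival time of each frequency in the far field constrains the distance and origin time of the generating storm.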
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: C.06.06 - POSTER - Global Digital Elevation Models and geometric reference data

One of the most important factors for the interoperability of any geospatial data is accurate co-registration. This session will be dedicated to recent developments and challenges around the availability, quality, and consistency of reference data required for accurate co-registration of remotely sensed imagery at global scale. It will provide a forum for the results of studies performed under CEOS-WGCV, EDAP, and other initiatives aiming at the quality assurance and harmonisation of DEMs, GCPs and reference orthoimagery used by different providers worldwide.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: AI-Driven Landslide Susceptibility and Hazard Mapping for the CopernicusLAC Hub

Authors: Caterina Peris, Davide Colombo, Paolo Farina, Michael Foumelis
Affiliations: Indra Espacio S.l.u., Geoapp, Aristotle University of Thessaloniki (AUTh)
The Copernicus Latin America and Caribbean (LAC) initiative leverages Earth Observation (EO) data to enhance Disaster Risk Management (DRM) and Disaster Risk Reduction (DRR) across one of the world’s most disaster-prone regions. Through the Copernicus LAC Hub in Panama, it provides scalable EO services, such as terrain motion analysis, using open-access Sentinel data and co-developed methodologies tailored to local needs. This initiative sets the stage for advanced applications such as AI-driven landslide susceptibility and hazard mapping. Landslide susceptibility refers to the likelihood of landslide occurrences in a given area, determined based on local terrain conditions, geological characteristics, and triggering factors such as rainfall or seismic activity. Assessing landslide susceptibility is a vital step in understanding and mitigating risks, especially in regions prone to geological hazards. Traditional methods for evaluating susceptibility can be time-consuming and subject to human bias, which limits their scalability and accuracy. In this work, we introduce an AI-driven approach leveraging the Random Forest algorithm to automate the calculation of landslide susceptibility. This machine learning method enables the analysis of complex relationships between various influencing factors, providing a robust and scalable solution for susceptibility assessment. Key input datasets for the model include high-resolution Digital Terrain Models (DTM), which capture topographic features; geological maps, to detail bedrock properties; and lithological data, describing soil and rock types. These datasets are processed to extract key attributes such as slope gradient, aspect, curvature, drainage density, and lithological composition. The Random Forest algorithm is trained on historical landslide data to classify terrain areas into susceptibility categories, offering predictions with high accuracy and reliability. 
To enhance the utility of the susceptibility model, we integrate it with Interferometric Synthetic Aperture Radar (InSAR) measurements. InSAR captures subtle surface motion, providing valuable information on the activity status of landslides. This integration bridges the gap between static susceptibility models and dynamic ground monitoring, creating a more comprehensive hazard mapping framework. The combined methodology presents a transformative approach to landslide hazard assessment, enabling proactive risk management. The resulting hazard maps delineate geotechnical domains into distinct zones with defined “attention levels”, representing the urgency and priority of monitoring or intervention in each area. The maps can guide land-use planning, emergency preparedness, and infrastructure development by highlighting critical zones requiring attention. Furthermore, the integration of AI and remote sensing enhances the efficiency and objectivity of the assessment process, ensuring scalability for large and diverse regions by reducing reliance on domain-specific human expertise and minimizing potential biases inherent in traditional methods. This work demonstrates the potential of machine learning and geospatial technologies in advancing the accuracy, automation, and applicability of landslide hazard analyses, supporting the development of safer, more resilient communities in vulnerable areas.
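The final mapping from model output and InSAR-derived activity to an attention level might be sketched as follows; the thresholds, level scale and function name are illustrative assumptions, not the authors' operational values:

```python
def attention_level(susceptibility, active):
    """Illustrative mapping from a susceptibility score (0-1, e.g. a
    Random Forest class probability) and an InSAR-derived activity
    flag to an integer 'attention level' (0 = lowest, 3 = highest)."""
    if susceptibility < 0.3:
        base = 0          # low susceptibility
    elif susceptibility < 0.7:
        base = 1          # moderate susceptibility
    else:
        base = 2          # high susceptibility
    # detected ground motion escalates the attention level by one step
    return min(base + (1 if active else 0), 3)
```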
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: An introduction to Sen2VM: an Open-Source tool for geocoding the Sentinel-2 Level-1B products

Authors: Antoine Burie, Jonathan Guinet, Marine Bouchet, Guylaine Prat, Sara Teraitetia, Emmanuel Hillairet, Silvia Enache, Rosalinda Morrone, Valentina Boccia
Affiliations: Cs Group, Starion for ESA ESRIN, European Space Agency, ESRIN
The Copernicus Sentinel-2 is a European mission that acquires wide-swath (290 km), high-resolution (up to 10 m), multi-spectral (13 bands) images of the Earth. The wide swath is ensured by 12 staggered detectors overlapping each other. Thanks to their worldwide regular acquisitions and accurate geolocation (<5 m circular error at 90% confidence), the Sentinel-2 satellites have for several years now offered a massive quantitative and qualitative resource for the Earth observation community. Each Sentinel-2 acquisition is generated at multiple levels, with higher levels indicating greater modification or enhancement through scientific algorithms (from Level-1B radiances in sensor geometry, to Level-1C orthorectified top-of-atmosphere reflectance, and then Level-2A surface reflectance corrected for atmospheric effects). Since the beginning of the mission, the only publicly available levels have been Level-1C and Level-2A. Level-1B products (in sensor geometry) require a high level of expertise, mainly to handle the georeferencing of the product. However, L1B products can be of great interest for users who want to: • manage their own orthorectification, using their own Digital Elevation Model (DEM), • reproject the data in their own projection, • have access to the whole overlapping area between detectors (the overlapped information is not reachable in L1C/L2A products because a choice is made and only the radiance of one detector is provided). This presentation will introduce “Sen2VM”, an open-source tool that integrates the Sentinel-2 viewing models and enables the geocoding of Level-1B products. The tool was designed to simplify and broaden the possible uses of Level-1B Sentinel-2 products. It is composed of several parts: • a stand-alone tool that generates geolocation grids to be included in the L1B product; its main purpose is to generate direct location grids over the whole Level-1B product, going from the product to the ground.
In addition, the tool will also allow generating inverse location grids, i.e. going from a ground area to the L1B product, • a SNAP plugin, calling the stand-alone tool directly and allowing the same grid generation, • a GDAL driver, i.e. a contribution to GDAL (https://gdal.org/en/stable/) that can handle L1B products with embedded direct-location geolocation grids, offering all of GDAL's reprojection capabilities for L1B Sentinel-2 data. The presentation will address the following topics: • overview of the Sen2VM tools and the required inputs, • example of application: presentation of the resampling possibilities and performance using the generated geolocation grids, • publication (GitHub, etc.): how users can access the tool.
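As an illustration of how a direct location grid is used, the sketch below bilinearly interpolates a sparse (row, col) → (lat, lon) grid to geolocate an arbitrary L1B pixel; the grid layout and step are hypothetical, and Sen2VM's actual grids and viewing-model refinements may differ:

```python
def direct_locate(grid, step, row, col):
    """Bilinearly interpolate a sparse direct-location grid to get
    the (lat, lon) of an L1B (row, col). `grid[i][j]` holds the
    (lat, lon) of image pixel (i*step, j*step)."""
    fi, fj = row / step, col / step
    i, j = int(fi), int(fj)
    di, dj = fi - i, fj - j
    def interp(k):  # k = 0 for lat, 1 for lon
        return ((1 - di) * (1 - dj) * grid[i][j][k]
                + (1 - di) * dj * grid[i][j + 1][k]
                + di * (1 - dj) * grid[i + 1][j][k]
                + di * dj * grid[i + 1][j + 1][k])
    return interp(0), interp(1)
```

An inverse location grid does the opposite lookup (ground to image), which is what a resampler needs to orthorectify the L1B radiances onto a user-chosen projection.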
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: TanDEM-X DEM 2020: Product release and quality assessments

Authors: Birgit Wessel, Carolin Keller, Larissa Gorzawski, Martin Huber, Thomas Busche, Marie Lachaise
Affiliations: German Aerospace Center (DLR), German Remote Sensing Data Center, Company for Remote Sensing and Environmental Research (SLU), German Aerospace Center (DLR), German Remote Sensing Data Center, German Aerospace Center (DLR), Remote Sensing Technology Institute
This contribution introduces the newly developed global TanDEM-X DEM 2020 dataset. The German TanDEM-X mission, consisting of two satellites operating in close formation since 2010, serves as the foundation for generating bistatic interferometric SAR data. Following the conclusion of data acquisition for the initial global TanDEM-X Digital Elevation Model (DEM) between 2010 and 2014 [1], the TanDEM-X mission systematically acquired data, mainly between September 2017 and mid-2021, to create a new global DEM, referred to as the “TanDEM-X DEM 2020”. The main focus of this presentation lies in the production process and preliminary results of global evaluation measures. The primary distinctions between the TanDEM-X DEM 2020 and its predecessor (2010-2014) are the new and independent time frame and an updated interferometric processing technique that minimizes phase unwrapping errors, allowing for a mainly single-coverage acquisition strategy except over challenging terrain. Each DEM scene is pre-calibrated against the global TanDEM-X DEM, optimizing the DEM calibration process while maintaining exceptional accuracy. Performance and accuracy are key features of digital elevation models; therefore, the TanDEM-X DEM 2020 is quantitatively assessed against reference data including ICESat, ICESat-2, GPS tracks and, importantly, the first global TanDEM-X DEM. Additionally, a qualitative evaluation is conducted on selected example sites, highlighting the advantages of interferometric DEMs: global coverage, high accuracy and homogeneity. In this contribution we present the production process, first quality assessments and the release of a new global digital elevation dataset: the TanDEM-X DEM 2020. This DEM offers an up-to-date topographic dataset and facilitates global-scale monitoring of topographic changes through comparisons with the earlier TanDEM-X DEM.
The TanDEM-X DEM 2020 product is expected to become publicly available to the scientific community by mid-2025 via DLR’s EOWEB portal. References: [1] Rizzoli, P., Martone, M., Gonzalez, C., Wecklich, C., Borla Tridon, D., Bräutigam, B., Bachmann, M., Schulze, D., Fritz, T., Huber, M., Wessel, B., Krieger, G., Zink, M., and Moreira, A. (2017): Generation and performance assessment of the global TanDEM-X digital elevation model, ISPRS J. Photogram. Remote Sens., 132, 119–139.
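Monitoring topographic change by comparing the two DEM epochs amounts, at its simplest, to per-pixel differencing with nodata propagation; the sketch below assumes a hypothetical nodata value and plain nested lists rather than the actual raster format:

```python
NODATA = -32767.0  # assumed nodata convention, for illustration only

def dem_change(dem_2020, dem_2014):
    """Per-pixel elevation change between two DEM epochs (m);
    nodata in either epoch propagates to the result."""
    return [[NODATA if a == NODATA or b == NODATA else a - b
             for a, b in zip(r20, r14)]
            for r20, r14 in zip(dem_2020, dem_2014)]
```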
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: WorldDEM Neo - The new reference in global elevation

Authors: Ernest Fahrland (3D Data Development Manager), Henning Schrader (Product Management Elevation), Ciro (Strategic Future Programs)
Affiliations: Airbus
Digital Elevation Models (DEMs) represent a core dataset within the geospatial domain, and the ongoing German TanDEM-X mission provides a solid basis for truly global and consistent DEM coverage. The WorldDEM and its derivative, the Copernicus DEM, successfully replaced SRTM as the global elevation reference model. Another 10 years of TanDEM-X operations laid the foundation for the next generation of WorldDEM: the WorldDEM Neo, providing four times better resolution and improved height accuracy. WorldDEM Neo will be the only global, consistent and analysis-ready DEM reference of European origin until the mid-2030s. Accelerating challenges such as global warming, urbanization and deforestation call for improved core datasets to model and mitigate climate change impacts at both global and local levels. WorldDEM Neo has the potential to provide a significant evolution of the current Copernicus DEM, with benefits for the Copernicus services in terms of modelling and orthorectification of future satellite acquisitions with ever-increasing ground resolution and a strong need for precise geolocation, enabling time-series / data cube analysis. The new WorldDEM Neo has been produced from continued, systematic TanDEM-X mission acquisitions on a global scale (2018-2020). The temporal footprint is 4-5 years more up to date than the previous global standard WorldDEM (acquired 12/2010 to 01/2015) and the resolution is four times better (now 5 x 5 m² instead of 10 x 10 m², as used for the current most detailed version of the Copernicus DEM). The absolute vertical accuracy of the WorldDEM Neo DSM has been assessed on a global scale, with a linear error better than 1.5 meters (90% confidence level, against ICESat-2 ATL08 terrain reference points). A validation of the corresponding DTM layer, using globally distributed lidar references, is an ongoing effort.
The continuing TanDEM-X mission changed focus in 2020 from the global to the regional/continental level, targeting scientifically relevant ecosystems. The new acquisitions after 2020, e.g. of the tropical regions (biosphere), glacier & permafrost regions (cryosphere) and urban areas (anthroposphere), support the various applications of DEM data such as environmental, deforestation & urban growth monitoring, hydrological modelling, disaster management, infrastructure planning, natural resource management and the orthorectification of satellite imagery. The unique data quality and worldwide availability make WorldDEM Neo the most robust layer model for risk assessment & management and for investigating global phenomena. Answering two main user needs, the WorldDEM Neo Digital Surface Model (DSM; including objects on the ground such as buildings & vegetation) is now accompanied by a Digital Terrain Model (DTM; with objects on the ground removed), and both datasets are based on fully automated, parameterizable and scalable production processes. This allows for better modelling of geo-biophysical parameters shortly after raw data acquisition, alongside other space-based information such as Sentinel imagery and Copernicus Contributing Mission data. Spatio-temporal analysis with a consistent sensor source on a global scale is now possible by combining WorldDEM Neo with the previous WorldDEM. Previous radar-based DEMs such as the Shuttle Radar Topography Mission global layer (acquired 02/2000), but also the WorldDEM (the primary input for the Copernicus DEM), required huge manual efforts to achieve an error-free status: the editing of both datasets consisted of semi-automated and manual working steps and lasted several years, i.e. direct use of the fresh acquisition data was delayed accordingly. The presentation will provide an insight into the fully automated production process of the global, up-to-date, consistent, radar-based DSM & DTM datasets called WorldDEM Neo.
Different use cases will accompany the presentation.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Improving global DEMs from interferometry with smart DEM data fusion: a case study in urban landscapes

Authors: Ernest Fahrland, Hanne Liebchen, Jachin Jonathan van Ek, Henning Schrader
Affiliations: Airbus Defence and Space, Airbus Defence and Space
By 2030, 60% of the global population will live in urban areas, with on-going urbanization and growth of urban areas. Notably, the urban extent is less than 2% of the global landmass, underlining the importance of providing spatial data (2D/3D) of sufficient quality for this small urban footprint on Earth’s surface. Digital Elevation Models (DEMs) have long represented a core dataset within the geospatial domain and support government needs for urban planning and mapping through modelling and simulation of to-be cities and their infrastructures. DEM data also set the foundation for Digital Twins of cities. Common global DEM datasets such as the Shuttle Radar Topography Mission dataset (SRTM, acquired in 02/2000), but also the newer WorldDEM and its derivative Copernicus DEM (acquired from 12/2010 to 01/2015), contain height information from a fixed, well-defined acquisition timeframe. Interferometric techniques represent the best methodology to create digital elevation information on a global scale. Unfortunately, dense urban environments are represented with insufficient vertical accuracy due to double-bounce effects in the radar measurement. In addition, on-going topographic changes occurring after data acquisition are not captured and negatively affect any analysis requiring up-to-date height information. The German TanDEM-X mission ended its data acquisition for the current WorldDEM / Copernicus DEM in January 2015 but continues to operate with two X-band SAR satellites flying in a close, bistatic formation. Since then, the mission has produced more up-to-date height information of global extent, which led to the new WorldDEM Neo dataset (acquired until 2020). Following this global update, continental to regional acquisitions since 2020 (until today) provide a continuous flow of bistatic data which allows for 4D change analysis based on interferometric processing algorithms with persistent sensor technology.
To compensate for local deficiencies in dense urban landscapes, different data acquisition methods, such as stereo analysis of satellite data from optical sensors (triangulation of e.g. Pléiades or Pléiades Neo imagery) or newly evolving techniques such as SAR image height reconstruction based on machine learning, are required to locally improve the global layers where they show their greatest weaknesses. The smart integration of this more accurate and more detailed height information into a global elevation database such as WorldDEM Neo will create a truly global, consistent, accurate and homogeneous elevation database. The integration process is script-based, performant and parametrizable and represents a first step towards a database of “living topography”, with the strict rule of only ever raising local data accuracy above the WorldDEM Neo DSM values. The presentation will provide an insight into the local/regional integration of high-resolution DEM data from SAR image height reconstruction and from optical sensors into the global WorldDEM Neo database.
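The rule of only ever raising local accuracy can be sketched as follows (a minimal illustration; the per-cell error layers, function name and grid values are hypothetical, not the operational integration process):

```python
import numpy as np

def fuse_patch(global_h, global_err, patch_h, patch_err, row, col):
    """Insert a local DEM patch into a global grid, but only overwrite
    cells where the patch's vertical error estimate beats the global one."""
    r, c = patch_h.shape
    g_h = global_h[row:row + r, col:col + c]   # views: edits hit the global grid
    g_e = global_err[row:row + r, col:col + c]
    better = patch_err < g_e                   # cells where the patch is more accurate
    g_h[better] = patch_h[better]
    g_e[better] = patch_err[better]
    return global_h, global_err

# Global grid with 4 m error everywhere; local stereo patch with 1 m error.
gh = np.zeros((6, 6)); ge = np.full((6, 6), 4.0)
ph = np.full((2, 2), 12.0); pe = np.full((2, 2), 1.0)
fuse_patch(gh, ge, ph, pe, 2, 2)
print(gh[2, 2], ge[2, 2])  # 12.0 1.0 -- the more accurate patch wins
```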

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Improving ECOSTRESS’ absolute and relative georeferencing for optimisation of crop and irrigation products

Authors: Agnieszka Soszynska, Jan Hieronymus, Darren Ghent
Affiliations: University of Leicester, National Centre for Earth Observation, constellr GmbH
ECOSTRESS is currently the only source of high-resolution thermal imagery apart from ASTER (which is at its end-of-life). ECOSTRESS will continue to play a crucial role in the time before three major thermal missions appear (LSTM, TRISHNA, and SBG), delivering Land Surface Temperature (LST) operationally, along with all the high-level derived products, such as evapotranspiration. Therefore, the scientific community as well as the emerging thermal remote sensing service industry rely heavily on ECOSTRESS imagery. However, ECOSTRESS imagery is affected by image-quality issues, one of the most crucial being inaccuracy in the absolute georeferencing of the images. The standard georeferencing of ECOSTRESS images is based on matching to a static reference database and successfully processes approximately 38% of all scenes (including cloudy scenes). In these cases, a small average error of 48 metres (< 1 pixel) is observed, with a spread of 20 to 100 metres remaining in the majority of the analysed scenes, as reported by NASA-JPL. If the matching fails (62% of the scenes), large errors are observed; previous studies reported errors of 14 pixels (980 m) on average. In standard processing of thermal imagery, matching procedures are typically conducted using static basemaps created from VIS-NIR imagery. However, georeferencing of thermal imagery is challenging due to rapid changes of the heat distribution across the terrain throughout the day, and the fact that the land cover of a static basemap quickly becomes outdated, causing the matching to fail more often. We propose a solution: creating an up-to-date basemap for matching, consisting of a mosaic of Sentinel-2 imagery acquired temporally close to the ECOSTRESS image. Thus, an up-to-date reference is created separately for each to-be-processed ECOSTRESS scene.
Such an approach avoids archiving a global reference dataset (such as the Landsat Orthobase that NASA-JPL uses for their standard processing of ECOSTRESS imagery) and accounts for land cover changes in the imaged area (which a static reference database cannot do). The created mosaics are used to find the optimal matching products. We compare the most suitable candidates, e.g. high-pass filtered Sentinel-2 imagery, which allows detecting structural/geometric features and object borders that should be equally visible in the VIS-NIR ranges as in the TIR ranges imaged by ECOSTRESS. Another issue of ECOSTRESS’ geometry comes from the non-parallelism of the rotating mirror, which results in a slight offset between adjacent scans in each image. We statistically describe this offset in order to derive a set of parameters that removes it and makes the relative georeferencing homogeneous. The conducted work allows improving ECOSTRESS image products, which bridge the gap until the high-resolution future thermal missions become operational.
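Basemap matching of the kind described above is commonly done with FFT phase correlation between the reference mosaic and the scene to be georeferenced; the following is a generic sketch of that technique, not the authors' processor:

```python
import numpy as np

def estimate_shift(ref, img):
    """Integer (row, col) shift d such that np.roll(img, d, axis=(0, 1))
    aligns img with ref, estimated via FFT phase correlation."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    cross /= np.abs(cross) + 1e-12             # normalize: keep phase only
    corr = np.abs(np.fft.ifft2(cross))
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    shape = np.array(corr.shape)
    peak[peak > shape // 2] -= shape[peak > shape // 2]  # wrap to signed shifts
    return tuple(int(p) for p in peak)

rng = np.random.default_rng(0)
base = rng.normal(size=(64, 64))               # stand-in for a basemap chip
shifted = np.roll(base, (3, -5), axis=(0, 1))  # simulate a geolocation offset
print(estimate_shift(shifted, base))           # (3, -5)
```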

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: AI driven detection of local errors and local 3D features in global DEMs

Authors: Oliver Lang, Pawel Kluter, Michael Schulze
Affiliations: Airbus Defence and Space
Digital Elevation Models (DEM) represent a core dataset within the geospatial domain. Global datasets, like the Copernicus DEM or the more recent WorldDEM Neo, provide seamless 3D information all over the globe. These global products are created from high-resolution SAR Earth observation data of the TanDEM-X mission, leveraging highly automated and scalable processing. While the absolute and relative accuracy of these models is high and well documented (see for example [1]), specific local artifacts are still present in the datasets. In particular, the applied automated correction processes lack the detection and editing of local errors related to vertical features, like power pylons. As a consequence, the impact of these objects on the automated interferometric DEM generation process may remain as artifacts in the final product, resulting in a spatial sequence of local depressions. The presented approach is based on a mature deep learning technology allowing for automatic detection of those artifacts in high-resolution DEMs and for subsequent correction. As the detector, we applied a proprietary deep learning network architecture optimized for complex scenarios with multiple classes at multiple scales. The model training was based on an extensive set of representative labels taken from the global WorldDEM Neo elevation model. It is shown that this approach provides an effective way for automatic quality assurance, detection and elimination of point artifacts in Digital Elevation Models. A secondary goal of the AI-driven detection approach is the automatic detection and classification of prominent features of interest in a global DEM. This comprises geological features like craters, domes and depressions, and man-made objects like dams and mining areas. It is shown that the methodology provides the potential to generate additional insights about the location and type of 3D features and consequently adds value to large-scale DEMs.
Reference: [1] Copernicus Digital Elevation Model Validation Report, Tech. rep., AIRBUS Defence and Space GmbH, https://spacedata.copernicus.eu/documents/20123/121239/GEO1988-CopernicusDEM-RP-001_ValidationReport_I3.0.pdf (last access: 02 December 2024), 2020
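As a point of comparison for the deep learning detector described above, a crude non-AI baseline could flag isolated cells lying well below their neighbourhood mean (illustrative only; the 8-neighbour window and the 2 m depth threshold are assumptions):

```python
import numpy as np

def local_depressions(dem, depth=2.0):
    """Flag cells sitting more than `depth` metres below the mean of their
    8 neighbours -- a crude stand-in for detecting isolated pit artifacts."""
    p = np.pad(dem, 1, mode="edge")
    neigh = np.zeros_like(dem, dtype=float)
    rows, cols = dem.shape
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr or dc:
                neigh += p[1 + dr:1 + dr + rows, 1 + dc:1 + dc + cols]
    neigh /= 8.0
    return (neigh - dem) > depth

dem = np.full((5, 5), 100.0)
dem[2, 2] = 95.0            # an isolated pit, as left by e.g. a power pylon
mask = local_depressions(dem)
print(np.argwhere(mask))    # [[2 2]]
```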

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: C.06.01 - POSTER - Sentinel-1 mission performance and product evolution

The Sentinel-1 mission, a joint initiative of the European Commission (EC) and the European Space Agency (ESA), comprises a constellation of two polar-orbiting satellites operating day and night, performing C-band synthetic aperture radar imaging that enables them to acquire imagery regardless of the weather. The C-band SAR instrument can operate in four exclusive imaging modes with different resolution (down to 5 m) and coverage (up to 400 km). It provides dual-polarization capability, short revisit times and rapid product delivery. Since the launches of Sentinel-1A and Sentinel-1B, in 2014 and 2016 respectively, many improvements were brought to the mission performance and the products evolved in many respects. Sentinel-1B experienced an anomaly in December 2021 which rendered it unable to deliver radar data; Sentinel-1C was launched in December 2024. This session will present the recent improvements related to a) the upgrade of the product characteristics, performance and accuracy, b) the better characterization of the instrument with the aim to detect anomalies or degradation that may impact the data performance, c) the anticipation of performance degradation by developing and implementing mitigation actions, and d) explorative activities aiming at improving the product characteristics or expanding the product family to stay on top of the evolving expectations of the Copernicus Services.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Integrating Remote Sensing and Geospatial Analysis to Assess Environmental and Climatic Vulnerability in Urban Mediterranean Contexts: A Case Study of Valencia

Authors: Carlos Rivero Moro, Mª Amparo Gilabert Navarro, Ana Pérez-Hoyos, Ernesto López Baeza
Affiliations: Dept of Environmental Remote Sensing, Faculty of Physics, University of Valencia, Burjassot 46100, Spain, Dept of Environmental Remote Sensing, Faculty of Physics, University of Valencia, Burjassot 46100, Spain Albavalor, Science Park University of Valencia, Paterna 46980, Spain, Albavalor, Science Park University of Valencia, Paterna 46980, Spain
Urban areas are increasingly vulnerable to the impacts of climate change, particularly in Mediterranean cities like Valencia, which face compounded pressures from overbuilding, traffic, air pollution, and intensifying heat waves and extreme precipitation events. This study introduces a comprehensive framework to assess Valencia's environmental and climatic vulnerability, integrating advanced remote sensing tools, geospatial analysis, and air pollution data to guide targeted environmental and health policies. Utilizing high-resolution Sentinel-2 and Landsat data, three key environmental dimensions were analyzed: vegetation coverage, carbon sequestration potential, and urban heat. The Leaf Area Index (LAI) was derived to quantify vegetation density and its regulatory role in microclimatic stability. The fraction of absorbed photosynthetically active radiation (fAPAR) was used to estimate the carbon sequestration potential of green areas, while urban heat island (UHI) intensity was mapped using Landsat thermal data. Additionally, spatial patterns of air pollution were assessed using concentrations of PM2.5, PM10, and NO2 as key indicators of traffic-related and industrial emissions. A composite spatial indicator of vulnerability was developed by integrating these variables through Geographically Weighted Principal Component Analysis (GWPCA). The natural breaks method was applied to define risk classes, enabling the identification of vulnerability hotspots. The analysis revealed that a substantial proportion of Valencia's population resides in areas with high or very high vulnerability, emphasizing disparities between the urban core and peri-urban areas that make the city more vulnerable to consequences of climate change such as extreme precipitation or temperature events. The study provides a novel and replicable approach to mapping climatic vulnerability at a city-wide scale by integrating biophysical, thermal, and air pollution data.
This framework identifies high-risk areas and populations, highlighting the interplay between vegetation, thermal stress, air pollution, and carbon sequestration. These insights offer a robust foundation for designing policies that address environmental and climatic inequalities, reduce exposure to air pollution, and enhance urban resilience in Valencia.
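A plain (non-geographically-weighted) analogue of the composite indicator above can be sketched with ordinary PCA on standardized indicators, using quantile classes as a stand-in for natural breaks (both are simplifications of the study's GWPCA / Jenks approach; the data below are synthetic):

```python
import numpy as np

def composite_index(X):
    """Score each spatial unit by the leading principal component of its
    standardized indicators (a non-weighted simplification of GWPCA)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize each indicator
    vals, vecs = np.linalg.eigh(np.cov(Z, rowvar=False))
    score = Z @ vecs[:, -1]                        # project onto PC1
    # orient the axis so higher indicator values mean higher vulnerability
    if np.corrcoef(score, Z.sum(axis=1))[0, 1] < 0:
        score = -score
    return score

rng = np.random.default_rng(1)
# synthetic columns: UHI intensity, PM2.5, NO2, inverse vegetation cover
X = rng.normal(size=(200, 4)) + np.linspace(0, 3, 200)[:, None]
score = composite_index(X)
# four risk classes from the quartiles of the composite score
classes = np.digitize(score, np.quantile(score, [0.25, 0.5, 0.75]))
print(classes.min(), classes.max())  # 0 3
```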

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Sentinel-1C and Sentinel-2C Precise Orbit Determination Commissioning Results

Authors: Carlos Fernandez Martin, Sonia Lara Espinosa, Oleksandr Ivanchuk, Jaime Fernandez Sanchez, Heike Peter, Muriel Pinheiro
Affiliations: GMV Aerospace & Defence, PosiTim UG, ESA/ESRIN
The Copernicus Precise Orbit Determination (CPOD) Service delivers, as part of the Ground Segment of the Copernicus Sentinel-1, -2, -3, and -6 missions, orbit products and auxiliary data files for the operational generation of the science core products in the corresponding Production Services (PS) at ESA and EUMETSAT, and to external users through the newly available Copernicus Data Space Ecosystem (https://dataspace.copernicus.eu/). The recent launches of Sentinel-1C and Sentinel-2C at the end of 2024 mark significant milestones in the Copernicus program, necessitating rigorous calibration and validation (CalVal) activities to ensure the precision and reliability of their Precise Orbit Determination (POD). This contribution presents the commissioning results for these satellites, focusing on the comprehensive activities undertaken. At the time of writing this abstract, Sentinel-1C has not been launched yet, so the corresponding results will be preliminary, pending a successful commissioning. As part of the POD commissioning, initial orbit determination solutions are generated using the same configuration as for their Sentinel-1 and -2 predecessors, providing a baseline for further calibration and validation. The Level-0 decoding capabilities of the satellite signals are then thoroughly verified to ensure data integrity and accuracy. The launch of these satellites marks a key milestone, augmenting the number of PODRIX receivers in orbit that routinely track both GPS and Galileo, together with Sentinel-6A. Moreover, GNSS antenna calibration is conducted to mitigate multipath effects by generating a preliminary Phase Center Variation map in the usual ANTEX file. To validate the accuracy of the preliminary orbit solutions, cross-comparisons are conducted with solutions provided by the CPOD Quality Working Group (QWG), which includes esteemed institutions such as AIUB, DLR, TU Delft, TU Munich, and GFZ, among others.
These comparisons ensure consistency and reliability across different processing centers. The commissioning results demonstrated that the preliminary orbit determination solutions for Sentinel-2C met the stringent accuracy requirements of the Copernicus program, and the same conclusion is expected to be reached for Sentinel-1C. The calibration of the antennas and the verification of Level-0 decoding capabilities further enhanced the reliability of the POD products. Cross-comparisons with CPOD QWG solutions confirmed the robustness and precision of the orbit determination process. Building on the commissioning results, future work will focus on continuous monitoring and refinement of the POD solutions for Sentinel-1C and Sentinel-2C. Additionally, lessons learned from these activities will inform the commissioning of future Sentinel satellites, ensuring ongoing improvements in the precision and efficiency of the CPOD Service. This presentation will provide a detailed overview of the CalVal activities, highlight key findings from the commissioning results, and discuss the implications for future Copernicus missions.
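Orbit cross-comparison of the kind described above ultimately reduces to difference statistics between two solutions at common epochs; a minimal sketch (function name and offset values are hypothetical, and real comparisons typically resolve the differences into radial/along-track/cross-track components):

```python
import numpy as np

def orbit_rms_3d(pos_a, pos_b):
    """RMS of the 3-D position differences between two orbit solutions
    sampled at common epochs (positions in metres, shape (n, 3))."""
    d = np.linalg.norm(np.asarray(pos_a) - np.asarray(pos_b), axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

# Two hypothetical solutions differing by a constant 2 cm offset
a = np.array([[7_000_000.0, 0.0, 0.0], [0.0, 7_000_000.0, 0.0]])
b = a + np.array([0.02, 0.0, 0.0])
print(orbit_rms_3d(a, b))  # approximately 0.02
```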

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: New Product Evolution Of ESA’s Extended Timing Annotation Dataset (ETAD) For Sentinel-1 Mission

Authors: Victor Navarro Sanchez, Christoph Gisinger, Helko Breit, Ulrich Balss, Steffen Suchandt, Lukas Krieger, Thomas Fritz, Antonio Valentino, Muriel Pinheiro, Guillaume Hajduch
Affiliations: German Aerospace Center (DLR), Rhea for ESA, European Space Agency (ESA), ESRIN, Collecte Localisation Satellites (CLS)
SAR remote sensing is a powerful tool for Earth observation, supporting a wide range of applications thanks to its day-and-night observation capability and its excellent geometric accuracy. These include interferometric applications (InSAR), where the differential phase obtained from images of the same area, acquired with a different geometry and/or at a different time instant, is exploited to reconstruct, for instance, the scene topography or deformation over time. SAR measurements are, however, affected by the spatial and temporal variability of atmospheric conditions, solid Earth dynamic effects, and approximations during image processing. If not corrected, these effects can produce geometric shifts of up to several metres. In order to facilitate Sentinel-1 (S-1) SAR data corrections, bringing their geometric accuracy from metres down to centimetres, the Extended Timing Annotation Dataset (ETAD) was developed in a joint effort by ESA and DLR [1][2]. The ETAD product provides easy-to-use gridded timing corrections for S-1 Level-1 single-look complex (SLC) data, following the radar geometry of the associated SLC product (range time, azimuth time). At the time of writing, ETAD products from July 21st, 2023 onwards can be retrieved via the Copernicus Data Space Ecosystem. Following positive feedback from the expert users who participated in the S-1 ETAD pilot study activities, acknowledged below, an extension of the S1-ETAD baseline product to cover a wider range of applications has been investigated in the context of the ESA-funded activity ”Scientific Evolution of the S1-ETAD product” (ETAD-SE).
Of the experimental features prototyped and evaluated in the ETAD-SE activity, the following have been selected for inclusion in the next major release (3.0) of the operational ETAD processor:
• New correction layer: ocean tidal loading (OTL) corrections in range and azimuth
• New supportive layer: tropospheric delay gradient with respect to height changes
• Bit quantization of the correction layers in the ETAD NetCDF to reduce product file sizes
Ocean tidal loading is a wide-area deformation effect caused by the tidal redistribution of ocean water mass, which loads and deforms the solid Earth in coastal regions by up to 10 cm. OTL corrections are expected to improve geometric accuracy in affected coastal regions, also reducing the stochastic error in time-series analysis [3]. The tropospheric delay derivative with respect to height is an auxiliary layer to support interpolation of the tropospheric delay corrections, which are highly dependent on surface height, to a new grid with a different sampling of the underlying topography. This is useful, for instance, for InSAR applications where secondary products must be aligned (coregistered) to the primary product and, consequently, corrections must be re-evaluated for the common InSAR grid height values [4]. Finally, the bit quantization feature removes non-significant digits from selected layers while ensuring that the relevant information is kept, which in combination with data compression algorithms will reduce the data size, thus compensating for the additional layers in the product; the current product size is on the order of 100 MB. The implementation and qualification of these new features is foreseen within Q1/2025, in the context of Mission Performance Cluster (MPC) service activities. The new version of the ETAD processor (3.0) is planned to become operational in the S-1 ground segment before May 2025, along with the introduction of the Sentinel-1C unit.
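The bit-quantization idea can be sketched as mantissa rounding of float32 layers, which creates long runs of zero bits that generic compressors exploit (a generic technique sketch under stated assumptions, not the ETAD processor's actual implementation):

```python
import numpy as np

def round_mantissa(a, keep_bits):
    """Round a float32 array to `keep_bits` mantissa bits (round-to-nearest).
    The zeroed trailing bits make the stored layer highly compressible."""
    a = np.asarray(a, dtype=np.float32)
    bits = a.view(np.uint32)
    drop = 23 - keep_bits                          # float32 has a 23-bit mantissa
    half = np.uint32(1 << (drop - 1))              # offset for round-to-nearest
    mask = np.uint32(0xFFFFFFFF) << np.uint32(drop)
    return ((bits + half) & mask).view(np.float32)

x = np.float32(0.123456789)
y = round_mantissa(x, 10)                          # keep 10 of 23 mantissa bits
print(abs(float(x) - float(y)) < 1e-4)             # True: error is ~half an ulp
```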
Our contribution at the LPS’25 conference will present the extended ETAD product, together with use-case scenarios and the status of operational production.
Acknowledgements: The authors thank all the research groups that participated in the ETAD pilot study in 2022 for their valuable feedback on the product when applying it in SAR applications such as offset tracking, InSAR processing, data geolocation and geocoding, and stack co-registration. The ETAD processor was hosted in the Geohazard Exploitation Platform to allow for processing by the pilot participants, and the hosting was supported by the ESA Network of Resources Initiative. List of participating institutions in alphabetical order: Caltech, DIAN srl, DLR, ENVEO, IREA-CNR, JPL, Joanneum Research, NORCE, PPO.labs, TRE ALTAMIRA, University of Jena, University of Leeds, University of Strasbourg. The S1-ETAD scientific evolution study, contract No. 4000126567/19/I-BG, was financed by the Copernicus Programme of the European Union implemented by ESA. The results presented here are an outcome of the ESA contract Sentinel-1 / SAR Mission Performance Cluster Service 4000135998/21/I-BG. The Copernicus Sentinel-1 mission is funded by the EU and ESA. Views and opinions expressed are however those of the author(s) only, and the European Commission and/or ESA cannot be held responsible for any use which may be made of the information contained therein.
[1] Gisinger, C., Libert, L., Marinkovic, P., Krieger, L., Larsen, Y., Valentino, A., Breit, H., Balss, U., Suchandt, S., Nagler, T., Eineder, M., Miranda, N.: The Extended Timing Annotation Dataset for Sentinel-1 - Product Description and First Evaluation Results. IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1-22, 2022. doi: 10.1109/TGRS.2022.3194216
[2] ESA: Sentinel-1 Extended Timing Annotation Dataset (ETAD). Data product website on the Sentinel-1 webpage: https://sentiwiki.copernicus.eu/web/s1-products
[3] Yu, C., Penna, N. T., Li, Z.: Ocean tide loading effects on InSAR observations over wide regions. Geophysical Research Letters, 47, 2020. doi: 10.1029/2020GL088184
[4] Navarro, V., Gisinger, C., Brcic, R., Suchandt, S., Krieger, L., Fritz, T., Valentino, A., Pinheiro, M.: Advancing Sentinel-1 InSAR Applications Using ESA’s Extended Timing Annotation Dataset Product. 2023 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Pasadena, CA, USA, 2023, pp. 7878-7881. doi: 10.1109/IGARSS52108.2023.10282172

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: SAME-AT - SAR meets Atmosphere: An Austrian Initiative in coupling INSAR information and numerical weather models

Authors: Michael Avian, Karlheinz Gutjahr, Florian Meier, Clemens Wastl, Stefan Schlaffer, Matthias Schlögl, Christoph Wittmann
Affiliations: Geosphere Austria, Joanneum Research
Satellite-based radar systems (Synthetic Aperture Radar, SAR) are well known for their all-day and all-weather capabilities. However, as the signals have to travel through the atmosphere twice, multiple effects occur, such as range delays and interferometric phase delays. These effects have to be considered when interpreting results based on radar data. The project SAME-AT was designed to contribute to a better understanding of the interaction between radar signals and the atmosphere. A major goal of this Austrian initiative is improved modelling of the atmospheric correction as well as the use of error budgets for the atmospheric input parameters. This particular information is derived from the forecast uncertainties of a convection-permitting numerical weather prediction (NWP) ensemble system. Numerical weather models provide valuable information for SAR/InSAR correction approaches. Vice versa, observed SAR/InSAR delays and their error statistics can serve as data sources for the determination of the initial state (data assimilation) of NWP ensemble systems. SAR/InSAR delays allow conclusions to be drawn about the tropospheric moisture content, which is extremely valuable information for weather models. An important part of SAME-AT is therefore the investigation of the possible benefit of SAR/InSAR delays for the quality of NWP systems. Six months of bias-corrected Sentinel-1 SAR/InSAR delays of track ASC 146 were assimilated into the convection-permitting NWP model AROME over Austria, using a slant-delay GNSS operator and a temporal coherence threshold of 0.5 as quality check. The bias showed very high temporal fluctuations, therefore a spatially averaged bias correction was applied. The observation error was set to 1.4 cm, slightly inflated compared to typical GNSS delays, in order to take observation error correlation into account. Furthermore, a thinning to every 12th observation point was applied.
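The screening steps just described (coherence threshold, spatially averaged bias removal, observation thinning) can be sketched as follows (synthetic numbers and a hypothetical function name, not the operational AROME assimilation code):

```python
import numpy as np

def screen_delays(delay_cm, coherence, step=12, coh_min=0.5):
    """Keep only delays with temporal coherence >= coh_min, remove a
    spatially averaged bias, and thin to every `step`-th observation."""
    kept = delay_cm[coherence >= coh_min]
    kept = kept - kept.mean()        # spatially averaged bias correction
    return kept[::step]              # observation thinning

rng = np.random.default_rng(7)
delays = rng.normal(3.0, 1.4, 1000)  # slant delays (cm) with a 3 cm bias
coh = rng.uniform(0.0, 1.0, 1000)    # temporal coherence per point
obs = screen_delays(delays, coh)
print(obs.size)                      # roughly 1000 * 0.5 / 12 points survive
```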
Results show that small-scale increments could be ingested. Mostly neutral forecast RMSE scores for 2 m temperature and humidity, mean sea level pressure, 10 m wind, global radiation and precipitation were detected. The 2 m relative humidity bias improved slightly in the first nine forecast hours, while other biases remained mostly unchanged. However, in single convective cases, such as 30 June 2023, an improvement of the precipitation patterns compared to the reference assimilation could be shown through evaluation with the fraction skill score. Especially for higher precipitation thresholds, the InSAR assimilation performed better than the reference. The Austrian NWP models AROME and C-LAEF are a big step forward in terms of spatial resolution compared to ECMWF models like ERA-5: the effective spatial resolution (i.e. two times the grid point distance) of AROME/C-LAEF is ~5 km, whereas the ERA-5 resolution is about 62 km. Thus, the spatial resolution of AROME/C-LAEF NWPs is in the range of the spatial filter of classical atmospheric phase screen (APS) removal. However, this resolution is still a few orders of magnitude larger than the Sentinel-1 resolution, and small features in Sentinel-1 interferograms are not modelled by AROME or any member of C-LAEF. To highlight this fact, the delay corrections were calculated ignoring the actual topography: although the AROME corrections contained much more detail than the ERA-5 corrections, small variations in the interferometric delay were not included in this correction. The meteorological parameters (e.g. temperature, pressure and humidity) are regularly provided on an hourly basis. However, Sentinel-1 acquisitions over Austria take place approximately 10 min before or after the full hour, and the meteorological parameters have to be interpolated in time. To investigate the effect of different temporal interpolation methods, for the date 2023-02-18 we produced a dataset with regular model runs at 16:00 and 17:00 and intermediate model runs at 16:15, 16:30 and 16:45.
Subsequently, we compared three interpolation methods (nearest neighbour, linear, and based on wind components) for the 16:15 run. In our test, temporal linear interpolation outperformed the other methods and ensured an interpolation error below +/- 5 mm in most areas. Of course, the absolute deviation from the true values strongly depends on the actual weather conditions and may deviate from this example; however, linear interpolation will still yield the best results with respect to the other interpolation methods. The main challenge with interferometric SAR analysis remains coherence loss, mainly due to temporal decorrelation. All recently published work dealing with atmospheric SAR range/phase delay corrections requires a more or less fully coherent signal to perform best. For our Austrian test site and the investigated seasons, however, we found that this requirement is often not fulfilled. The new, highly agile SAR sensor systems such as ICEYE, or the upcoming Earth Explorer 10 mission Harmony, allow short or even zero temporal baselines, which strongly mitigates these coherence problems and will consequently yield better estimations of atmospheric effects.
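The winning temporal linear interpolation reduces to a weighted blend of the bracketing model fields; a minimal sketch with hypothetical zenith-wet-delay values at the 16:15 acquisition time:

```python
import numpy as np

def interp_field(field_t0, field_t1, t0_min, t1_min, t_min):
    """Linearly interpolate two gridded model fields to time t_min
    (all times in minutes relative to a common origin)."""
    w = (t_min - t0_min) / (t1_min - t0_min)
    return (1.0 - w) * field_t0 + w * field_t1

zwd_1600 = np.array([[2.30, 2.31], [2.28, 2.32]])  # zenith wet delay (m), 16:00 run
zwd_1700 = np.array([[2.34, 2.35], [2.36, 2.30]])  # 17:00 run
zwd_1615 = interp_field(zwd_1600, zwd_1700, 0, 60, 15)
print(round(zwd_1615[0, 0], 4))  # 2.31 = 2.30 + 0.25 * (2.34 - 2.30)
```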

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Impact Of 25-th Solar Cycle Ionospheric Activity On Sentinel-1 SAR Data – A Status Report By SAR-MPC

Authors: Christoph Gisinger, Giorgio Gomba, Mainul Hoque, Victor Navarro Sanchez, Antonio Valentino, Muriel Pinheiro, Guillaume Hajduch, Ruben Bleriot
Affiliations: German Aerospace Center (DLR), Rhea for ESA, European Space Agency (ESA), ESRIN, Collecte Localisation Satellites (CLS), Apside
The ionization of Earth’s upper atmosphere by solar radiation and particles of the solar wind is a known major source of data disturbance for Synthetic Aperture Radar (SAR) satellites, typically operating in the microwave regime between 1.2 GHz (L-band) and 9.5 GHz (X-band). Primarily driven by the approximately 11-year solar cycle, the impact of ionosphere dynamics on SAR satellites spans from minor degradation of precise orbit solutions to frequency-dependent path delays in the radar measurements, causing significant errors when geolocating SAR image data and in interferometric SAR processing [1][2]. The European Copernicus mission Sentinel-1 (S-1) operates a C-band SAR payload (5.4 GHz) that provides continuous mapping of Earth’s surface at a global scale with a free and open data policy. At the start of S-1 public data dissemination in late 2014, solar activity was already rapidly declining, and after achieving full data capacity with the addition of the second satellite S-1B in mid-2016, the mission has operated mostly in low to moderate ionospheric conditions. However, with the onset of the latest solar cycle 25 in 2022, the situation started to change again. This was also registered at the S-1 SAR Mission Performance Cluster (SAR-MPC), an international consortium of experts that performs continuous monitoring of the mission’s instrument performance and SAR data quality. Specifically, our assessment of S-1 data geolocation with globally distributed test sites began to show low cm-level systematic effects, which were attributed to limitations in the presently applied methods that use Total Electron Content (TEC) maps from GNSS data to correct for the ionospheric path delays [3]. Moreover, S-1 data usage is supported by the Extended Timing Annotation Dataset (ETAD), which provides several layers for geometric data correction, including ionospheric path delay estimates based on the TEC maps [4].
Since July 2023, the ETAD has been operationally produced for every S-1 acquisition, with the exception of wave mode data. By monitoring the statistics of the ionospheric delay correction results, the SAR-MPC can keep track of the impact of solar activity on S-1 data. As of today, the largest ionospheric path delays recorded with S-1 correspond to more than 2 m and were detected in the ETAD results of September 2024, during a series of major solar eruptions. These results stand in strong contrast to the solar-quiet years, in which the delays reached about 0.5 m at maximum. One important driver in computing ionospheric path delays for the S-1 mission is the bottom-side ratio, which accounts for the fact that the S-1 satellites operate within the ionosphere, requiring a vertical separation of the total integrated TEC contained in the TEC maps. Presently we use a fixed ratio of 0.9 [4]. Modifications to this ratio and other modelling aspects, such as the slant-range mapping methods, were investigated in the S-1 ETAD scientific evolution study, employing the 3-D ionospheric model NEDM2020 [5]. Interestingly, our tests with a spatio-temporal modelling of the bottom-side ratio showed only minor improvements with S-1 measurements at the calibration sites. The SAR-MPC is now investigating how to better align these findings on ionospheric delay correction methods with the S-1 measurements at the calibration sites and the statistics provided through the systematic ETAD production. In this contribution, we will present the status of our work, closely following the activity of solar cycle 25 in the S-1 data, which is expected to remain high until 2026.
Acknowledgements: The S1-ETAD scientific evolution study, contract No. 4000126567/19/I-BG, was financed by the Copernicus Programme of the European Union implemented by ESA. Part of the results presented here are an outcome of the ESA contract Sentinel-1 / SAR Mission Performance Cluster Service 4000135998/21/I-BG.
Copernicus Sentinel-1 mission is funded by the EU and ESA. Views and opinions expressed are however those of the author(s) only, and the European Commission and/or ESA cannot be held responsible for any use which may be made of the information contained therein.
References:
[1] Hackel, S., Montenbruck, O., Steigenberger, P., Balss, U., Gisinger, C., Eineder, M. (2016): Model improvements and validation of TerraSAR-X precise orbit determination. Journal of Geodesy, 91(5), pp. 547-562. Springer. doi: 10.1007/s00190-016-0982-x. ISSN 0949-7714.
[2] Gomba, G., De Zan, F., Rommen, B., Orus Perez, R. (2022): Study on Ionospheric Effects on SAR and their Statistics. Proceedings of the European Conference on Synthetic Aperture Radar (EUSAR 2022), pp. 1-5, 2022-07-26 - 2022-07-27, Leipzig.
[3] Hajduch et al. (2024): S-1 Annual Performance Report for 2023. Technical report prepared by the S-1 SAR MPC, SAR-MPC-0634, Issue 1.3, 19.04.2024. Online: https://sentiwiki.copernicus.eu/web/document-library
[4] Gisinger, C., Libert, L., Marinkovic, P., Krieger, L., Larsen, Y., Valentino, A., Breit, H., Balss, U., Suchandt, S., Nagler, T., Eineder, M., Miranda, N. (2022): The Extended Timing Annotation Dataset for Sentinel-1 - Product Description and First Evaluation Results. IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1-22. doi: 10.1109/TGRS.2022.3194216.
[5] Hoque, M., Jakowski, N., Prol, F. (2022): A new climatological electron density model for supporting space weather services. J. Space Weather Space Clim., 12(1). doi: 10.1051/swsc/2021044.
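The frequency dependence of the ionospheric path delay discussed above follows the standard first-order dispersive relation Δr ≈ 40.3 · STEC / f². A minimal sketch of the idea (not the ETAD implementation; the fixed bottom-side ratio of 0.9 is taken from the abstract, while the flat-layer 1/sin(elevation) mapping is a simplifying assumption):

```python
import math

K = 40.3          # m^3/s^2, first-order ionospheric dispersion constant
TECU = 1.0e16     # electrons/m^2 per TEC unit

def iono_path_delay(vtec_tecu, freq_hz=5.405e9, bottom_side_ratio=0.9,
                    elevation_deg=90.0):
    """One-way ionospheric group delay in metres for a radar signal.

    vtec_tecu: vertical TEC from a GNSS-derived TEC map (TEC units).
    bottom_side_ratio: fraction of the TEC below the satellite orbit
    (the fixed 0.9 mentioned in the abstract; a real processor would
    model this ratio spatio-temporally).
    elevation_deg: signal elevation; a simple 1/sin(el) flat-layer
    mapping converts vertical to slant TEC (crude near the horizon).
    """
    stec = (vtec_tecu * TECU * bottom_side_ratio
            / math.sin(math.radians(elevation_deg)))
    return K * stec / freq_hz**2

# 100 TECU at zenith for Sentinel-1's C-band frequency (5.405 GHz):
delay = iono_path_delay(100.0, bottom_side_ratio=1.0)
```

At 100 TECU this gives roughly 1.4 m at C-band, consistent with the metre-level delays reported for the September 2024 solar storms.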

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: On the validation and assimilation of Sentinel-1C wave data in operational wave model MFWAM

Authors: Lotfi Aouf, Dr Fabrice Collard, Romain Husson, Bertrand
Affiliations: Météo France, CNRM
The coming launch of Sentinel-1C is excellent news for operational wave forecasting and for improving the coverage of SAR directional wave spectra over the global oceans. The revisit of certain ocean regions should improve significantly, enhancing the use of directional wave observations in operational wave models. This work aims to perform assimilation experiments with wave spectra provided by S-1C and to assess the quality of these data in comparison with the existing use of the S-1A and CFOSAT missions. It is a preliminary analysis for the use of these directional wave observations in the MFWAM wave model, which provides integrated wave parameters for the Copernicus Marine Service (CMEMS). The development of data quality control procedures is crucial to remove corrupted observations from the assimilation and sea state forecasts. Assimilation experiments have been implemented for a global configuration of the MFWAM model with a resolution of 20 km. Model runs with assimilation of S-1C SAR wave spectra alone, as well as jointly with the directional wave spectra from the S-1A and CFOSAT missions, will be analyzed to estimate the impact at several scales of dominant sea state in terms of swell and wind-sea wave regimes. The output from the experiments will be validated using significant wave height from altimeters and wave parameters from drifting buoys available over all oceans. Particular attention will be given to the Southern Ocean, with seas dominated by severe storms. Discussions and conclusions on the use of directional wave spectra in the MFWAM model will be reported to the mission performance centre.
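The integrated wave parameters used here for validation, such as significant wave height, are spectral moments of the wave spectrum. A minimal sketch of the standard relation Hs = 4·√m0, with m0 the zeroth moment of a discretized 1-D frequency spectrum (the discretization and trapezoidal integration are illustrative choices, not the MFWAM implementation):

```python
import math

def significant_wave_height(freqs, energy):
    """Hs = 4 * sqrt(m0), where m0 is the zeroth moment of the 1-D
    frequency spectrum E(f) [m^2/Hz], integrated with the trapezoidal
    rule over the frequency grid `freqs` [Hz]."""
    m0 = 0.0
    for i in range(len(freqs) - 1):
        df = freqs[i + 1] - freqs[i]
        m0 += 0.5 * (energy[i] + energy[i + 1]) * df
    return 4.0 * math.sqrt(m0)
```

For example, a flat spectrum of 1 m²/Hz over a 0.1 Hz band has m0 = 0.1 m² and hence Hs ≈ 1.26 m.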

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Refining Sentinel-1 Radiometric and Pointing Calibration by On-Board Temperature Compensation Emulation

Authors: Beatrice Mai, Andrea Recchia, Gilles Guitton, Harald Johnsen, Muriel Pinheiro, Antonio Valentino
Affiliations: Aresys Srl, OceanDataLab, NORCE, ESA, Starion
The Sentinel-1 (S-1) instrument is an active phased array antenna providing fast scanning in elevation and azimuth, which allows the implementation of the TopSAR acquisition mode, the main operational mode of the S-1 mission over land and ice. The SAR antenna front end is made of 280 Transmit/Receive Modules (TRMs) organized in 14 tiles (along the azimuth direction) of 20 TRMs each (along the elevation direction). Independent TRMs are available per polarization (H and V) and for Tx and Rx. Each TRM can be commanded in gain and phase to obtain the needed antenna pattern steering. Due to the limited memory available on board, a fixed set of steering coefficients is loaded:
• The steering coefficients to implement 16 Elevation Antenna Patterns (EAP): 6 Stripmap, 3 TopSAR Interferometric Wide Swath (IWS), 5 TopSAR Extra Wide Swath (EWS) and 2 Wave beams.
• The steering coefficients to implement 1024 Azimuth Antenna Patterns (AAP): a different sub-set of the azimuth patterns is used for each TopSAR beam depending on the required steering capability, while the azimuth pattern pointing at boresight is always used for Stripmap and Wave beams.
The status of the TRMs is continuously monitored by the SAR Mission Performance Cluster (MPC) by means of the dedicated Radio Frequency Characterization (RFC) acquisition mode. The RFC mode allows monitoring the status of each TRM in Tx and Rx and in H and V polarization. In particular, the gain and phase deviations w.r.t. the nominal TRM operating state (defined in orbit immediately after launch) are measured with this mode. This monitoring has in the past allowed the detection of a few TRM failures. During antenna operation, the TRMs' gain and phase vary around the commanded settings due to temperature variations [1]. To reduce these temperature-related variations and ensure better antenna pattern stability, an on-board temperature compensation strategy has been implemented.
The gain and phase variations within the TRMs are compensated by look-up tables based on the temperature of the TRM. The on-board temperature compensation for imaging operation can be enabled or disabled. The operational approach is continuous on-board temperature compensation of the TRMs, ensuring that the patterns remain of good quality even in the presence of high temperature gradients across the antenna. Accordingly, the ground processing calculates the antenna patterns based on the nominal commanded settings. A first verification of the effects of the on-board temperature compensation was made by observing the evolution of the gain and phase of all TRMs (Tx/Rx and both V/H polarizations) derived from a 24-hour sequence of RFC acquisitions (sampled every 5 minutes) during the S-1B commissioning phase, while the instrument was cooling down. Two different types of jumps were identified:
• Isolated jumps of single TRMs, due to the temperature compensation applied at TRM level.
• Simultaneous jumps of all 20 TRMs of the same tile, due to the temperature compensation applied at Tile Amplifier (TA) level.
From this first investigation, some important conclusions were drawn:
• The implemented temperature compensation strategy works well, ensuring stable excitation coefficients in the presence of large temperature variations (the temperature decrease during the monitored 24 hours was about 30 degrees, much larger than the temperature variations observed during operation). Indeed, the excitation coefficients at the beginning and at the end of the cooling are aligned.
• In the short term, the temperature compensation strategy introduces quantized gain and phase jumps. This can result in small distortions of the antenna patterns when a TRM (or, even worse, a TA) is working around a temperature for which a gain/phase adjustment is foreseen.
Moreover, the analysis of real S-1 data has shown in some cases:
• Small radiometric jumps at sub-swath boundaries, which can be a problem for radiometry-based applications (e.g., soil moisture retrieval or wind velocity estimation). Such small jumps could be introduced by changes in the excitation coefficients of the TRMs due to the temperature compensation that are not compensated on ground.
• Small Doppler Centroid jumps observed during long data takes, which introduce biases in the L2 Radial Velocity (RVL) products [2]. These products provide a measure of the ocean currents based on the DC estimated from the data (after removing the component related to the acquisition geometry). Again, these jumps could be introduced by changes in the excitation coefficients of the TRMs due to the temperature compensation slightly changing the azimuth pointing of the beam.
A procedure has been implemented to emulate the temperature compensation approach applied on board and to assess its effect on the antenna patterns. The aim is to confirm whether the above-mentioned effects could indeed be related to the on-board temperature compensation strategy. The following steps are repeated for a given Sentinel-1 data take:
• The time instants when the temperature compensation is applied on board are derived from the stream of Instrument Source Packets (ISPs) of the acquisition. For this purpose, the L0 Annotation (L0A) products are used, since they do not include the User Data Fields and cover the full data take. The time instants to be considered depend on the acquisition mode and on the platform.
• The telemetry data containing the temperatures of all the TRMs (provided with a sampling of 16 seconds) are interpolated to get the TRM temperatures at the time instants when temperature compensation is applied.
• For each TRM, the gain and phase variations during the data take are emulated considering the obtained temperatures and the on-board tables containing the predefined gain and phase settings.
• The obtained relative gain and phase variations are made absolute by means of the first available RFC acquisition performed before the data take (according to the acquisition plan, the time interval can span from some minutes to several hours).
• The absolute excitation coefficients are fed to the S-1 Antenna Model to predict the expected antenna pattern variations within the data take.
This presentation will provide an overview of the newest results obtained with the on-board temperature compensation emulation procedure described above. The results are aimed at identifying possible “fine” calibration strategies to compensate for the small data quality issues discussed above. This will be particularly relevant for the calibration of S-1C data. Indeed, during the S-1C IOC, a dedicated activity will be performed to test different temperature compensation strategies. This will provide more information, again aimed at solving calibration issues encountered by users of real S-1 data.
References:
[1] S1-PL-ASD-PL-0001, Sentinel-1 SAR Instrument Cal. and Char. Plan, issue 8.1, 25/02/2016.
[2] MPC-0534, Sentinel-1 Doppler and Ocean Radial Velocity (RVL) ATBD, issue 1.6, 10/10/2022.
Acknowledgements: The SAR Mission Performance Cluster (MPC) Service is financed by the European Union, through the Copernicus Programme implemented by ESA. Views and opinions expressed are however those of the author(s) only, and the European Commission and/or ESA cannot be held responsible for any use which may be made of the information contained therein. The authors wish to thank Francisco Ceba Vega (AIRBUS) and Ignacio Navas Traver (ESTEC) for the support provided in understanding the S-1 on-board compensation strategy approach.
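The first two emulation steps above, interpolating the 16-second temperature telemetry to the compensation instants and then applying a quantized look-up table, can be sketched as follows. The 5-degree temperature bins and the gain values are illustrative assumptions, not the actual on-board tables:

```python
import bisect

# Hypothetical on-board table: temperature thresholds (deg C) and the
# quantized gain settings (dB) used within each temperature interval.
TEMP_BINS = [10.0, 15.0, 20.0, 25.0, 30.0]   # illustrative 5-degree steps
GAIN_LUT  = [0.4, 0.3, 0.2, 0.1, 0.0, -0.1]  # one entry per interval

def interp_temperature(telemetry_times, telemetry_temps, t):
    """Linearly interpolate the 16-s-sampled TRM temperature telemetry
    to the instant t at which on-board compensation is applied."""
    i = bisect.bisect_left(telemetry_times, t)
    if i == 0:
        return telemetry_temps[0]
    if i >= len(telemetry_times):
        return telemetry_temps[-1]
    t0, t1 = telemetry_times[i - 1], telemetry_times[i]
    w = (t - t0) / (t1 - t0)
    return (1 - w) * telemetry_temps[i - 1] + w * telemetry_temps[i]

def compensated_gain(temp_c):
    """Quantized gain setting: constant within a temperature bin.
    Crossing a bin boundary produces the step-like jumps described
    in the abstract."""
    return GAIN_LUT[bisect.bisect_right(TEMP_BINS, temp_c)]
```

The quantization is what makes a TRM hovering near a bin boundary toggle between two settings, mirroring the small radiometric and Doppler Centroid jumps discussed above.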

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Observing ocean wave spectra from space: complementarity between CFOSAT-SWIM and Sentinel-1 SAR wave mode data

Authors: Charles Peureux, Annabelle OLLIVIER, Romain Husson, Lotfi Aouf, Cédric Tourain, Danièle Hauser
Affiliations: CLS, Météo-France, CNES, LATMOS
Wave spectra carry information for a detailed characterization of the ocean surface within their domain of definition: integrated parameters such as Hs and dominant waves, spectral and directional width, Stokes drift, etc. This work compares two databases of wave spectra measured from space, acquired with two different types of technology. SWIM is the first ocean wave scatterometer onboard CFOSAT, launched in 2018. With its 3 rotating beams at near-nadir incidence, it allows the measurement of ocean wave spectra in the wavelength domain of approximately 30 m to 500 m, at global scale. The SAR instruments onboard the Sentinel-1 constellation (A and B) have been acquiring images of the global ocean since 2014. Thanks to their wave mode acquisition configuration, ocean wave spectra are measured with global coverage every 100 km over the open ocean. A set of SWIM spectra collocated with S-1 and WAM is statistically compared. SWIM enables ocean wave characterization down to a few tens of meters in wavelength, with regular global coverage, whereas Sentinel-1 is limited to the longest wavelengths. Although noisy, SWIM data complement numerical wave prediction models such as WAM, and can be used to characterize quantities not accessible via altimetry: wave field directionality, peak parameters and more. Comparisons are shown between SWIM, Sentinel-1 and MFWAM (the French WAM version) for integrated wave parameters such as significant wave height, peak wavelength or peak direction of partitioned wind sea and swells. Sentinel-1 data are well complemented by the recent launch of Sentinel-1C. Analysis shows that SWIM can resolve the wind-sea part in approximately 25% of the sea states over the global ocean, which is higher than Sentinel-1. Sentinel-1 exhibits better capabilities to image long swells (wavelengths longer than 500 m) than SWIM does.
Improvements in SWIM processing are in the pipeline that could extend the validity of SWIM measurements to longer swells, up to 1200 m wavelength.
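The wavelength limits quoted above can be translated into peak wave periods via the deep-water linear dispersion relation λ = gT²/(2π); a small sketch under the deep-water assumption:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def deepwater_wavelength(period_s):
    """Deep-water linear dispersion: lambda = g * T^2 / (2 * pi)."""
    return G * period_s**2 / (2.0 * math.pi)

def deepwater_period(wavelength_m):
    """Inverse relation: T = sqrt(2 * pi * lambda / g)."""
    return math.sqrt(2.0 * math.pi * wavelength_m / G)
```

Under this relation, the 500 m upper limit of the SWIM domain corresponds to a peak period of about 17.9 s, and the 1200 m swells targeted by the processing improvements to about 27.7 s.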

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: First Commissioning Phase Results of the Internal Calibration Concept adapted for Sentinel-1C

Authors: Jakob Giez, Dr Patrick Klenk, Dr Kersten Schmidt, Dr Marco Schwerdt
Affiliations: German Aerospace Center (DLR)
Building on the achievements of Sentinel-1A (S-1A) and Sentinel-1B (S-1B), Sentinel-1C (S-1C) represents the third Sentinel-1 satellite in ESA's Copernicus program. The commissioning of S-1C ensures the continued provision of high-resolution C-band synthetic aperture radar (SAR) data for security applications and environmental monitoring, including land subsidence, ice movements, and ocean conditions (e.g., [1]), for the coming years. As with its predecessors, S-1C is equipped with an active phased array C-band antenna comprising 280 transmit/receive modules (TRMs) per polarization channel (H and V). These modules control the antenna beam steering in both azimuth and elevation directions. Like the instruments of the preceding two satellites, S-1C employs an internal calibration methodology based on the acquisition of different calibration signals, obviating the need for a dedicated calibration network. Furthermore, the pulse-coded calibration technique (also known as the PN gating method [2]) is employed, whereby special RF characterization (RFC) data will be acquired for the purpose of monitoring and ensuring the performance of the whole instrument down to individual TRMs. However, in order to mitigate the impact of spurious signals, which had been observed for S-1A and S-1B, modifications have been made to the antenna hardware in the form of newly developed tile amplifiers. This new architectural approach allows for a reduction from five to three different calibration signals, as well as the addition of new interleaved noise measurements. This results not only in alterations to the mode timelines but especially in a complete re-design of the internal calibration concept and the RFC mode. As was done for Sentinel-1A and Sentinel-1B ([3] and [4]), an independent SAR system calibration of Sentinel-1C is performed by DLR in parallel to the commissioning phase activities executed by ESA.
Ground testing of the new internal calibration concept has demonstrated its applicability using simulated data. This presentation will show the effectiveness of the adapted Sentinel-1C internal calibration concept by presenting first results based on real Sentinel-1C data acquired during the commissioning phase.
References:
[1] R. Torres, D. Geudtner, S. Lokas, D. Bibby, P. Snoeij, I. N. Traver, F. Ceba Vega, J. Poupaert, and S. Osborne, “Sentinel-1 Satellite Evolution,” in IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium, July 2018, pp. 1555-1558.
[2] D. Hounam, M. Schwerdt, M. Zink, “Active Antenna Module Characterisation by Pseudo-Noise Gating,” 25th ESA Antenna Workshop on Satellite Antenna Technology, Noordwijk, Netherlands, 2002.
[3] M. Schwerdt, K. Schmidt, N. Tous Ramon, G. Castellanos Alfonzo, B. J. Döring, M. Zink, and P. Prats-Iraola, “Independent Verification of the Sentinel-1A System Calibration,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 9, no. 3, pp. 994-1007, 2016.
[4] M. Schwerdt, K. Schmidt, N. Tous Ramon, P. Klenk, N. Yague-Martinez, P. Prats-Iraola, M. Zink, and D. Geudtner, “Independent System Calibration of Sentinel-1B,” Remote Sensing, vol. 9, no. 6: 511, 2017.
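The principle behind the pulse-coded (PN gating) calibration mentioned above, recovering each TRM's complex excitation from a summed antenna signal by correlating against per-module codes, can be illustrated with a toy example. Here 4 TRMs and orthogonal ±1 Hadamard rows stand in for the real pseudo-noise codes and 280 modules, which are assumptions for illustration only:

```python
import cmath

# Orthogonal +/-1 sequences (rows of a 4x4 Hadamard matrix); the real
# instrument uses pseudo-noise codes over 280 TRMs per polarization.
CODES = [
    [1,  1,  1,  1],
    [1, -1,  1, -1],
    [1,  1, -1, -1],
    [1, -1, -1,  1],
]

def encode(trm_gains):
    """Composite signal: in each of the 4 pulses, every TRM contributes
    its complex gain multiplied by its own code chip; the receiver
    only sees the sum over all modules."""
    return [sum(g * CODES[m][p] for m, g in enumerate(trm_gains))
            for p in range(4)]

def decode(composite):
    """Correlate the composite signal with each code to isolate the
    individual TRM gains (orthogonality makes the cross terms vanish)."""
    n = len(CODES)
    return [sum(composite[p] * CODES[m][p] for p in range(4)) / n
            for m in range(n)]

# Four modules with slightly different complex gains (amplitude, phase):
gains = [1.0, 0.9 * cmath.exp(1j * 0.1), 1.1, 0.95 * cmath.exp(-1j * 0.2)]
recovered = decode(encode(gains))
```

Because the codes are mutually orthogonal, the correlation recovers each module's gain and phase exactly in this noise-free toy, which is the property that lets RFC acquisitions characterize every TRM without a dedicated calibration network.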

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Copernicus POD Service: Status of Copernicus Sentinel Satellite Orbit Determination

Authors: Carlos Fernandez Martin, Jaime Fernandez Sanchez, Heike Peter, Muriel Pinheiro, Carolina Nogueira-Loddo
Affiliations: GMV Aerospace & Defence, PosiTim UG, ESA/ESRIN, EUMETSAT
The Copernicus Precise Orbit Determination (CPOD) Service is integral to the Ground Segment of the Copernicus Sentinel missions (specifically Sentinel-1, -2, -3, and -6), providing essential orbit products and auxiliary data files. These resources are crucial for the operational production of scientific core products within ESA's and EUMETSAT's Production Services (PS) and are available to external users via the Copernicus Data Space Ecosystem (https://dataspace.copernicus.eu/). Since its establishment in April 2014, CPOD has consistently supported the Copernicus program alongside the launches of successive Sentinel satellites. Historically reliant on NAPEOS, a Flight Dynamics and POD software suite from ESOC, a significant evolution within CPOD has been the transition to a GMV-owned software suite, FocusPOD. Developed from scratch in 2021 using modern C++ and Python, FocusPOD was adopted in CPOD and declared operational in 2023. This represents a leap forward in processing capabilities and integration, thanks to modern technologies and development paradigms tailored specifically to the unique requirements of the Copernicus missions. During this transition, state-of-the-art accuracy standards were maintained while runtime performance improved notably. GMV, in collaboration with the CPOD Quality Working Group (QWG), oversees the ongoing evolution of precise orbit determination systems. The CPOD QWG includes institutions such as AIUB, CNES, DLR, ESOC, JPL/NASA, TU Delft, TU Munich, TU Graz, and GFZ, among others, contributing to quality control, integration, and validation of new algorithms and standards. The CPOD Service achieves state-of-the-art accuracy, with 3D RMS consistency below 1 cm against non-time-critical products from QWG centers, and excels in timeliness by generating products in under five minutes to support near-real-time processing, all while maintaining operational robustness.
Recent initiatives include analyzing the impact of seasonal geocenter motion modelling through the latest ITRF2020 standards. This involves assessing solutions from QWG centers in Centre of Mass (CoM) and Centre of Network (CoN) realizations, which affects orbit comparisons and combination strategies. We are also enhancing the Sentinel-3 short-time-critical products via single-receiver ambiguity fixing strategies and updating the Sentinel-6 macro-models. This presentation will showcase the current performance of the POD products and the impact of the recent analyses. Additionally, we will outline future developments in CPOD aimed at continuously improving our products and maintaining critical support for the precision and efficiency of upcoming Copernicus missions.
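The "3D RMS consistency" metric quoted above is simply the root-mean-square of the 3-D position differences between two orbit solutions evaluated at common epochs; a minimal sketch:

```python
import math

def rms_3d(orbit_a, orbit_b):
    """3D RMS of position differences between two orbit solutions.

    Each orbit is a list of (x, y, z) positions in metres, sampled at
    the same epochs, e.g. one CPOD solution and one QWG-center solution.
    """
    if len(orbit_a) != len(orbit_b):
        raise ValueError("orbits must be sampled at the same epochs")
    sq = sum((ax - bx)**2 + (ay - by)**2 + (az - bz)**2
             for (ax, ay, az), (bx, by, bz) in zip(orbit_a, orbit_b))
    return math.sqrt(sq / len(orbit_a))
```

A value below 0.01 m over a comparison arc corresponds to the sub-centimetre consistency reported in the abstract.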

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: 3 Years of Observations of the Corner Reflector Network Graz

Authors: Karlheinz Gutjahr, Michael Avian
Affiliations: Joanneum Research, Geosphere Austria
Corner reflectors (CRs) are artificial passive reflectors of different shapes, sizes and materials (e.g., Qin et al., 2013; Jauvin et al., 2019) which have been used in many (In)SAR-related studies. Typically, CRs serve as calibration and reference targets in such studies and thus allow investigations and optimizations of radiometric as well as geometric parameters of the SAR system. With respect to geometric calibration activities, the following works are illustrative: Mohr and Madsen, 2001 (ERS); Small et al., 2004 (ENVISAT); Schubert et al., 2008 (TerraSAR-X); Nitti et al., 2015 (COSMO-SkyMed); and Gisinger et al., 2020 (TerraSAR-X and Sentinel-1). Recently, CRs have been increasingly used for ground motion monitoring applications, especially in areas that suffer from a lack of coherent natural radar reflections (Strozzi et al., 2013; Jauvin et al., 2019; Qin et al., 2013; and the nationally funded project VIGILANS). Dedicated CR-based studies on atmospheric path delays can be found in, e.g., Jehle et al., 2008 or Eineder et al., 2011. Both experiments used CR “valley - mountain-top” constellations to investigate topography-induced path delay effects. For this reason, Joanneum Research and Geosphere Austria have joined forces and established a CR network around the city of Graz, Austria, in order to shed more light on two key research questions: 1. Modelling of atmospheric path delays. 2. Deformation monitoring using CRs. In total, four double-headed CRs were installed in the surroundings of Graz (ordered south to north): (i) two CRs at the airport Graz-Thalerhof (THN and THS), (ii) one at Graz-Lustbühel (LBL), all three in the flat and hilly areas of the Grazer Feld, and (iii) one on the Graz-Schöckel (SKL) plateau at 1442 m a.s.l.
The special features of this network are that (i) LBL is close to the renowned Satellite Laser Ranging (SLR) station at the Lustbühel Observatory and (ii) THS was equipped with a fixed shifting device, enabling controlled east/west and up/down movements. For Sentinel-1, depending on the imaging geometry, a CR can theoretically appear in up to three bursts. As the CRs THN, THS and SKL each appear in two consecutive bursts of one orbit (counting only the valid range of bursts), there are a total of 15 detections. Additionally, all four CRs could be monitored in one TerraSAR-X stripmap data stack. After (at the time of the abstract) 2.5 years of maintaining and monitoring the CR network, we can summarize:
ALE
The absolute localisation errors (ALE) for Sentinel-1 are in a feasible range of -0.32 to +0.70 m in azimuth and -0.18 to +0.14 m in range direction. All these numbers include the corrections provided by the S1-ETAD product (Sanchez et al., 2023). The authors were part of the S1-ETAD pilot study set up by ESA between January and September 2022, which aimed to provide early access to ETAD products to expert users, promoting independent validation and supporting the definition of eventual improvements of the product. However, the measurements of THN in ascending orbit 146 show a higher ALE of about 0.30 m in range direction. To exclude multi-path effects, we conducted a terrestrial laser scanning campaign and measured the distances to possible other reflectors nearby. Although a paved road and a metallic fence are close to THN, given their distance a multipath effect cannot fully explain the deviation. The ALEs for the TerraSAR-X data stack, although also in ascending orbit direction, do not show any deviating behaviour of THN.
The ALE in azimuth direction is in the range from -0.01 to 0.02 m and, after replacing the standard atmospheric range correction with corrections based on ERA-5 or the AROME NWP model, the ALE in range direction is in the range from 0.04 to 0.10 m.
d-InSAR
To the best of our knowledge, the shifting device developed for THS is unique and allows a simple yet very controlled shift of the whole CR in east/west and up/down direction. We simulated several “movements” of the CR, most of which were independently controlled by terrestrial measurements. To evaluate the accuracy of observable surface displacements using differential SAR interferometry (d-InSAR), we computed the differences in line-of-sight (LOS) displacements between the stable corner reflector THN and the “moving” corner reflector THS. The observed d-InSAR LOS displacement differences (ΔLOS THN-THS) were compared with terrestrial measurements of differences in East, North, and Height directions, projected onto the incidence angles of the respective Sentinel-1 orbits. The analysis of around 60 d-InSAR measurements revealed a mean difference of 0.75 mm ± 1.07 mm for ascending orbit 146, −0.46 mm ± 1.31 mm for descending orbit 22, and 0.06 mm ± 1.96 mm for descending orbit 124.
Summary
Corner reflectors are essential tools for SAR and InSAR applications, supporting calibration, atmospheric modeling, and ground motion monitoring, especially in areas with limited natural radar reflectivity. To address these challenges, Joanneum Research and Geosphere Austria established a CR network near Graz, Austria, featuring four dual-headed CRs, including one with a novel shifting device for controlled movements. Over 2.5 years, observations from Sentinel-1 and TerraSAR-X demonstrated feasible absolute localization errors and robust displacement monitoring capabilities.
Differential SAR interferometry analysis validated line-of-sight displacement measurements with mean differences of 0.75 mm ± 1.07 mm (ascending orbit) and −0.46 mm ± 1.31 mm to 0.06 mm ± 1.96 mm (descending orbits), underscoring the network's value in geophysical research.
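The projection of terrestrial East/North/Height differences onto the radar line of sight, as used for the THN-THS comparison, is a dot product with the LOS unit vector. A hedged sketch, where the sign convention (positive toward the satellite) and the use of a look-azimuth angle are assumptions about the geometry rather than the authors' exact formulation:

```python
import math

def los_displacement(d_east, d_north, d_up, incidence_deg, look_azimuth_deg):
    """Project an ENU displacement (metres) onto the radar line of sight.

    incidence_deg: local incidence angle, measured from vertical.
    look_azimuth_deg: azimuth of the ground-to-satellite look direction,
    clockwise from north. Positive result = motion toward the satellite.
    """
    inc = math.radians(incidence_deg)
    az = math.radians(look_azimuth_deg)
    # Unit LOS vector pointing from the ground toward the satellite (ENU).
    los = (math.sin(inc) * math.sin(az),
           math.sin(inc) * math.cos(az),
           math.cos(inc))
    return d_east * los[0] + d_north * los[1] + d_up * los[2]
```

For a purely vertical shift of the reflector, the LOS displacement reduces to d_up · cos(incidence), which is why ascending and descending orbits with different incidence angles yield different ΔLOS values for the same physical movement.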

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Roadmap for the next generation of Sentinel-1 Level-2 Ocean Products

Authors: Husson Romain, Amine Benchaabane, Guillaume Hajduch, Pauline Vincent, Charles Peureux, Antoine Grouazel, Alexis Mouche, Frédéric Nouguier, Yngvar Larsen, Anna Fitch, Geir Engen, Fabrice Collard, Gilles Guitton
Affiliations: CLS, IFREMER, NORCE, OceanDataLab
Over the past years, the Level-2 experts of the Sentinel-1 Mission Performance Center (MPC-S1) have gained much experience in understanding the capabilities and limitations of providing users with the most accurate and well-qualified S-1-derived sea state parameters: sea surface wind vectors, wave spectra and radial velocity. These variables are provided by the S-1 Instrument Processing Facility (IPF) in a single Level-2 product referred to as the OCN (OCeaN) product. In the IPF, some processing specificities can lead to inconsistencies in the Level-2 OCN products or prevent easy exploration of more advanced and synergetic sea state retrieval methodologies. For instance, wind products are produced from Ground Range Detected (GRD) products, while wave spectra and radial velocity are produced from Single-Look Complex (SLC) processing. This prevents the use of wind-related variables only available in SLC products and makes it hard to merge wind and wave retrievals in a combined inversion. Besides, on top of the geophysical parameters already available in the OCN products, state-of-the-art techniques have shown the need to provide new information that can benefit both current users, by better qualifying the existing sea state products, and new users such as meteorologists and oceanographers, by providing new variables (e.g., classification of various atmospheric stability conditions [8]). This is typically the case for SAR texture-based information that can be derived from Deep Neural Networks (DNNs) to provide segmentation/classification of the sea ice, atmospheric or oceanic processes at stake. Following the guidelines from the SEASAR 2023 workshop, we propose several major evolutions to prepare the ground for the retrieval of more exhaustive, more accurate and better qualified L2 OCN products.
Provide new SAR observables: the Co-/Cross-Polarization Coherence (CCPC) [3], the IMACS parameter (Imaginary MeAn Cross-Spectra) [4], wind streak orientation [5] and the geophysical Doppler shift [6] are typical examples of variables that can be extracted from the SLC products. They can be used in complementary approaches to better constrain the sea state retrieval and avoid using ancillary data such as wind vectors from Numerical Weather Prediction (NWP) models. AI methods applied to sea surface SAR observations are very useful to identify phenomena that can degrade the quality of the existing OCN products but are not available in these Level-2 products. Providing classification/segmentation information in OCN products for mature algorithms, or more user-friendly data formats enabling the derivation of new AI techniques, would be very beneficial to foster the development of such promising methods. Investigate new Level-2 processing approaches that would start from Level-0 or Level-1 SLC products instead of using Level-1 GRD as inputs. Such approaches are expected to bring more freedom: 1) for signal processing, to avoid applying the filtering/windowing required by applications other than sea state retrieval, and 2) to access the extended burst overlap in the time/frequency domain to estimate burst cross-spectra over larger regions. Such approaches were successfully tested in initiatives from NORCE with their GDAR OCN libraries and have already proven the concept for swell and radial velocity retrievals. The current procedure used for Sigma0 calibration in co- and cross-polarizations relies on measurements over the rain forest to obtain a flat gamma profile and over transponders for absolute calibration at these point locations. Other approaches, referred to as “geophysical calibration”, are based on the overall agreement between NWP models and Sigma0 measurements using available GMFs.
Such empirical methods compensate for residual calibration errors such as imperfect Elevation Antenna Pattern (EAP) corrections, which also evolve over time and among S1 products. Validation against in situ measurements shows improved performance for retrieving sea surface winds [7]. Building a massive and exhaustive dataset of Sentinel-1 products, together with well-qualified reference datasets from numerical model outputs (e.g. CERRA, the future ERA6 for atmospheric models) and in situ measurements (e.g. Spotter drifters, moored buoys), would also greatly benefit the understanding of how SAR observables depend on sensing and environmental conditions. This would ideally require the availability of the entire Sentinel-1 archive reprocessed with a homogeneous processing chain. Such massive datasets are key for deriving new sea state retrieval methodologies that combine them all, using either analytical or AI techniques. As a complement, conducting inter-comparisons and inter-calibration between various spaceborne measurements can dramatically help address issues requiring the largest possible number of SAR observations. This is typically the case for Tropical Cyclone (TC) monitoring, which can only be performed daily by a constellation of SARs maintained by ESA (S1), CSA (RCM, RS2) and JAXA (ALOS-2). This topic also recalls the need to investigate synergies between C-band and L-band missions, to prepare for the future ROSE-L mission with the current ALOS-2 and the NISAR mission to be launched in Q1-2025. Similarly, ensuring the consistency of sea surface winds derived from SAR and scatterometers is necessary so that they can be used together by downstream users. Direct comparisons between co-located acquisitions with short time lags, or indirect co-locations against common in situ references, are needed to provide inter-calibrated wind measurements despite their different resolutions and sensing technologies. 
Finally, dedicated efforts are also required to provide users with a user-friendly confidence level, distributed with each sea state variable. To that end, using a Bayesian retrieval based on a wide set of SAR observables, together with the inversion residual, should help quantify the uncertainty of each variable.
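Such a Bayesian confidence estimate can be sketched as a cost-function minimisation over the geophysical variable; the toy geophysical model function `gmf` and all numerical values below are illustrative assumptions, not the operational MPC-S1 scheme.

```python
import numpy as np

def gmf(wind_speed):
    """Toy geophysical model function: sigma0 (linear units) vs wind speed.
    A stand-in for an operational C-band GMF such as CMOD5.N (assumption)."""
    return 0.02 + 0.01 * wind_speed

def bayesian_wind(sigma0_obs, sigma0_err, prior_speed, prior_err):
    """Minimise J(w) = ((sigma0_obs - gmf(w))/sigma0_err)^2
                     + ((w - prior_speed)/prior_err)^2
    by brute-force search; the residual J at the minimum can feed a
    per-retrieval confidence level."""
    speeds = np.linspace(0.0, 30.0, 3001)
    cost = ((sigma0_obs - gmf(speeds)) / sigma0_err) ** 2 \
         + ((speeds - prior_speed) / prior_err) ** 2
    i = np.argmin(cost)
    return speeds[i], cost[i]

w, resid = bayesian_wind(sigma0_obs=0.12, sigma0_err=0.005,
                         prior_speed=9.0, prior_err=2.0)
```

Here the observation alone would give 10 m/s; the prior pulls the solution slightly towards 9 m/s, and the residual quantifies how consistent the two are.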
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: DLR’s Independent Calibration of the Sentinel-1C System – First Results from S1C Commissioning Phase Activities

Authors: Dr Patrick Klenk, Dr Kersten Schmidt, Jakob Giez, Matteo Nannini, Andrea Pullela, Dr. Pau Prats-Iraola, Dr Marco Schwerdt
Affiliations: German Aerospace Center (DLR)
The European Space Agency’s (ESA) Sentinel-1C (S1C) is the third satellite of the Sentinel-1 mission. To be launched in December 2024, it will ensure seamless continuity of C-band SAR data for global monitoring of the Earth's surface in the framework of the Copernicus program (e.g., [1]). In parallel to the commissioning of S1C by ESA, an independent system calibration is performed by DLR on behalf of ESA. Based on an efficient calibration strategy, this paper details the different activities planned and executed by DLR and presents first calibration results. Due to the stringent performance requirements of Sentinel-1, the DLR SAR Calibration Center already performed, on behalf of ESA, a similarly organized independent end-to-end system calibration of S1A in 2014 ([2], [3]) and of S1B in 2016 [4], relying on a separate dedicated suite of in-house analysis tools and the innovative and highly stable reference ground targets deployed along DLR’s SAR calibration field (e.g., [5]) in Southern Germany. However, S1C is not simply an exact rebuild of its predecessors S1A/B but implements a series of hardware improvements based on lessons learnt from the previous missions. These novel aspects and their impact on the calibration strategy will first be briefly introduced in this presentation. Launch of S1C is currently foreseen for early December 2024, with the ensuing commissioning phase activities to be performed between early January and late April 2025. This will therefore allow us to present and discuss the results achieved by the DLR team during the S1C in-orbit commissioning phase at the symposium. After a general overview of all DLR activities, this presentation will focus on a detailed assessment of all L1-based performance results, such as pointing and antenna model verification, point target evaluations and InSAR verification activities. Last but not least, results of cross-calibration activities between Sentinel-1A and S1C acquisitions will be discussed. 
References: [1] R. Torres, D. Geudtner, S. Lokas, D. Bibby, P. Snoeij, I. N. Traver, F. Ceba Vega, J. Poupaert, and S. Osborne, “Sentinel-1 Satellite Evolution,” in IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium, July 2018, pp. 1555–1558. [2] M. Schwerdt, K. Schmidt, N. Tous Ramon, G. Castellanos Alfonzo, B. J. Döring, M. Zink, and P. Prats-Iraola, “Independent Verification of the Sentinel-1A System Calibration,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 9, no. 3, pp. 994–1007, 2016. [3] M. Schwerdt, K. Schmidt, N. Tous Ramon, G. Castellanos Alfonzo, B. Doering, M. Zink, and P. Prats, “Independent Verification of the Sentinel-1A System Calibration - First Results,” in EUSAR 2014; 10th European Conference on Synthetic Aperture Radar, June 2014, pp. 1259–1262. [4] M. Schwerdt, K. Schmidt, N. Tous Ramon, P. Klenk, N. Yague-Martinez, P. Prats-Iraola, M. Zink, and D. Geudtner, “Independent System Calibration of Sentinel-1B,” Remote Sensing, vol. 9, no. 6: 511, 2017. [5] M. Jirousek, B. Doering, D. Rudolf, S. Raab, and M. Schwerdt, “Development of the highly accurate DLR Kalibri Transponder,” in EUSAR 2014; 10th European Conference on Synthetic Aperture Radar, June 2014, pp. 1176–1179.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A.01.05 - POSTER - Ozone and its precursors through the Atmosphere: Advances in understanding and methods

Ozone is a fundamentally important constituent of the atmosphere. In the troposphere it is a greenhouse gas and a pollutant that is detrimental to human health and to crop and ecosystem productivity. Tropospheric data are available from ozonesondes, aircraft, and satellites, but high levels of uncertainty and bias remain. In the stratosphere, ozone protects the biosphere from UV radiation; long-term observations from satellites and the ground have confirmed that the long-term decline of stratospheric ozone was successfully stopped as a result of the Montreal Protocol. Future stratospheric ozone levels depend on many factors, including the latitude domain, interactions with the troposphere, and potentially the mesosphere.

This session is dedicated to the presentation of methods and results furthering the understanding of the distribution of ozone and its precursors throughout the atmosphere using remote sensing techniques, with particular emphasis on advanced methods for past and current missions such as OMI and Sentinel-5P, and on preparations for future missions such as ALTIUS and Sentinels 4 & 5 and their synergies with other missions.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A Posteriori Fusion of IASI, MIPAS and GOME2 Ozone Profile Products

Authors: Nicola Zoppetti, Liliana Guidetti, Dr Simone Ceccherini, Piera Raspollini, Ugo Cortesi
Affiliations: Ifac-cnr
In this work, we introduce a new dataset of atmospheric ozone profiles derived from the synergy of three satellite instruments: IASI, GOME-2, and MIPAS. This dataset is global in scope, spanning the period from January 2008 to April 2012, and is mapped onto a regular time-latitude-longitude grid. While the grid is not fully covered due to the inherent characteristics of the contributing instruments, the dataset provides comprehensive spatial and temporal coverage of the atmospheric ozone distribution. The dataset is constructed using the Complete Data Fusion (CDF) method, an algebraic algorithm rooted in the Optimal Estimation (OE) technique. This method integrates individual OE retrievals from the three instruments, leveraging their complementary strengths to enhance the accuracy and completeness of the resulting profiles. By combining the high vertical resolution of MIPAS (Envisat, IFAC-CNR data), the high spatial coverage of IASI (Metop-A, ULB-LATMOS data) and the ultraviolet sensitivity of GOME-2 (Metop-A, ACSAF data), the CDF approach delivers a more robust and detailed representation of atmospheric ozone. We describe the genesis of this dataset, focusing on its unique characteristics from both the perspective of individual profiles and aggregated large-scale patterns. The dataset is evaluated against several reference sources, including the original retrievals, radiosonde measurements, and atmospheric models, depending on the context of the analysis. A key aspect of our work is a detailed exploration of the methodological contributions of each instrument to the fused product, emphasizing the added value brought by the CDF approach. Preliminary validation of the fused dataset involves comparisons with ozone radiosonde profiles, offering insights into its accuracy and reliability. Additionally, the gridded structure facilitates direct comparisons with global atmospheric models. 
Examples of such comparisons are presented, showcasing the potential applications of this dataset in advancing our understanding of atmospheric dynamics. Finally, we discuss the road map for publishing this dataset within the framework of a digital infrastructure currently under development. This infrastructure aims to ensure the dataset's accessibility, usability, and integration with existing atmospheric and climate research tools, thereby supporting its future use in a wide range of scientific studies and applications.
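The fusion step can be illustrated with a simplified inverse-covariance combination of the individual OE profiles; this sketch deliberately omits the averaging-kernel and a priori terms of the full Complete Data Fusion formalism, and all arrays are hypothetical.

```python
import numpy as np

def fuse_profiles(profiles, covariances):
    """Simplified inverse-covariance-weighted fusion of OE ozone profiles:
        x_f = S_f @ sum_i(S_i^-1 @ x_i),   S_f = (sum_i S_i^-1)^-1.
    The full CDF additionally accounts for averaging kernels and the
    common a priori (omitted here for brevity)."""
    s_inv_sum = np.zeros_like(covariances[0])
    rhs = np.zeros_like(profiles[0])
    for x, s in zip(profiles, covariances):
        s_inv = np.linalg.inv(s)
        s_inv_sum += s_inv
        rhs += s_inv @ x
    s_f = np.linalg.inv(s_inv_sum)   # covariance of the fused profile
    return s_f @ rhs, s_f
```

With equal covariances the fused profile reduces to the plain mean of the inputs; instruments with tighter covariances pull the result towards their own retrieval.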
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Development of a Merged CO Climate Data Record from IASI and MOPITT Observations

Authors: Maya GEORGE, Cathy Clerbaux, Juliette Hadji-Lazaro, Sarah Safieddine, Simon Whitburn, Selviga Sinnathamby, Daniel Hurtmans, Pierre Coheur, Helen Worden, Corinne Vigouroux, Bavo Langerock, Steven Compernolle
Affiliations: LATMOS/IPSL, Sorbonne Université, UVSQ, CNRS, Spectroscopy, Quantum Chemistry and Atmospheric Remote Sensing (SQUARES), Université libre de Bruxelles (ULB), Royal Meteorological Institute of Belgium (RMIB), Atmospheric Composition, Measurements and Modelling (ACM2), Atmospheric Chemistry Observations and Modeling, National Center for Atmospheric Research, Royal Belgian Institute for Space Aeronomy (BIRA)
Carbon monoxide (CO) is a key atmospheric compound that can be remotely sensed by satellite on a global scale. Continuous observations have been available since 2000 from the MOPITT/Terra instrument. Since 2007, the IASI/Metop instrument series has provided another homogeneous CO data record, thanks to the recent reprocessing of Metop-A and Metop-B data by EUMETSAT, resulting in the IASI CO Climate Data Record (IASI CO-CDR). Measuring the variability and trends of CO on a global scale is crucial as it serves as a precursor for ozone and carbon dioxide and regulates the troposphere's oxidizing capacity through its destruction cycle involving the hydroxyl radical (OH). As part of the ESA CCI+ Ozone Precursors project, we have been developing a merged CO Climate Data Record dataset combining IASI and MOPITT data to analyze long-term variability and trends. Monthly averaged gridded CO total columns (Level 3, 1°x1° resolution) are used as input. For IASI, we first apply an additional cloud mask to the Level 2 official data available on the Aeris French Database (https://iasi.aeris-data.fr/). We then compute monthly averages using IASI CO data from all Metop satellites, resulting in an intermediate (non-public) IASI CO monthly Level 3 product. For MOPITT, we use the official monthly Level 3 (version 9T) data available on the NASA Earth Data Portal (https://www.earthdata.nasa.gov/). We tested various methodologies for merging IASI and MOPITT CO Level 3 monthly grids. We performed averages with weighting schemes based on MOPITT priors and/or IASI/MOPITT uncertainties. In this poster, we will present the final version of the CO CCI merged product, which uses MOPITT CO total column/MOPITT prior ratios as weights for averaging. Among the different algorithm versions tested, this approach showed the best performance when validated against ground-based FTIR NDACC measurements, achieving a mean absolute bias below 5%, low standard deviation, and excellent correlation.
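The selected merging scheme (weighting MOPITT by its column-to-prior ratio) can be sketched as a per-cell weighted average; the fixed unit weight assigned to IASI below is an illustrative assumption, not the documented CCI configuration.

```python
import numpy as np

def merge_co_grids(co_iasi, co_mopitt, prior_mopitt):
    """Merge two monthly 1x1-degree CO total-column grids (molecules/cm^2).
    The MOPITT weight is its retrieved-column / prior-column ratio;
    IASI is given a unit weight here (illustrative assumption)."""
    w_mopitt = co_mopitt / prior_mopitt
    w_iasi = np.ones_like(co_iasi)
    return (w_iasi * co_iasi + w_mopitt * co_mopitt) / (w_iasi + w_mopitt)
```

Cells where the MOPITT retrieval equals its prior (ratio 1) thus reduce to a simple mean of the two instruments.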
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Initial investigations of altitude-resolved ozone variability for the past 2.5 decades using the novel GOME-type Ozone Profile Essential Climate Variable (GOP-ECV) data record

Authors: Dr Melanie Coldewey-Egbers, Dr. Diego Loyola, Richard Siddans, Barry Latter, Brian Kerridge, Michel Van Roozendael, Daan Hubert, Michael Eisinger
Affiliations: German Aerospace Center, Rutherford Appleton Laboratory, Royal Belgian Institute for Space Aeronomy, European Space Agency
In this paper, we present first applications of the novel GOME-type Ozone Profile Essential Climate Variable (GOP-ECV) data record for the 26-year period 1995 through 2021. GOP-ECV has been developed in the framework of the European Space Agency’s Climate Change Initiative+ ozone project (Ozone_cci+) and combines ozone profile measurements from a series of European nadir-viewing satellite sensors including GOME, SCIAMACHY, OMI, GOME-2A, and GOME-2B into a coherent long-term climate data record. The Rutherford Appleton Laboratory (RAL) scheme is used to retrieve ozone profiles on 20 fixed pressure levels ranging from the surface up to 80 km. Profiles from the individual instruments are first harmonized through careful elimination of inter-sensor deviations and drifts and then merged to generate a consistent monthly mean gridded product at a spatial resolution of 5°x5°. For the harmonization, OMI serves as the reference sensor. In a further step, the merged product is homogenized with the well-established GTO-ECV (GOME-type Total Ozone Essential Climate Variable). This data record is based on nearly the same satellite sensors and possesses excellent long-term stability, which enables us to further improve the coherence and reliability of the merged nadir profiles from the first step. An altitude-dependent scaling, based on the profile Jacobians derived from a Machine Learning approach, is applied to the profiles. With this adjustment, full consistency between the GTO-ECV and GOP-ECV data records in terms of the total ozone column is achieved. We use the GOP-ECV data record to investigate the temporal evolution and long-term variability of the partial columns and ozone anomalies for selected atmospheric layers during the past 2.5 decades. The anomalies will be compared with anomalies derived from the SBUV (Solar Backscatter Ultraviolet Radiometer) Merged Ozone Data Set (MOD) from the SBUV satellite instrument series covering the period 1970-2023. 
On top of that, we show results of an initial comparison with ozonesonde measurements in the tropics gathered from the Southern Hemisphere Additional Ozonesonde (SHADOZ) network archive. We find a low bias in the lowermost layers, which turns into a positive bias above 150 hPa. Furthermore, we demonstrate the impact of the scaling on the temporal evolution of the difference between GOP-ECV and the ground-based data.
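The total-column adjustment described above can be sketched as distributing the column difference over the layers with altitude-dependent weights; the uniform weights below stand in for the ML-derived Jacobian weighting of the actual GOP-ECV processing, so this is a simplified assumption.

```python
import numpy as np

def scale_profile_to_total(partial_columns_du, target_total_du, weights=None):
    """Adjust layer partial columns (DU) so their sum matches a reference
    total column (e.g. GTO-ECV). The difference is distributed according
    to altitude-dependent weights; uniform weights are used by default
    (illustrative stand-in for Jacobian-based weights)."""
    x = np.asarray(partial_columns_du, dtype=float)
    if weights is None:
        weights = np.ones_like(x)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalise weights
    delta = target_total_du - x.sum()     # column to redistribute
    return x + delta * w
```

After this adjustment the profile integrates exactly to the reference total column, which is the consistency property the homogenization step targets.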
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: The Unique Contribution of Infrared Sounder Measurements to Understanding Antarctic Ozone Hole Dynamics

Authors: Guido Masiello, Tiziano Maestri, Carmine Serio, Giuliano Liuzzi, Michele Martinazzo, Federico Donat, Lorenzo Cassini, Pamela Pasquariello, Marco D'Emilio, Sara Venafra
Affiliations: Department of Engineering, University of Basilicata, Department of Physics and Astronomy, University of Bologna, Department of Civil, Building and Environmental Engineering, University of Rome, Italian Space Agency
The ozone hole over Antarctica is a yearly occurrence that forms and grows during the Southern Hemisphere's spring. It typically reaches its maximum size in October or November and then diminishes as temperatures in the Antarctic stratosphere rise in December. This warming prevents the formation of Polar Stratospheric Clouds (PSCs), which are crucial for ozone depletion. PSCs form when temperatures drop below 195 K, allowing nitric acid and water vapor to condense into ice crystals. Various satellite instruments, such as the Ozone Monitoring Instrument (OMI) and the TROPOspheric Monitoring Instrument (TROPOMI), track the ozone hole. These instruments rely on reflected sunlight to measure ozone concentrations, limiting their ability to monitor the early stages of the ozone hole when the polar region is still dark. Additionally, they cannot directly detect nitric acid and water vapor in the gas phase. Microwave instruments like MLS/AURA can monitor nitric acid but have coarse spatial resolution and are insensitive to the thermodynamic conditions in the upper troposphere and lower stratosphere (UT/LS) region. Recent improvements in forward and inverse modeling techniques have enabled scientists to simultaneously retrieve thermodynamic conditions, ozone, and nitric acid concentrations from Infrared Atmospheric Sounding Interferometer (IASI) measurements. IASI, with its polar orbit, provides excellent spatial and temporal coverage of the ozone hole. By analyzing IASI data collected over Antarctica from 2021 to 2023, we discovered a significantly larger and deeper ozone hole than indicated by ECMWF analysis, which relies on TROPOMI and OMI data that are limited during winter, especially in the Antarctic interior. The study found a correlation between decreasing nitric acid concentrations and upper tropospheric temperatures below 195 K, supporting the role of nitric acid trihydrate (NAT) particles in ozone depletion. IASI spectra near the pole confirmed the presence of NATs. 
Furthermore, a comparison of HNO3 spatial patterns from IASI and MLS/AURA showed strong agreement, indicating that the observed nitric acid decline primarily occurs in the upper troposphere under cold conditions favorable for NAT formation. The study demonstrates how infrared sounder measurements offer valuable insights for understanding Antarctic ozone hole dynamics, pointing to a fundamental contribution of EE9-FORUM in this direction.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Characterization of the TROPOMI UV radiometric calibration for the operational Ozone Profile retrieval algorithm

Authors: Serena Di Pede, Dr Pepijn Veefkind, Dr Maarten Sneep, Dr Mark ter Linden, Dr Erwin Loots, Emiel van der Plas, Edward van Amelrooy, Mirna van Hoek, Antje Ludewig, Arno Keppens
Affiliations: Royal Netherlands Meteorological Institute (KNMI), Delft University of Technology, Royal Belgian Institute for Space Aeronomy (BIRA-IASB)
Daily global ozone profile measurements are essential to understand ozone-related physical and chemical processes in the atmosphere. Ozone profile information can be derived from the UV backscattered radiation, as the ozone absorption cross section varies by more than three orders of magnitude. However, in order to retrieve accurate information on the trace gas, especially in the troposphere, the quality and calibration of the measured radiances is crucial. The operational TROPOMI Ozone Profile retrieval is obtained from the TROPOMI radiances in the UV band 1 (270-300 nm) and band 2 (300-330 nm), with a spectral resolution of 1.0 nm and 0.5 nm, respectively, and a spectral sampling of 0.065 nm. To optimize the retrieval and improve the fitting precision, it is common practice to apply an additional calibration correction to the input radiances. This radiometric calibration, known as “soft-calibration”, is applied to the input radiances at the L2 processing level, before performing the retrieval itself. The TROPOMI soft-calibration is a time-dependent correction, updated yearly. It is computed from the comparison of the measured radiances with forward model calculations, taking into account four orbits per year in order to capture the seasonal radiance variation. For each orbit, the correction parameters are computed as a function of wavelength, orbit ground pixel, and radiance level. The current radiometric soft-calibration correction can reach up to ~30% of the input radiance (in a relative sense), especially at wavelengths < 300 nm, and it shows a distinctive spectral shape. In order to decrease the size of the correction, and to improve its spectral shape, the effect of detector straylight has been thoroughly investigated and will be presented. 
In this contribution, we will first give an overview of the current TROPOMI operational radiometric correction, and then examine in depth the complex effect that detector straylight has on the size and trend of the soft-calibration. In particular, we will discuss the importance of keeping the correction as stable and as small as possible over time, which is essential for the temporal consistency of the retrieval quality and for eliminating residual systematic biases in the radiance that can significantly affect the precision of the tropospheric ozone column estimate.
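As an illustration, a per-wavelength multiplicative soft-calibration factor could be derived as a smoothed ratio of forward-model to measured radiance; the smoothing choice and window size below are simplified assumptions, and the sketch ignores the ground-pixel and radiance-level dependence of the operational correction.

```python
import numpy as np

def soft_calibration(measured, modelled, window=5):
    """Derive a per-wavelength multiplicative correction factor as the
    smoothed ratio of forward-model to measured radiance, then apply it
    to the measured spectrum. Minimal sketch; the operational correction
    also depends on ground pixel and radiance level."""
    ratio = modelled / measured
    kernel = np.ones(window) / window          # simple moving average
    factor = np.convolve(ratio, kernel, mode="same")
    return measured * factor
```

For a spectrally flat bias the corrected radiance matches the forward model away from the array edges (where the moving average is only partially filled).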
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Observation of chlorine activation by means of TROPOMI measurements of OClO from 2017–2025

Authors: Janis Pukite, Steffen Ziegler, Thomas Wagner
Affiliations: Max Planck Institute for Chemistry
Chlorine dioxide (OClO) is a by-product of the ozone-depleting halogen chemistry in the stratosphere. Although rapidly photolysed during daytime, it plays an important role as an indicator of chlorine activation in polar regions during polar winter and spring at twilight conditions, because of the nearly linear dependence of its formation on chlorine oxide (ClO). The TROPOspheric Monitoring Instrument (TROPOMI) is a UV-VIS-NIR-SWIR instrument on board the Sentinel-5P satellite developed for monitoring the composition of the Earth’s atmosphere. Launched on 13 October 2017 into a near-polar orbit, it provides continuous monitoring of many constituents, including observations of OClO at an unprecedented spatial resolution. We analyze the time series (2017–2025) of slant column densities (SCDs) of OClO in the polar regions. In particular, we focus on the highly variable conditions in the NH polar regions by comparing the OClO time series with meteorological data and CALIPSO CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) polar stratospheric cloud (PSC) observations for both the Antarctic and Arctic regions. This allows us to investigate the conditions under which chlorine activation starts and ends.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Tropospheric Ozone Retrieval Using the RAL UV Algorithm: Applications to Geostationary and Polar-Orbiting Satellites with Early Insights from GEMS and TEMPO

Authors: Ka Lok Chan, Richard Siddans, Brian Kerridge, Barry Latter
Affiliations: RAL Space
The ozone profile retrieval algorithm developed for UV nadir sounders by RAL is a robust and versatile scheme for extracting height-resolved ozone distributions from spectral observations in the ultraviolet (UV) band. It is applicable also to preceding nadir-viewing sensors aboard both polar-orbiting satellites (e.g., GOME, GOME-2, OMI and Sentinel-5P) and newly available geostationary satellites (e.g., GEMS and TEMPO). Using the optimal estimation method, the scheme provides information on tropospheric ozone (surface to 450 hPa) in particular, as well as on higher layers, and the data have been exploited in a series of scientific studies concerning the role of ozone in atmospheric chemistry, climate, and air quality. Having been re-engineered for ESA Sentinels-4 and -5, the scheme has recently undergone significant enhancements, enabling harmonized application across multiple satellite platforms with differing orbits and observational characteristics. These advancements have improved the algorithm's precision, consistency, and computational efficiency, ensuring its adaptability to both polar and geostationary instruments. A key focus has been the optimization of the retrieval algorithm for geostationary satellite instruments, such as GEMS, TEMPO and Sentinel-4, which provide unprecedented temporal resolution and coverage for monitoring ozone variability over specific regions. This presentation will provide an overview of the current state of tropospheric ozone data retrieved from polar-orbiting instruments, highlighting its validation against ozonesonde measurements and illustrating its utility in various applications. Early results from the geostationary instruments GEMS and TEMPO will be illustrated, emphasizing their capabilities in capturing diurnal ozone variation. These results will be compared against data from polar-orbiting instruments and ozonesonde observations to assess consistency and reliability. 
By harmonizing ozone profile retrievals across satellite platforms and leveraging the unique advantages of geostationary sensors, the RAL algorithm represents a significant step forward in atmospheric monitoring. These advancements pave the way for a more comprehensive understanding of tropospheric ozone at regional and global scales, offering complementary information for air quality management and climate research.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: The Antarctic stratospheric nitrogen hole: Southern Hemisphere and Antarctic springtime total nitrogen dioxide and total ozone variability as observed by Sentinel-5p TROPOMI and the stratospheric denitrification process.

Authors: Jos deLaat
Affiliations: KNMI
Daily Sentinel-5p nitrogen dioxide total column measurements - in conjunction with total ozone column data - are used to study daily, seasonal and interannual Southern Hemisphere middle-latitude and polar (Antarctic) spatio-temporal variability from 2018 to 2021 during Austral spring, with a particular focus on the Antarctic ozone hole and the stratospheric denitrification process. Correlating total nitrogen dioxide columns and total ozone columns using phase diagrams reveals intricate patterns. Although denitrification is a crucial process for the formation of the ozone hole, the relation between total ozone and total nitrogen dioxide is far from simple. Results reveal two main regimes: inner-vortex air depleted of ozone and nitrogen dioxide, and outer-vortex air enhanced in ozone and nitrogen dioxide. Within the vortex, total ozone and total stratospheric nitrogen dioxide are strongly correlated, which is much less evident outside the vortex. Denitrification inside the Antarctic ozone hole (stratospheric vortex) during Austral spring can clearly be observed. In the phase diagrams, these two main regimes are linked via a third regime of so-called “mixing lines”: coherent patterns in the total nitrogen dioxide column - total ozone column phase space connecting the two main regimes. These “mixing lines” exist because of spatial differences in the locations of minimum and maximum nitrogen dioxide and total ozone and differences in their respective spatial gradients. This strongly suggests that total nitrogen dioxide columns and total ozone columns reflect coherent physico-chemical processes occurring at different altitudes, thereby providing information about vortex dynamics and cross-vortex-edge mixing. The characteristics of the relation between nitrogen dioxide and ozone vary significantly during Austral spring. Interannual variability between 2018 and 2021, on the other hand, is rather small, and for any time of the year the phase diagrams are very similar. 
The sole exception is 2019, a year with a highly unstable Antarctic stratospheric vortex and significantly more mixing of inner-vortex and outer-vortex air. The distinction between the three regimes is nevertheless robust irrespective of date and time. The results show that daily stratospheric nitrogen dioxide column measurements from nadir-viewing satellites like TROPOMI – and thus many of its predecessors like OMPS, OMI, GOME-2, SCIAMACHY and GOME – provide a new means for monitoring stratospheric nitrogen dioxide and denitrification in the springtime Antarctic stratosphere and, in conjunction with daily total ozone column data, also springtime Antarctic stratospheric vortex dynamics. Finally, these findings are not entirely new, in the sense that they were reported in the early 2000s based on the satellite instruments GOME and SCIAMACHY. However - and to some extent surprisingly - there has never been any effort to explore them further with additional data and/or newer satellite instruments. This "rediscovery" is rather timely, as the Earth observation capacity to monitor the stratosphere is rapidly aging and key instruments like MLS will end by 2026.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Inter-comparison of tropospheric ozone column data sets from combined nadir and limb satellite observations

Authors: Carlo Arosio, Viktoria Sofieva, Andrea Orfanoz-Cheuquelaf, Alexei Rozanov, Klaus-Peter Heue, Edward Malina, Roeland Van Malderen, Jerry Ziemke, Mark Weber
Affiliations: Institute of Environmental Physics, University of Bremen, Finnish Meteorological Institute, German Aerospace Center, DLR, ESA ESRIN, Royal Meteorological Institute of Belgium, NASA GSFC
Satellite observations provide a valuable monitoring tool for tropospheric ozone, particularly after the launch of the ESA Sentinel missions. This study is part of the ESA project Ozone Recovery from Merged Observational Data and Model Analysis (OREGANO) and focuses on satellite data sets derived using limb-nadir combined observations. This approach exploits the total ozone column from nadir observations and stratospheric column information from limb measurements (or models) to obtain the tropospheric ozone column (TrOC) as a residual. This study contributes to the Tropospheric Ozone Assessment Report (TOAR) II activity. Seven data sets are considered in our analysis: some combine two satellite-based observations, others combine satellite observations with model or reanalysis data. At IUP, TrOC data sets were derived using the limb-nadir matching technique from SCIAMACHY and OMPS observations and were merged to obtain a product covering the 2002-2023 time frame. Three more long-term satellite-based products are considered: OMI-LIMB and GTO-LIMB developed at the Finnish Meteorological Institute, and OMI-MLS developed at NASA. Other shorter TrOC products involving model data, such as OMPS-MERRA, EPIC-MERRA and S5P-BASCOE, are included in this study to perform an overall inter-comparison between the existing data sets. We compared the data sets in terms of climatology and seasonality, investigated the tropopause height used in the construction of each data set and related biases, and finally evaluated long-term TrOC trends and drift with respect to ozonesondes. The overall goal of the study is to assess the consistency between the data sets and explore possible strategies to reconcile the differences between them. Despite uncertainties associated with the limb-nadir residual methodology and large biases between the mean values of the considered data sets, we show an overall agreement of TrOC morphology. 
We demonstrate that the average drift with respect to ground-based observations is close to zero and that long-term trends in specific regions can be consistently detected, for instance, the positive trend of up to 1.5 DU per decade observed over Southeast Asia.
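The limb-nadir residual at the core of these products can be sketched as subtracting an integrated stratospheric column from the nadir total column; the layer layout and numbers below are purely illustrative, not from any of the compared data sets.

```python
import numpy as np

def tropospheric_residual(total_column_du, layer_columns_du,
                          layer_pressures_hpa, tropopause_hpa):
    """Limb-nadir residual: TrOC = total ozone column (nadir, DU)
    minus the stratospheric column, i.e. the sum of limb-derived layer
    partial columns (DU) at pressures above (numerically below) the
    tropopause pressure level."""
    strat_mask = layer_pressures_hpa <= tropopause_hpa
    strat_column = layer_columns_du[strat_mask].sum()
    return total_column_du - strat_column
```

This also makes the sensitivity to the tropopause choice explicit: shifting the tropopause pressure moves whole layers between the stratospheric and tropospheric columns, which is one of the bias sources investigated above.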
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Tropospheric Ozone from CCD and CSA: Data extension and harmonization from TROPOMI to SCIAMACHY

Authors: Kai-Uwe Eichmann, Swathi M. Satheesan, Dr. Mark Weber
Affiliations: Institute of Environmental Physics
The algorithms of the convective cloud differential method CCD/CHORA (Cloud Height adjusted Ozone Reference Algorithm) and the cloud slicing method CSA/CHOVA (Cloud Height Ozone Variation Algorithm) are based on the method developed by Ziemke et al. (1998, 2001). They retrieve tropical tropospheric column ozone (TCO) [DU] and the ozone volume mixing ratio [ppbv], respectively. This work summarizes the extension of the algorithms from TROPOMI and the GOME-2 instruments to SCIAMACHY, OMI, and GOME, and the harmonization of the datasets. The nadir-viewing TROPOMI spectrometer aboard the S5p satellite, launched in October 2017, provides both high spatial resolution and daily coverage of the Earth. More than six years of GODFIT total ozone and OCRA/ROCINN CRB (cloud reflecting boundary) cloud fraction and height operational level-2 data (versions ≥ 2.4) are available and are combined to retrieve tropospheric ozone. The GOME-2 A, B, and C instruments also provide total ozone and cloud retrieval data (versions ≥ 4.8), but on a coarser grid, for the period 2007 to 2023. The CHORA algorithm has been optimized for the TROPOMI- and GOME-2-type instruments. The ACCO (Above Cloud Column Ozone) is calculated in the Pacific sector. In a post-processing step, it is interpolated and smoothed in time/latitude space to reduce data gaps and scatter in the daily ACCO(latitude) 1D fields. The upper-tropospheric ozone volume mixing ratio (TTO) [ppbv] is retrieved with the cloud slicing method CSA/CHOVA by regression analysis of ACCO and CP (cloud pressure) pairs. Monthly mean volume mixing ratios are calculated in the Pacific sector from the above-cloud column ozone (ACCO) at the 270 hPa pressure level. Daily total ozone is averaged in a small grid box with a latitude/longitude resolution of 0.5° x 0.5° (1° x 1° for GOME-2) to minimize errors from spatial variations in stratospheric ozone. All datasets have been successfully validated using SHADOZ ozone sonde profiles. 
Low CHORA biases (TROPOMI ~11%, GOME-2 < 6%) and a dispersion of ~6 DU are found. The CHOVA/TROPOMI bias is about -4% with 11 ppbv dispersion. The temporal sampling of the TROPOMI data is one day, owing to the large number of daily measurements, and three days for the GOME-2 instruments. The data retrieval for SCIAMACHY, OMI, and GOME is currently work in progress. Here we present results on the time evolution of tropospheric ozone for the 4+ sensors and on the harmonization of the datasets for both retrieval methods. Part of this work was funded by the German Federal Ministry for Economic Affairs and Energy (BMWi) via the TROPO3-MIDLAT project. The work on TROPOMI/S5P geophysical products is funded by ESA and national contributions from the Netherlands, Germany, Belgium, and Finland. We thank the NASA/GSFC SHADOZ team for providing the ozone sonde data.
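The cloud-slicing regression at the heart of the CSA method can be sketched as follows. The (ACCO, cloud pressure) pairs below are invented for illustration, and the conversion constant of ~1.27 × 10³ ppbv hPa/DU is the commonly quoted value from the Ziemke et al. cloud-slicing literature:

```python
import numpy as np

# Hypothetical (cloud pressure, above-cloud column ozone) pairs. In cloud
# slicing, the upper-tropospheric ozone mixing ratio follows from the slope
# of ACCO (DU) versus cloud pressure (hPa).
cloud_pressure = np.array([300.0, 400.0, 500.0, 600.0, 700.0])  # hPa
acco = np.array([248.0, 252.0, 256.0, 260.0, 264.0])            # DU

# Slope dOmega/dP in DU/hPa from a least-squares fit of the pairs.
slope = np.polyfit(cloud_pressure, acco, 1)[0]

# Convert the slope to a volume mixing ratio (approximate constant).
vmr_ppbv = 1.27e3 * slope
print(f"Upper-tropospheric O3: {vmr_ppbv:.1f} ppbv")
```

Real retrievals additionally filter the pairs for sufficiently thick convective clouds and quality-screen the regression; the sketch shows only the slope-to-mixing-ratio step.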
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Harmonized Tropospheric Ozone Data Records From Satellites Produced for the Second Tropospheric Ozone Assessment Report: Methodology and Outcomes

Authors: Arno Keppens, Daan Hubert, Oindrila Nath, José Granville, Jean-Christopher Lambert
Affiliations: Royal Belgian Institute for Space Aeronomy
The first Tropospheric Ozone Assessment Report (TOAR) encountered several observational challenges that limited the confidence in estimates of the burden, short-term variability, and long-term changes of ozone in the free troposphere. One of these challenges is the difficulty of interpreting tropospheric observations from space, especially when combining data records from multiple satellites with differences in vertical sensitivity, prior information, resolution and spatial domain. Additional confounding factors are time-varying biases and the lack of harmonization of geophysical quantities, units, and definitions of the tropospheric top level. Altogether, these factors reduced the confidence in the observed distributions and trends of tropospheric ozone, impeding firm assessments relevant for policy and science. These challenges motivated the Committee on Earth Observation Satellites (CEOS) to foster a coordinated response to improving assessments of tropospheric ozone measured from space. Here, we report on work and resulting harmonized datasets that contribute to this CEOS activity, as well as to the ongoing second phase of the TOAR assessment. Our primary objective is to harmonize the vertical perspective of different ozone data records from satellites, using the Copernicus Atmosphere Monitoring Service Re-Analysis (CAMSRA) as a transfer standard. A first class of products is obtained through an inversion of spectral measurements by nadir-viewing sounders into a vertical ozone profile. We illustrate several approaches to harmonize the differing profile retrievals for the GOME-2, IASI, OMI and TROPOMI sensors, making use of prior information and vertical averaging kernels. A second class of tropospheric ozone products is obtained through subtraction of the stratospheric component from total column retrievals. We present how all products, from both classes, can be harmonized to a common tropospheric top level. 
The effect of all harmonization approaches on tropospheric ozone assessments, both in terms of global distributions and long-term changes, is discussed. We additionally anchor the satellite records to monthly gridded ozonesonde data obtained from the TOAR HEGIFTOM (Harmonization and Evaluation of Ground-based Instruments for Free Tropospheric Ozone Measurements) working group, both before and after harmonization. This provides a view on whether the tropospheric ozone column harmonization yields a better agreement with reference data as well. The presented harmonization methodology is currently under review for the Tropospheric Ozone Assessment Report Phase II (TOAR-II) Community Special Issue (ACP/AMT/BG/GMD inter-journal SI, https://acp.copernicus.org/articles/special_issue1256.html). The harmonized datasets are planned to become available for download through the Belgian BRAIN-be 2.0 TAPIOWCA project (long-Term Assessment, Proxies and Indicators of Ozone and Water vapour changes affecting Climate and Air quality, https://tapiowca.aeronomie.be/).
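The averaging-kernel step used to harmonize the vertical sensitivity of the profile products can be illustrated with the standard optimal-estimation smoothing relation. This is a minimal sketch with idealized values; the profiles and the diagonal kernel below are invented for illustration:

```python
import numpy as np

# A high-resolution comparison profile x (e.g. from a transfer standard such
# as a reanalysis) is smoothed with a retrieval's prior x_a and averaging
# kernel matrix A:  x_smoothed = x_a + A @ (x - x_a),
# so it carries the same vertical sensitivity as the satellite product.
n = 5
x_a = np.full(n, 40.0)                    # prior ozone profile (illustrative)
x = np.array([30.0, 45.0, 60.0, 55.0, 35.0])  # high-resolution profile
A = 0.7 * np.eye(n)                       # idealized averaging kernel

x_smoothed = x_a + A @ (x - x_a)
print(x_smoothed)
```

With a kernel of 0.7 on the diagonal, the smoothed profile relaxes each level 70% of the way from the prior toward the high-resolution profile, which is the mechanism by which differing priors and vertical sensitivities can be placed on a common footing.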
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: D.01.03 - POSTER - Synergies between ESA DTE Programme and DestinE Ecosystem

This session shows the potential of dynamic collaboration between ESA DTE Programme activities and the opportunities provided by the DestinE Platform. The session includes presentations about the capabilities available on the DestinE Platform, and the framework defined to grow the ecosystem of services through onboarding opportunities for ESA and non-ESA activities. It also includes presentations on the pre-operational innovative services and applications developed under ESA DTE activities (such as the Digital Twin Components) and their synergies with the DestinE Platform.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: DestinE Platform – Collaborative Endpoint for AI Tenancies

Authors: Sebastien Tetaud
Affiliations: ESA
This paper introduces a collaborative, cloud-based environment designed to support the efficient management and utilization of Earth Observation (EO) and DestinE Digital Twin data. The environment offers a workspace that deploys tailored virtual machines with the necessary computational resources (CPU/GPU), facilitating advanced data processing, modeling, and AI-driven analytics. It includes a private Model and Dataset Registry, enabling seamless access to and sharing of datasets and AI models, along with a DestinE Python library for easy integration with the platform’s services. The environment also offers AI-focused educational content and a community space to promote collaboration and best practices within the EO domain. By empowering users with state-of-the-art tools, this platform fosters innovation in Earth System Modeling and enhances the application of AI in EO research and operations. The presentation discusses the environment, its integration capabilities, and its role in enabling secure and efficient collaboration across various users.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: DestinE Sea Ice Decision Enhancement (DESIDE): A Destination Earth Use Case

Authors: David Arthurs
Affiliations: Polar View
Ships operating in the polar regions encounter hazards that present elevated levels of risk, and more severe consequences when accidents occur. The DESIDE project is utilizing Destination Earth system capabilities and data to provide comprehensive sea ice and related information for policy and operational decision makers in the polar regions. Benefits to polar operations and society include:

1. More accurate information that supports strategic and tactical decision-making for enhanced safety of life and property.
2. More efficient route optimization that minimizes ship emissions for pollution reduction.
3. Better forecasts that help policymakers protect environmentally sensitive areas affected by changing polar conditions.

The DESIDE project is:

• Aggregating diverse information sources to provide common products across jurisdictional boundaries.
• Producing new forecast products to improve decision-making by users.
• Customizing delivery of products to different user communities based on their needs.

The drivers for the project are:

• Regulatory Compliance: Delivering short- and medium-term forecasts of ice, meteorological, and ocean conditions, meeting the requirements of the IMO Polar Code.
• Climate Change Effects: Providing long-term forecasts on changing ice and other conditions, enabling planning and policy development for the fishing, tourism, research, and oil and gas industries.

DESIDE is demonstrating the added value of the DestinE system in supporting policy and decision making at three levels within the context of polar operations:

• Execution support: Supporting ships needing to avoid or navigate through sea ice.
• Planning support: Supporting ship operators in planning polar voyages, guided by the information requirements of the IMO Polar Code.
• Strategy and policy support: Supporting organizations and policy analysts wanting to assess the impact of climate change on future decisions regarding polar operations.

Workflow:

• Data Ingestion: Collect past, current, and forecasted information on sea ice, snow thickness, icebergs, ocean currents and waves, wind, temperature, visibility, and Sentinel-1 imagery from DESP/DestinE.
• Data Processing, Modeling, and Analysis: Use models, machine learning, and algorithms to process data for different user communities.
• Information Dissemination: Through decision support platforms.
• Information Product Generation: Create short-, medium-, and long-term sea ice charts, risk profiles, and route optimization suggestions for better decision-making.

Decision support is provided in three ways to meet the different needs and levels of sophistication of the user groups:

• IcySea: Tactical decision support for ships operating in polar regions.
• Polar TEP: Research collaboration platform for the private, academic, and public sectors.
• Polar Dashboard: Strategic decision support for policy analysts and residents.

The DESIDE team consists of:

• Polar View
• EOX
• Drift+Noise Polar Services
• Norwegian Meteorological Institute
• Finnish Meteorological Institute
• Danish Meteorological Institute
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Destination Renewable Energy: Renewable Energy Forecasting on DestinE platform using Digital Twin data

Authors: Rizos-Theodoros Chadoulis, Charalampos Kontoes, Theodora Papadopoulou, Stelios Kazadzis, George Koutalieris, Christos Stathopoulos, Platon Patlakas, Angelos Georgakis, Kyriakoula Papachristopoulou, Thanassis Drivas, Nikolaos S. Bartsotas, Symeon Symeonidis, Vasileios Perifanis, Athanasios Koumparos, David Casalieri, Vasileios Sinnis
Affiliations: National Observatory of Athens (NOA), Institute for Astronomy, Astrophysics, Space Applications and Remote Sensing (IAASARS), BEYOND Center of Earth Observation Research and Satellite Remote Sensing, ENORA INNOVATION, Weather & Marine Engineering Technologies P.C., Quest Energy, PMOD-WRC
Renewable energy systems like solar and wind inherently depend on weather and climate conditions. As the world confronts climate change and the imperative to reduce greenhouse gas emissions, accurate forecasting, standardized forecasting models and protocols, and the transferability of these models become critical for efficiently operating and integrating renewable energy sources into electricity grids. The Destination Renewable Energy (DRE) project, a Use Case within the European Space Agency's Destination Earth (DestinE) platform, addresses these challenges by providing the Hybrid Renewable Energy Forecasting System (HYREF), a hybrid (solar and wind) application for renewable energy forecasting at different time scales. HYREF leverages the DestinE Platform's extensive, high-quality global data catalogue, which includes outputs from high-resolution numerical weather prediction models, Weather-induced Extremes Digital Twin forecasts, and Data Lake resources such as Copernicus and ERA5 reanalysis data. By incorporating end-user-provided historical and real-time energy production data, HYREF enables precise forecasting for specific locations and energy infrastructures. The system combines numerical models with satellite-based Earth observation data to provide detailed information on solar and wind availability, covering spatial scales from individual rooftops to regional and national levels. The HYREF system is designed to be flexible, scalable, and user-driven, evolving through continuous interactions and feedback from end users and market stakeholders. Emphasis has been placed on user interface and user experience design to ensure that the application is not only functional but also intuitive and accessible. The system incorporates a user authentication interface that integrates with the DestinE Platform's Identity and Access Management component, providing secure access control and differentiating roles such as Production Site Manager and Weather Modelling Scientist. 
By providing precise and efficient forecasts for solar and wind power production, the HYREF system enables the combination of different renewable energy sources to ensure a steady energy supply. This assists policymakers, energy producers, and other stakeholders in optimizing resource allocation, improving energy efficiency, and formulating strategies aligned with global green and digital transformation objectives such as the Paris Agreement, the United Nations Sustainable Development Goals (SDGs), and the European Green Deal. Suitable for a wide range of users—from private rooftop owners to large-scale industrial facilities and national grid operators—HYREF maximizes the DestinE Platform's capabilities by synergistically using data and models. Leveraging DestinE's robust data infrastructure, which offers access to diverse, high-quality environmental data globally and high-performance computing capabilities, HYREF improves forecast accuracy, adapts to specific regional characteristics, enables what-if scenario testing to understand the impacts of different environmental conditions on renewable energy production, and significantly enhances its scalability. These advancements contribute directly to achieving international sustainability goals by facilitating the transition to clean energy sources and supporting measures to increase energy efficiency. In a nutshell, the DRE project represents a significant advancement in renewable energy forecasting. By leveraging cutting-edge technology and collaborative platforms, it addresses the pressing challenges of climate change and the global energy transition. HYREF provides actionable and meaningful information, serving as a vital tool for policymakers, energy producers, and other stakeholders. This not only supports the global shift toward sustainable energy solutions but also aligns with international efforts to achieve a greener and more resilient future.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Generating a Digital Twin with CARS, a scalable open-source Multiview Stereo framework

Authors: Yoann Steux, David Youssefi, Loïc Dumas, Mathis Roux, Marian Rassat, Cédric Traizet, Tommy Calendini
Affiliations: Cs-group, CNES
CARS is a CNES open-source 3D reconstruction software developed as part of the Constellation Optique 3D (CO3D) mission. CARS stands out from other multi-view stereo methods due to its highly parallelizable design, capable of addressing large volumes of data for processing on an HPC cluster or a personal machine. It uses high-resolution images such as Pleiades and SPOT imagery. This innovative pipeline applies advanced image processing techniques to generate precise 3D models of the Earth's surface. Being extensible, CARS can also facilitate the creation of digital twins, offering the possibility to visualize and interact with 3D models in virtual environments. This extension supports a range of physical simulations, including flood modeling and heat island analysis, which are valuable for urban planning, disaster management, and environmental monitoring. The flexible nature of the CARS framework unlocks new opportunities to apply satellite data across various domains, providing enhanced decision-making tools through realistic, dynamic digital models of the physical world.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Development of a General-Purpose Multi-Scale 3D Synthetic Scene Generator for Simulation and Analysis

Authors: Yves Govaerts, Dr Vincent Leroy, Mr Nicolae Marton
Affiliations: Rayference
Combining ground and satellite observations is crucial for validating space-based quantitative data, as these observations offer complementary information. Remote sensing measurements, whether ground-based or satellite-derived, are inherently influenced by both atmospheric properties and surface reflectance. However, ground-based up-looking observations are highly sensitive to atmospheric aerosol properties and only marginally influenced by surface reflectance, whereas down-looking satellite observations often exhibit a stronger sensitivity to the surface. Additionally, these different types of data are often collected at varying spatial scales, making direct comparisons challenging and necessitating the use of potentially unreliable upscaling approaches. Radiative transfer models are essential for providing a theoretical basis to interpret both space- and ground-based data. Typically, these models operate under simplified assumptions of a homogeneous atmosphere and surface, which limits their ability to accurately account for radiative processes occurring across different spatial and temporal scales. This highlights the need for more sophisticated approaches that consider radiative processes occurring at different scales for improved data validation and understanding. To address these limitations and understand the impact of surface heterogeneities on CalVal activities or for the design of new missions, Rayference is developing a general-purpose multi-scale 3D synthetic scene generator. This tool is designed to create customizable, detailed 3D scenes that can be tailored to various spatial scales, from micro-scale surface details to macro-scale landscapes. 
This generator allows the representation of detailed vegetation structure, water bodies, artificial surfaces, clouds, and more. By allowing the assignment of optical properties and supporting integration within Eradiate, our open-source 3D radiative transfer model, it enables the simulation of ground and satellite observations in a radiatively consistent framework. This generator is essential for advancing calibration and validation activities that require realistic simulations of complex environments. The modular nature of the tool ensures that it can be adapted for diverse use cases, from future mission preparation to advanced scientific research. The outcome is a powerful, flexible platform that enhances the capacity to generate detailed 3D scenes. Practical examples of applications of this synthetic scene generator will be shown, combining the simulation of ground and satellite observations. For that purpose, synthetic scenes corresponding to different land cover types are generated, and satellite images with different characteristics are simulated. As the characteristics of the scenes are completely defined, these synthetic images can be used to benchmark retrieval algorithms. In conclusion, this synthetic scene generator could contribute to a 3D Radiative Digital Twin Earth Component dedicated to physically based, realistic modelling of the solar radiation reflected by the Earth as seen from space and from the ground. It will support our understanding of the integration of multi-scale information. It will allow the generation of big data sets of realistic satellite images for training AI-enabled algorithms, such as machine learning techniques. It will also allow the verification of our understanding of the radiative Earth through direct comparison between simulated satellite images and actual observations at various temporal and spatial scales. This development is funded by the ESA 3DREAMS project.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A.02.02 - POSTER - Terrestrial and Freshwater Biodiversity

Preserving the integrity and health of natural ecosystems, and the biodiversity they host is crucial not only for the vital services they provide to sustain human well-being, but also because natural ecosystems with a high degree of integrity and diversity tend to exhibit elevated levels of productivity and resilience. The importance of safeguarding biodiversity is increasingly recognised in many Multilateral Environmental Agreements (MEAs) which all place great emphasis on the sustainable management, restoration and protection of natural ecosystems.

The pivotal role of ecosystems in maintaining ecological balance and supporting human well-being is a unifying theme in MEAs. Noting that, despite ongoing efforts, biodiversity is deteriorating worldwide and that this decline is projected to continue under business-as-usual scenarios, Parties to the Convention on Biological Diversity (CBD) adopted the Kunming-Montreal Global Biodiversity Framework (GBF) at the 15th Conference of the Parties in December 2022. The GBF represents the most ambitious and transformative agenda to halt biodiversity loss by 2030 and allow for the recovery of natural ecosystems, ensuring that by 2050 all the world’s ecosystems are restored, resilient, and adequately protected. In Europe, the EU Biodiversity Strategy for 2030 aims to put Europe’s biodiversity on the path to recovery by 2030 by addressing the main drivers of biodiversity loss.

The emergence of government-funded satellite missions with open and free data policies and long-term continuity of observations, such as the Sentinel missions of the European Copernicus Programme and the US Landsat programme, offers an unprecedented ensemble of satellite observations which, together with very high resolution sensors from commercial vendors, in-situ monitoring systems and field work, enables the development of satellite-based biodiversity monitoring systems. The combined use of different sensors opens pathways for a more effective and comprehensive use of Earth Observations in the functional and structural characterisation of ecosystems and their components (including species and genetic diversity).

In this series of biodiversity sessions, we will present and discuss the recent scientific advances in the development of EO applications for the monitoring of the status of and changes to terrestrial and freshwater ecosystems, and their relevance for biodiversity monitoring, and ecosystem restoration and conservation. The development of RS-enabled Essential Biodiversity Variables (EBVs) for standardised global and European biodiversity assessment will also be addressed.

A separate LPS25 session on "Marine Ecosystems" is also organised under the Theme “1. Earth Science Frontiers - 08 Ocean, Including Marine Biodiversity”.

Topics of interest include (but are not limited to):
• Characterisation of the change patterns in terrestrial and freshwater biodiversity.
• Integration of field and/or modelled data with remote sensing to better characterise, detect changes to, and/or predict future biodiversity in dynamic and disturbed environments on land and in the water.
• Use of Earth Observation for the characterisation of ecosystem functional and structural diversity, including the retrieval of ecosystem functional traits (e.g., physiological traits describing the biochemical properties of vegetation) and morphological traits related to structural diversity.
• Sensing ecosystem function at diel scale (e.g., using geostationary satellites and exploiting multiple individual overpasses in a day from low Earth orbiters and/or paired instruments, complemented by subdaily ground-based observations).
• Assessment of the impacts of the main drivers of change (i.e., land use change, pollution, climate change, invasive alien species, and exploitation of natural resources) on terrestrial and freshwater ecosystems and the biodiversity they host.
• Understanding of climate-biodiversity interactions, including the impact of climate change on biodiversity and the capacity of species to adapt.
• Understanding of the evolutionary changes of biodiversity and better predictive capabilities for biodiversity trajectories.
• Understanding of the ecological processes of ecosystem degradation and restoration.
• Multi-sensor approaches to biodiversity monitoring (e.g., multi-sensor retrievals of ecosystem structural and functional traits).
• Validation of biodiversity-relevant EO products (with uncertainty estimation).
• Algorithm development for RS-enabled Essential Biodiversity Variables (EBVs) on terrestrial and freshwater ecosystems.
• Linking EO with crowdsourced information for biodiversity monitoring.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: The Green(ing) Backbone: Spatiotemporal Vegetation Productivity Trends in the Carpathian Mountains

Authors: Daria Svidzinska, Karin Mora, David Montero, Oleh Prylutskyi, Oleg Seliverstov, Dr Volker Radeloff, Miguel Mahecha
Affiliations: Leipzig University, Falz-Fein Biosphere Reserve “Askania Nova”, V.N. Karazin Kharkiv National University, University of Wisconsin-Madison
The Carpathian Mountains, often referred to as the green backbone of Eastern Europe, are a hotspot of biodiversity and ecosystem services. With mountains warming faster than lowland regions, vegetation productivity in this region is expected to increase. Moreover, land use changes, such as reduced grazing pressure, are contributing to the transformation of vegetation cover. However, empirical analyses of these shifts in the Carpathians are still missing. Remote sensing observations are essential in addressing this gap. This study aims to leverage remote sensing advances to analyse spatiotemporal vegetation productivity trends in the Carpathian Mountains across the past 41 years. Specifically, we seek to answer the following questions: (1) How widespread is the greening signal in the Carpathians? (2) Are the greening trends associated with land cover classes? (3) Do the greening trends vary with altitude? (4) Do the greening trends change over time? To this end, we use all Landsat (satellites 4 to 9) images available in Google Earth Engine from June to September over the period 1984 to 2024 at a resolution of 30 m for areas above 1,300 m. We thus focus on subalpine and alpine vegetation belts. We apply statistical corrections to account for variations in bandwidths across different Landsat sensors and harmonise comparable bands. To assess greening, we employ the Mann-Kendall trend test with a correction for temporal autocorrelation in the time series. The Theil-Sen (TS) slope estimator quantifies the direction and magnitude of change over time. Additionally, the Kendall rank correlation coefficient and two-sided p-value assess the strength and significance of the association between variables. We define greening as the increase in the yearly Normalised Difference Vegetation Index (NDVI) values derived from Landsat imagery, which are associated with statistically significant TS slope values and confirmed by strong to moderate Kendall coefficients. 
This study provides a comprehensive spatiotemporal assessment of greening for one of the largest mountain ranges in Europe with the highest possible level of spatial detail and temporal extent. It thus offers a route to better understand the changes in mountain environments and to further investigate their drivers.
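The trend test described above can be sketched with SciPy's implementations of the Theil-Sen estimator and Kendall's tau. The NDVI series below is synthetic, and the autocorrelation correction used in the study is omitted:

```python
import numpy as np
from scipy import stats

# Synthetic yearly NDVI series, 1984-2024: a weak positive trend plus noise.
years = np.arange(1984, 2025)
rng = np.random.default_rng(0)
ndvi = 0.45 + 0.002 * (years - 1984) + rng.normal(0, 0.01, years.size)

# Theil-Sen slope (NDVI per year) with its confidence interval, and the
# Kendall rank correlation of NDVI against time with its two-sided p-value.
slope, intercept, lo, hi = stats.theilslopes(ndvi, years)
tau, p_value = stats.kendalltau(years, ndvi)

# Greening: significant positive TS slope confirmed by a moderate-to-strong
# Kendall coefficient (the 0.3 cut-off here is an illustrative choice).
greening = slope > 0 and p_value < 0.05 and abs(tau) >= 0.3
print(f"slope={slope:.4f}/yr, tau={tau:.2f}, p={p_value:.3g}, greening={greening}")
```

Both estimators are rank-based and therefore robust to outliers such as residual cloud contamination, which is why this pairing is standard for satellite greening analyses.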
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Estimating the Fraction of Green Vegetation Cover of Coastal Dunes Using Very High Resolution Imagery and Sentinel-2 in Southern Spain

Authors: Eva Romero-Chaves, Emilia Guisado-Pintado, Víctor F. Rodríguez-Galiano, Diego López-Nieta
Affiliations: University of Seville
Coastal dunes play a key role in coastal response, acting as an essential sediment reserve for beaches, especially during intense storms and spring tides. Vegetation in coastal dunes is a key element, since it promotes dune growth and stabilization while acting as the first line of defense against flooding and erosion, also facilitating beach recovery. Understanding the dynamics of dune vegetation is therefore crucial to effectively promote sustainable management of coastal areas and reduce risk, particularly in highly populated environments. However, monitoring coastal dune vegetation requires not only high-resolution satellite images, given their spatial distribution, but also multitemporal vegetation products to analyse seasonal to annual changes in vegetation coverage. The Vegetation Cover Fraction (FCover) is defined as the proportion of the ground surface covered by green vegetation observed from a nadir perspective. This parameter is a crucial tool for differentiating vegetation from soil in energy balance processes. Derived from structural canopy variables, such as the Leaf Area Index (LAI), FCover is largely independent of illumination geometry, making it a robust alternative to traditional vegetation indices for green vegetation monitoring. Furthermore, it is highly consistent regardless of the resolution of the satellite images used, due to its quasi-linear relationship with reflectances. FCover focuses exclusively on green vegetation, excluding other land cover types, which enhances its ability to monitor active vegetation. Although other FCover products exist in the Copernicus Land portfolio, these are generated at moderate resolution (300 m) using Sentinel-3 data, which is not suitable for the detailed monitoring of dune vegetation, which requires higher resolution to obtain accurate information on its dynamics. 
A multi-platform methodology for retrieving the fraction of green vegetation cover (FCover) is tested in three coastal dune systems in southern Spain: Cabopino on the Mediterranean coast, and El Rompido and Punta Malandar on the Atlantic coast of Andalusia. The sites were chosen as representative of the Atlantic and Mediterranean coasts, and thus of variable geographical and climatic conditions. The Atlantic coast is characterized by higher humidity levels and a more temperate climate, which promote greater vegetation growth and dynamic aeolian-induced dune processes. In contrast, the Mediterranean coast features a drier and warmer climate, resulting in less developed dune systems where vegetation is adapted to semi-arid conditions. The data subset comprised Sentinel-2 and Very High Resolution (VHR) images such as Pleiades, SuperView, WorldView and SPOT (dating from 2017 to 2021), with spatial resolutions of 10 m and 2/4 m, respectively. Sentinel-2 Surface Reflectance (S2_SR) products (spatial resolution of 10 m) were chosen based on the closest date to the available VHR images using Google Earth Engine. VHR images were obtained from the National Geographic Institute of Spain (IGN). The percentage of a Sentinel-2 pixel covered by green vegetation (FCover) was computed by applying a threshold of 0.3 NDVI (Normalized Difference Vegetation Index) to co-located VHR imagery. The statistical distribution of FCover values was considered in order to avoid over-representation of very low and high values in the models. Various machine learning algorithms were evaluated: Random Forest (RF), Neural Networks (NN), Support Vector Machines (SVM), Partial Least Squares Regression (PLSR), and Linear Regression (LR). A set of Sentinel-derived variables was used for training the models, including NDVI, NDWI, NDSDI, NDESI and EVI, and the raw bands (B2, B3, B4, B5, B6, B7, B8, B11 and B12). 
Although NDVI yielded the strongest Kendall correlation with the calculated FCover, all variables were used for the prediction. Preliminary results of the linear regression between VHR-based FCover and Sentinel-2-derived variables showed fair agreement, with an R² of 0.57 and an RMSE of 21.62%, but with notable dispersion, evidenced by the over-representation of extreme values. This is reflected in the fact that a single FCover value can be associated with a wide spectrum of NDVI values. Furthermore, density analysis of the FCover data indicates that, in general, high FCover values (100%) tend to correspond to high NDVI values (0.6-0.8), while low FCover values (0-20%) are mainly clustered in low NDVI ranges (0.1-0.3). The spatial distribution of vegetation cover (FCover) across the dune systems of Cabopino, Punta Malandar and El Rompido revealed differentiated patterns in the FCover-NDVI relationship according to local characteristics. In Cabopino and Punta Malandar, high FCover values (100%) correlate with high NDVI (mean 0.63-0.70), indicating an optimal fit of the model in areas of dense and homogeneous vegetation. In El Rompido, on the other hand, greater variability in the FCover-NDVI relationship is observed, with high NDVI values corresponding to low FCover percentages, suggesting more vigorous vegetation but with less coverage. In general, areas with low vegetation cover (FCover 4%) show lower NDVI values, though with variable patterns between areas. For instance, in Cabopino a stable and homogeneous NDVI (0.19-0.24) is associated with low FCover, while Punta Malandar shows slightly higher values (0.20-0.29) and more spatial variability. Finally, in El Rompido, NDVI for low FCover ranges between 0.19 and 0.33, suggesting abrupt transitions between vegetated and non-vegetated areas. In this research, a new approach for estimating the fraction of green vegetation cover of coastal dunes is presented.
Although results are representative of some dense vegetation areas across the study cases, the rescaling of the VHR images and the representation of FCover as discrete values hinder an accurate representation of vegetation coverage in sparsely vegetated areas and for some NDVI ranges. Next steps could explore the use of Radiative Transfer Models to generate a Look-Up Table, which would allow training FCover models at 10 m resolution. In addition, spectral mixture analysis could be incorporated as a strategy to extract new variables, and phenological trajectories derived from the HR-VPP products could be explored as an input for the models.
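The thresholding step described above — counting VHR pixels whose NDVI exceeds 0.3 within each coarser Sentinel-2 cell — can be sketched as follows. The function name, block size and toy data are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def fcover_from_vhr_ndvi(ndvi_vhr, block=5, threshold=0.3):
    """Percent green vegetation cover per coarse pixel.

    ndvi_vhr  : 2-D array of VHR NDVI values whose grid nests exactly into the
                coarser grid (e.g. 2 m VHR inside 10 m Sentinel-2 -> block=5).
    threshold : NDVI above which a VHR pixel counts as green vegetation.
    """
    h, w = ndvi_vhr.shape
    assert h % block == 0 and w % block == 0, "grids must nest exactly"
    green = ndvi_vhr > threshold
    # Average the boolean mask over each block -> fraction in [0, 1], then percent
    return green.reshape(h // block, block, w // block, block).mean(axis=(1, 3)) * 100.0

# Toy example: one 10 m pixel built from 5x5 VHR pixels, 10 of which are green
ndvi = np.full((5, 5), 0.1)
ndvi.flat[:10] = 0.6
print(fcover_from_vhr_ndvi(ndvi))  # [[40.]]
```

The resulting per-pixel percentages would then serve as the regression target for the Sentinel-2-derived predictor variables.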

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A Coupled In-Situ/Remote Sensing Dataset for Macrophyte Research in Small, Temperate Lakes

Authors: Frederike Kroth, Bastian Robran, Dr. Katja Kuhwald, Dr. Thomas Schneider, Natascha Oppelt
Affiliations: Kiel University, Technical University of Munich
As primary producers and structural habitat builders, freshwater macrophytes serve as foundational species within aquatic environments. The distribution of macrophytes in lake ecosystems is primarily influenced by a range of lake-specific abiotic driving factors, including water depth and light availability, water chemistry and temperature, or substrate characteristics and littoral slope. Different macrophyte species exhibit distinct preferences for these abiotic conditions, and at the same time respond uniquely to shifts within their habitats, making them long-term indicators for the ecological status of lakes. Globally, anthropogenic pressures have caused a notable decline in macrophyte diversity, as land use changes, climate shifts, and invasive species disrupt habitats and species composition. To monitor these changes, the European Water Framework Directive mandates macrophyte mapping every three years for lakes over 50 hectares, leaving smaller lakes understudied. While in-situ mapping is essential, it poses logistical challenges, is time-intensive and costly. Remote sensing techniques provide a promising supplement but rely on accurate in-situ data for calibration and validation. To address this, we present a comprehensive coupled in-situ/remote sensing dataset designed to bridge this gap and enable efficient, scalable monitoring of macrophytes in small, temperate lakes. Collected as part of the MARTINI project, the dataset integrates high-resolution multispectral aerial imagery, WorldView-2/3 data, and Sentinel-2 MSI time series with extensive in-situ measurements from 19 interconnected lakes in the Osterseen area (southern Germany). Over two growing seasons (2023–2024), we systematically captured macrophyte spectral signatures, biometric data, and habitat characteristics, alongside abiotic drivers such as water chemistry, temperature, and light conditions. Abiotic factors were systematically monitored over the vegetation periods. 
Water temperature was logged continuously at macrophyte growth sites, and dissolved oxygen, pH, conductivity, and Secchi depths were measured bi-weekly to monthly at multiple depths. Water samples were collected for analysis of chlorophyll-a, nutrients, and humic substances. Detailed macrophyte mapping, including biometric parameters and species composition, was conducted by divers and a hydroacoustic device along all lake shores. We will illustrate the potential of the dataset to advance remote sensing applications in aquatic ecology. This will include the development of algorithms for species-specific macrophyte mapping, habitat monitoring, and predictive modelling of macrophyte responses to environmental change.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Species Distribution Modeling with Graph Neural Networks

Authors: Emilia Arens, Dr. Damien Robert, Professor Jan Dirk Wegner
Affiliations: EcoVision Lab, Department of Mathematical Modeling and Machine Learning, University of Zurich
Reliably modeling the distribution of species at a large scale is critical for understanding the drivers of significant biodiversity loss in our rapidly changing climate. Specifically, the task involves predicting the probability that a species will occur in a particular location, given the prevailing environmental conditions. Because field data capturing the true presence and absence of species are costly and limited, opportunistic citizen science data has emerged as a valuable source of data. However, it comes at the cost of lacking information on species absences and introducing strong data biases, such as geographic bias toward urban areas and species bias towards conspicuous individuals. Therefore, species distribution modeling (SDM) has proven to be a challenging task, requiring the learning of highly complex interactions in a scarce data regime, where even the driver of sparsity is ambiguous, as it can be both the true rarity of a species and strong sampling biases. Traditional approaches from the field of ecology often tackle the task from a statistical perspective, fitting a per-species density function around the known occurrences. Thereby, individual species are modeled in isolation, which not only limits taxonomic scalability, but also neglects the rich information found in species interactions. As a result, the growing body of literature attempting to solve the SDM task by leveraging neural networks is often motivated by the fact that these models are capable of fitting the individual distributions simultaneously. More precisely, deep learning allows meaningful representations to be learned from raw input data. In the case of SDMs, the currently proposed recipe is to learn a geospatial representation from diverse input variables, e.g. climatic rasters and elevation maps. The representation is learned jointly and is therefore shared by all species.
It then serves as the conditioning variable to predict the likelihood of the presence of a species. Thus, neural networks implicitly express species interactions through the shared feature space. However, most of the proposed deep learning approaches do not yet significantly outperform traditional methods, raising the question of whether the co-modeling potential of deep learning can be exploited in a more concrete and systematic way. Here, we suggest learning and propagating representations between species explicitly using a graph structure, where the nodes represent individual species and the edges express species interactions. This setting not only allows explicit reasoning about species interactions, but also opens the door to the integration of data sources that have received less attention in the SDM literature compared to traditional remote sensing predictors. Along these lines, nodes and edges can be equipped with per-species and inter-species attributes extracted either from the presence-only data or from relevant external sources. In particular, given the sparsity of the data regime, the enriched graph structure should lead to more robust interaction inference, while making the results more interpretable by exploring edge information propagation. We implement the entire framework as a Graph Neural Network (GNN) reasoning over the proposed species graph, which can be flexibly combined with any neural network learning the aforementioned geographic representation from environmental data. Thus, the proposed structure should be seen as an extension of established approaches, allowing the model both to receive rich species interaction data and to reason about this additional information. In doing so, we open the door to multimodal approaches ranging from remote sensing raster data to tabular observational data to graphical interaction data in an end-to-end trainable regime.
We expect that the integration of the GNN branch will lead to more robust performance, especially for rare or heavily undersampled species. By having constant access to large-scale species interactions, we assume that information on well-represented species can support decision making for less represented individuals. Thereby, we are targeting those species that are difficult to model but critical to drawing the right conclusions for conservation planning and biodiversity protection.
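The species-graph idea can be illustrated with a minimal, self-contained message-passing step (GCN-style mean aggregation over neighbours plus a self-loop). The graph, feature sizes and weights below are made up for illustration and are not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical species graph: 4 species; an edge encodes an assumed interaction.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

X = rng.random((4, 8))          # per-species embeddings (e.g. from occurrence data)
W = rng.random((8, 8)) * 0.1    # weight matrix (learned in a real model)

A_hat = A + np.eye(4)           # self-loops so each species keeps its own signal
D_inv = np.diag(1.0 / A_hat.sum(axis=1))

# One message-passing layer: H = ReLU(D^-1 (A + I) X W)
H = np.maximum(0.0, D_inv @ A_hat @ X @ W)
```

In the proposed framework, an output of this kind would be fused with the geospatial representation and mapped, e.g. through a sigmoid layer, to a per-species presence probability.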

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Employing Earth Observation in Habitat Modelling of Freshwater Macrophytes

Authors: Bastian Robran, Frederike Kroth, Dr. Katja Kuhwald, Dr. Thomas Schneider, Natascha Oppelt
Affiliations: Kiel University, Technical University of Munich
Freshwater macrophytes play a crucial role in freshwater ecosystems by enhancing habitat complexity, influencing trophic webs, cycling nutrients and providing multiple ecosystem services. However, macrophyte diversity is in global decline due to habitat degradation driven by anthropogenic activities and climate change. To address this threat, habitat suitability models (HSMs) have emerged as valuable tools for investigating the relationship between macrophytes and their changing environment. Nevertheless, few HSMs leverage remote sensing data to enhance predictive accuracy and scalability in freshwater settings. Our study addresses this critical gap by developing an HSM supported by Earth observation data specifically designed for small lakes. Small lakes comprise the majority of freshwater lakes but are often underrepresented in ecological studies due to their scale and monitoring challenges. We tailored our model for a series of small lakes in Southern Germany. Our analysis identified key environmental factors, including distance to groundwater inflow, lake depth, littoral slope, and availability of photosynthetically active radiation (PAR), as significant predictors of macrophyte occurrence. A distinctive feature of our HSM is the integration of Sentinel-2 MSI data to derive PAR availability at macrophyte growing depths. Unlike traditional, point-based methods for measuring light availability, Sentinel-2 derived PAR availability allows for spatially continuous data that can be scaled across numerous lakes. Additionally, by incorporating a time series of MSI-based PAR data, we have introduced a temporal dimension to the model, thereby facilitating the monitoring and prediction of changes in macrophyte habitats over time. This represents a significant advancement for understanding and managing dynamic freshwater environments. 
The modelled habitat suitability scores showed a robust correlation (R = 0.908) with actual macrophyte distributions, indicating that the approach effectively captures the conditions that influence macrophyte presence. Our approach allows for more nuanced, data-driven assessments of habitat conditions that can inform conservation efforts. Demonstrating the efficacy of GIS- and remote sensing-based HSMs, this study provides a foundation for potential applications in ecological conservation and resource management, particularly in smaller freshwater ecosystems where traditional monitoring is often limited.
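As a rough illustration of how spatially continuous light availability at growing depth can be estimated, PAR attenuates with depth following the Beer-Lambert law; the attenuation coefficient Kd is here approximated from Secchi depth via a common rule of thumb (Kd ≈ 1.7/Z_SD). The helper function and values are illustrative assumptions, not the study's Sentinel-2 retrieval chain:

```python
import math

def par_at_depth(par_surface, kd, depth_m):
    """Beer-Lambert attenuation: PAR(z) = PAR(0) * exp(-Kd * z)."""
    return par_surface * math.exp(-kd * depth_m)

# Kd approximated from a 4 m Secchi depth (rule of thumb, not the study's method)
kd = 1.7 / 4.0
par_3m = par_at_depth(1500.0, kd, 3.0)  # PAR remaining at a 3 m growing depth
```

Computing such a value per pixel from satellite-derived Kd is what turns point-based light measurements into the spatially continuous layer the model uses.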

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Assessment of GEDI vegetation structure metrics in African savannas: Towards multi-sensor integration with Copernicus Sentinel data

Authors: Marco Wolsza, Dr. Jussi Baade, Andrew B. Davies, Sandra MacFadyen, Jenia Singh, Tercia Strydom, Prof. Dr. Christiane Schmullius
Affiliations: Department for Earth Observation, Friedrich Schiller University Jena, Department of Geography, Friedrich Schiller University Jena, Department of Organismic and Evolutionary Biology, Harvard University, Mathematical Biosciences Lab, Stellenbosch University, National Institute for Theoretical and Computational Sciences (NITheCS), Scientific Services, South African National Parks (SANParks)
Savanna ecosystems are characterized by their unique coexistence of herbaceous (grasses) and woody (trees and shrubs) vegetation. They play a crucial role in the global carbon cycle, in maintaining biodiversity, and in supporting livelihoods, yet they are increasingly sensitive to global climate change impacts. Furthermore, their conservation has often received considerably less attention than that of forest ecosystems, although they cover approximately a fifth of Earth’s land surface and store a substantial portion of the total aboveground carbon on the African continent. Accurately characterizing the structural diversity of savanna woody vegetation is important to better understand ecosystem functioning and resilience, as well as changes in biodiversity patterns. Consistent monitoring using Earth Observation data remains challenging due to the pronounced spatio-temporal heterogeneity inherent to these ecosystems. Advances in Earth Observation, particularly the combination of different active remote sensing technologies, offer new opportunities to consistently monitor vegetation structural metrics (VSMs) describing variations in the horizontal and vertical dimensions. While most recent studies have focused on canopy top height, other metrics such as foliage height diversity (FHD) are important for quantifying structural diversity, which in turn is linked to biodiversity. Spaceborne lidar data from the Global Ecosystem Dynamics Investigation (GEDI) offers footprint-level measurements of VSMs, initially optimized for retrieval in dense forests. This study focuses on the assessment of GEDI VSMs using high-resolution airborne lidar data acquired for several spatially distributed study areas in the savanna ecosystem of Kruger National Park, South Africa. As previous studies have highlighted, quality filtering of GEDI data is important, but approaches differ significantly. We take into account recent findings that are relevant to areas of short-stature, discontinuous vegetation.
These are further complemented with our workflow developed to address savanna-specific challenges, incorporating MODIS Burned Area data and Copernicus Sentinel-2 time series. We present an assessment of GEDI VSM accuracy across varying configurations and identify key factors affecting retrieval quality in savanna environments. This research contributes to the development of a reproducible framework that integrates Copernicus Sentinel-1 Synthetic Aperture Radar time series data for wall-to-wall mapping of savanna woody vegetation. These insights will advance our understanding of multi-sensor approaches for monitoring structural diversity in savanna ecosystems, supporting improved biodiversity assessment and conservation planning.
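A conservative GEDI footprint filter of the kind discussed — stricter than the dense-forest defaults — might look like the sketch below. `quality_flag`, `degrade_flag` and `sensitivity` are standard GEDI L2 fields, while `elev_diff_dem` and all threshold values are illustrative assumptions rather than the authors' workflow:

```python
def keep_shot(shot):
    """Return True if a GEDI footprint passes a conservative filter
    tuned for short, discontinuous vegetation (thresholds illustrative)."""
    return (shot["quality_flag"] == 1            # algorithm quality flag set
            and shot["degrade_flag"] == 0        # no degraded pointing/positioning
            and shot["sensitivity"] >= 0.98      # stricter than the common 0.9 default
            and abs(shot["elev_diff_dem"]) < 50.0)  # plausibility check vs a reference DEM

shots = [
    {"quality_flag": 1, "degrade_flag": 0, "sensitivity": 0.99, "elev_diff_dem": 3.2},
    {"quality_flag": 1, "degrade_flag": 0, "sensitivity": 0.91, "elev_diff_dem": 1.0},
]
print([keep_shot(s) for s in shots])  # [True, False]
```

Burn-date screening (MODIS Burned Area) and Sentinel-2-based vegetation-state checks would be applied as additional, analogous predicates in such a pipeline.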

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Large-scale monitoring of inland freshwater hydrologic parameters to study the functioning of aquatic environments being modified by climate change: the example of the Garonne River basin

Authors: Jean-Paul Gachelin, Eliot Lesnard-Evangelista, Mr Thibaut Ferrer, Mr Jean-Pierre Rebillard
Affiliations: vorteX-io, Agence de l'Eau Adour Garonne
Climate change is one of the most pressing environmental issues of our time, with significant implications across ecosystems, including inland freshwater systems. As global temperatures rise due to greenhouse gas emissions, inland water bodies such as rivers, lakes, and wetlands are experiencing noticeable warming, with an average temperature rise of 0.5 degrees per decade. This increase in water temperature is causing widespread changes in aquatic ecosystems, altering species distribution, biological processes, and ecosystem resilience:
- Disruption of thermal stratification and mixing patterns
- Altered species distribution and biodiversity loss
- Enhanced eutrophication and algal blooms
- Reduced oxygen levels and metabolic stress
At the same time, climate change is increasing the frequency of extreme events such as floods and droughts. The Adour Garonne Water Agency (France) has decided to launch a research and innovation project to study the functioning of aquatic environments being modified by climate change, in terms of both hydrology (flooding, low water) and quality (water temperature, turbidity, etc.), considering the two aspects to be intimately linked. To carry out this experiment, which aims to provide a better understanding of the impact of climate change on the basin, it is crucial to deploy a significant number of instruments to test the effectiveness of the system. To date, only the vorteX-io device allows simultaneous acquisition of real-time quantitative and qualitative measurements. For this reason, the Agency has commissioned vorteX-io to provide water temperature and related metrics from 150 vorteX-io micro stations on the Garonne River basin as part of this project.
The vorteX-io micro station is an in-situ device derived from space technology: it has been designed as an Earth observation nanosatellite that does not fly but is installed above rivers to acquire in-situ data with remote sensing instruments, including lidar, multispectral and thermal infrared sensors, and on-board GNSS. Water parameters are transferred in real time through GSM or SpaceIOT networks. Innovative and intelligent, lightweight, robust, and plug-and-play, the micro stations are equipped with unprecedented features that allow them to measure water temperature remotely and in real time, and to provide contextual images and flood metrics (water levels, flow, rain rates). This instrument provides in-situ datasets for the calibration, validation and accuracy assessment of EO projects in space hydrology, e.g. in the ESA st3art project dedicated to the calibration and validation of Sentinel-3. The long-term vision is to cover river basins in Europe with an in-situ network, to be used at large scale as an Earth observation in-situ component for monitoring water quality parameters or extreme hazards such as floods and droughts.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Land Cover Mapping in Conservation Areas: Machine Learning or Deep Learning Image Classification?

Authors: Uvini Senanayake, Dr. Scott Mitchell, Koreen Millard
Affiliations: Carleton University
Habitat loss, fragmentation, and land use change severely threaten biodiversity and ecosystem services. Area-based conservation is a globally recognized conservation approach focused on protecting specific geographic areas to preserve biodiversity, ecosystems, and natural resources by reducing the loss of habitats, maintaining population levels of species and providing a functioning environment for humans (Ferraro & Hanauer, 2015; Watson et al., 2014). Area-based conservation also contributes towards mitigating the impacts of climate change. Therefore, a growing need arises for area-based conservation as a nature-based solution for enhancing biodiversity conservation and mitigating the impacts of climate change. The increasing impacts of climate change and anthropogenic pressures pose significant threats to conservation areas, which require regular monitoring. Activities occurring in the surrounding and adjacent lands also pose risks to the health of the landscapes within conservation areas. Therefore, land cover mapping of conservation areas is crucial as it provides information about the distribution of the different habitats and ecosystems. It is also essential in identifying changes occurring in the conservation areas, and it serves as baseline data for ecological models such as species distribution models. Remote sensing has become a powerful tool in conservation biology, offering innovative ways to monitor, assess and manage conservation areas. Land cover is used as a measure of structural diversity, one of the three main categories of remote sensing-based essential biodiversity variables (RS-EBVs) that have been introduced to support regular monitoring of biodiversity from space (Pettorelli et al. 2016; Reddy et al. 2021). Open-access data and databases (e.g. Google Earth Engine Data Catalogue) have expanded the accessibility of remotely sensed data for researchers – including in the conservation field. 
Conservation area managers have the potential to use such tools to develop land cover maps for their conservation areas based on their requirements. Advancements in artificial intelligence (AI) have significantly enhanced remote sensing image classification techniques with machine learning (ML) and deep learning (DL) methods. Choosing a method for land cover mapping in conservation areas can be overwhelming for conservation area managers due to the variety of ML and DL image classification techniques available. Additionally, managers often face various challenges and constraints, such as financial limitations and limited availability of field data. This research aims to improve conservation area managers' awareness of utilizing ML and DL image classification techniques for land cover mapping in conservation areas. By increasing awareness of these advanced technologies, the study aims to equip conservation professionals with the tools to monitor and manage conservation areas more effectively. A critical review was conducted using the Web of Science database to examine the application of ML and DL techniques for land cover mapping in conservation areas utilizing medium and high-resolution remote sensing imagery. Based on the identified ML and DL classification methods, an analysis was conducted to evaluate the strengths and weaknesses of these methods and determine the most effective approaches to land cover mapping in conservation areas. Among various classification algorithms, Random Forest, Support Vector Machine, Artificial Neural Networks, and Convolutional Neural Networks are frequently used for land cover mapping in conservation areas. U-Net and Vision transformers are two emerging DL image classification techniques for land cover mapping. It is evident that the more complex and advanced models, such as DL models, have the potential to produce more accurate and efficient land cover maps. 
However, selecting advanced algorithms for land cover mapping is not always practical, especially for conservation area mapping. Factors such as the availability of remote sensing data, ground-truthing data for training and testing algorithms, the extent of the conservation lands and the associated costs should be considered before selecting a suitable image classification algorithm. Often, ML algorithms are better suited to medium-resolution remote sensing imagery. Although both ML and DL algorithms can be used with high-spatial-resolution data, DL algorithms may be more suitable. While ML and DL algorithms can both be used with multidimensional remote sensing data, DL algorithms are better suited to handling multimodal data. The availability of training data is a significant determinant in selecting an ML or DL algorithm for land cover mapping. Field surveys are labour-intensive and time-consuming. Accessibility to some habitats is often limited, resulting in small, biased field samples. Manually generating training samples is also a time-intensive, expensive, and subjective task that requires expert knowledge. The performance of DL classification methods is closely related to the amount of training data, and the same is true for transformer-based methods; this is the major drawback of using DL classification methods for land cover mapping in conservation areas. ML classification methods, however, are easy to train and less sensitive to the quality of training data. DL classification methods require significant computational resources, such as GPUs, for training the models, making them less feasible for applications with limited access to high-performance hardware. Comparatively, ML methods are less computationally intensive and can be run on cloud computing platforms like Google Earth Engine. However, compared to ML models, DL models offer greater transferability.
To conclude, the choice of an ML or DL image classification technique should be determined by the challenges and constraints the conservation area managers face. DL models can be used when the complexity of the input features is high. DL models require large, labelled datasets and significant computational resources, which increases the associated costs. Thus, selecting an ML image classification algorithm is more suitable if the training data is small or when computational resources are limited.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Detection and Biclass Differentiation of Landscape Elements using Sentinel-2

Authors: Manuel Reese, Prof. Dr. Björn Waske
Affiliations: Universität Osnabrück
Landscape elements such as tree rows, hedgerows, grass strips and flower strips play a crucial role in supporting biodiversity and providing habitats for wildlife within finely structured agricultural landscapes. These features often serve as last refuges for various species in a landscape dominated by high-intensity agricultural practices. Preserving these landscape components is essential not only for the conservation of local flora and fauna, but also for maintaining the ecosystem services they provide, including pollination, soil stabilization and pest regulation. This study aims to develop a robust method for identifying landscape elements and assigning them to one of two main classes using Sentinel-2 data: one dominated by grass and herbaceous plants, the other characterized by woody plants, mainly shrubs and trees. For this purpose we leverage deep learning techniques and compare their efficiency. Specifically, we implement and compare two prominent classifiers: a U-Net architecture tailored for semantic segmentation and a transformer network renowned for its attention mechanisms. The U-Net architecture is inspired by Strnad et al. (2023), who pursue a similar goal with aerial imagery. The general workflow of the transformer-based classifier is inspired by the findings of Bazi et al. (2021). The implementation is realized in TensorFlow’s Python API (Martín Abadi et al. 2015). Training and test data were manually annotated on very high-resolution aerial images (R-G-B-NIR) provided free of charge by the state of Lower Saxony. The study area is a randomly selected 21-square-kilometre area southeast of the northern German town of Löningen in the Oldenburg Münsterland, a region characterized by intensive agriculture. Further “non-landscape element” classes (forest, impervious surfaces, permanent grassland, arable land and water bodies) are added from EU CAP data and CORINE data.
Sentinel-2 images from the 2023 vegetation period were collected and cloud-masked using s2cloudless (Braaten 2023). By aggregating and compositing images across different temporal resolutions and composition methods, we explored the feature representation of woody and herbaceous elements, testing the effect of the different image composition methods on model performance. We used the GEE Python API for image pre-processing (i.e., image collection, cloud masking, and image composition). To make the validation as independent as possible, we compiled an additional data set that contains only landscape elements from CAP funding applications (Niedersachsen 2023). Our methodology involves assessing the impact of various temporal resolutions and image composition techniques. We experiment with approaches such as seasonal compositing — a median and a max-value composite, each over the entire time span — and a bimonthly stack. The idea is to generate input data with varying spectral characteristics and temporal coverage. Through our experiments, we aim to determine how these factors influence the accuracy of landscape element detection models, ultimately guiding the selection of optimal data processing workflows for this task. The performance of the U-Net and transformer models is evaluated using precision, recall, and F1-score metrics, as well as the time required for training and classification, which provide insights into their relative strengths and weaknesses in detecting the specified landscape elements. Preliminary results indicate that while the U-Net architecture demonstrates significant efficacy in pixel-level predictions, the transformer network excels in contextual understanding due to its ability to capture long-range dependencies within the image data. This research contributes to the field of biodiversity monitoring within intensively used agricultural landscapes.
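The seasonal compositing variants mentioned above (median and max-value composites over a cloud-masked stack) reduce to a simple per-pixel reduction over time. This sketch uses NaN for cloud-masked observations and is an illustration, not the study's GEE pipeline:

```python
import numpy as np

def composite(stack, method="median"):
    """Reduce a (time, height, width) reflectance stack to one image,
    ignoring cloud-masked observations stored as NaN."""
    if method == "median":
        return np.nanmedian(stack, axis=0)
    if method == "max":
        return np.nanmax(stack, axis=0)
    raise ValueError(f"unknown method: {method}")

# Toy single-pixel stack of four dates; the third observation is cloud-masked
stack = np.array([[[0.1]], [[0.3]], [[np.nan]], [[0.2]]])
print(composite(stack, "median"))  # [[0.2]]
print(composite(stack, "max"))     # [[0.3]]
```

A bimonthly stack would simply apply the same reduction within each two-month window and concatenate the results along the band axis.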
By refining our detection methods, we evaluate the potential for near real-time monitoring of important habitats, which could significantly enhance agricultural decision-making and biodiversity initiatives. In conclusion, this study lays the groundwork for enhanced precision in detecting landscape elements within finely structured agricultural environments. By comparing the performance of U-Net and transformer architectures and evaluating the effects of image data characteristics, we provide a comprehensive framework that can be adapted for various applications in remote sensing and monitoring tasks. Future work will focus on including a qualitative assessment of the individual landscape elements in order to obtain a meaningful proxy metric for the state of biodiversity in agricultural landscapes.
References
Bazi, Yakoub et al. (2021). “Vision Transformers for Remote Sensing Image Classification”. In: Remote Sensing 13.3. ISSN: 2072-4292. DOI: 10.3390/rs13030516. URL: https://www.mdpi.com/2072-4292/13/3/516.
Braaten, Justin (2023). Sentinel-2 Cloud Masking with s2cloudless.
Martín Abadi et al. (2015). TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. Software available from tensorflow.org.
Niedersachsen, ML SLA (2023). Landschaftselemente in Niedersachsen, Bremen und Hamburg.
Strnad, Damjan et al. (May 2023). “Detection and Monitoring of Woody Vegetation Landscape Features Using Periodic Aerial Photography”. In: Remote Sensing 15, p. 2766. DOI: 10.3390/rs15112766.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Enhancing Biodiversity Assessment with Super-Resolution Techniques: A Sentinel-2-Based Approach for High-Resolution Habitat and Ecosystem Monitoring

Authors: Ramona Cennamo, Prof. Giovanni
Affiliations: University of Rome "Sapienza"
Introduction Biodiversity monitoring is crucial for understanding ecosystem dynamics, tracking species distributions, and informing conservation efforts. However, the spatial resolution of freely available satellite imagery, such as Copernicus Sentinel-2, often limits the accuracy of fine-scale habitat mapping and species assessments. This research investigates the potential of super-resolution (SR) techniques to enhance Sentinel-2 imagery, thereby improving its spatial resolution and enabling more precise biodiversity assessments. By leveraging advanced machine learning algorithms, this study aims to refine land-cover classifications, identify biodiversity hotspots, and monitor habitat fragmentation. Ultimately, this research aims to bridge the gap between medium-resolution satellite data and the high spatial detail required for ecological analysis, providing a cost-effective tool for biodiversity conservation monitoring. Background: Measuring biodiversity has become a cornerstone of ecosystem health assessments, as evidenced by its integration into the frameworks of major global initiatives such as the Group on Earth Observations Biodiversity Observation Network (GEO BON), the International Geosphere-Biosphere Programme (IGBP), the World Climate Research Programme (WCRP), and the Committee on Earth Observation Satellites (CEOS) Biodiversity task. Traditional biodiversity monitoring methods often involve time-consuming and resource-intensive field surveys with in-situ data collection. While satellite remote sensing certainly offers a valuable alternative for large-scale monitoring, the spatial resolution of freely available imagery like Sentinel-2 (10-20 m) can be insufficient for capturing fine-scale habitat features crucial for many species. Super-resolution techniques, which reconstruct high-resolution images from low-resolution counterparts, offer a promising solution, thanks in part to the wider availability of GPUs at reasonable cost. 
Recent advances in machine learning, particularly deep learning, have led to significant improvements in SR algorithms, enabling the generation of sharper and more detailed images. Specific Objectives: This research aims to achieve the following specific objectives: - Evaluate the performance of different super-resolution techniques applied to Sentinel-2 imagery for biodiversity applications: this involves comparing the effectiveness of various SR algorithms, including both traditional methods (e.g., bicubic interpolation) and deep learning-based approaches (e.g., convolutional neural networks). The evaluation will consider factors such as image quality, processing time, and the computational resources required. Additional effort is devoted to transitioning the algorithms to cloud environments. - Develop and test a robust workflow for integrating super-resolution techniques with traditional remote sensing methods: this objective focuses on creating a streamlined process for incorporating SR into existing remote sensing workflows and tested frameworks. It will involve methods for pre-processing Sentinel-2 data (cloud screening, geometric refinement, etc.), applying SR algorithms, and integrating the enhanced imagery with traditional analysis techniques such as image classification and object-based image analysis. - Assess the impact of enhanced spatial resolution on biodiversity metrics: this objective investigates how the improved spatial detail from SR-enhanced imagery affects the accuracy and precision of biodiversity assessments. It will involve quantifying changes in land-cover classification accuracy, as well as evaluating the impact on consolidated diversity metrics such as Shannon's diversity index, Rao's Q, the Berger-Parker index, and Hill numbers. 
- Validate the obtained results with the help of very high-resolution (VHR) datasets: to ensure the accuracy and reliability of the SR-enhanced imagery and subsequent biodiversity assessments, the results will be validated using VHR datasets (e.g., aerial imagery, LiDAR data). This validation will involve comparing the spatial patterns and biodiversity metrics derived from the enhanced Sentinel-2 imagery with those obtained from the VHR data. Data and Method: Before applying super-resolution techniques, we classify the original Sentinel-2 imagery over predefined AOIs using standard methods (e.g., Random Forest, Support Vector Machines). We then assess the accuracy using a confusion matrix, overall accuracy, producer's/user's accuracy, and the Kappa coefficient. This establishes our baseline performance. Second, we apply the chosen super-resolution technique(s) to the Sentinel-2 imagery, improving on the native resolution. We then classify the enhanced imagery using the same methods as the baseline. Finally, we compare the accuracy metrics from the enhanced-imagery classification to the baseline, quantitatively demonstrating how super-resolution improves the accuracy of land-cover mapping. In doing so, we pay close attention to land-cover classes that are particularly important for biodiversity (e.g., specific forest types, wetlands, grasslands). First results show an increased heterogeneity in the SR images, which should lead to the detection of more diverse habitats within what was previously classified as a single homogeneous area in the native Sentinel-2 data. In addition, in some cases super-resolution reveals previously undetected habitat fragmentation that could impact species. Conclusions and outlook: First results of this work demonstrate the potential of super-resolution techniques to significantly enhance the spatial resolution of Sentinel-2 imagery for improved biodiversity monitoring. 
By integrating advanced machine learning algorithms with traditional remote sensing workflows, this study has shown how this cost-effective method, utilizing freely available data and open-source tools, makes advanced biodiversity monitoring more accessible to a wider range of stakeholders, including researchers, conservation practitioners, and policymakers.
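Among the diversity metrics mentioned above, Shannon's index is the simplest to compute from a classified map: it is the entropy of the class proportions within a window. A minimal NumPy sketch on a toy window (not the authors' implementation):

```python
import numpy as np

def shannon_index(labels):
    """Shannon diversity H' = -sum(p_i * ln p_i) over class proportions."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))

# A 4x4 classified window containing three land-cover classes.
window = np.array([[1, 1, 2, 2],
                   [1, 1, 2, 2],
                   [3, 3, 2, 2],
                   [3, 3, 2, 2]])
h = shannon_index(window)  # higher H' -> more diverse window
```

Applied in a moving window over a baseline and an SR-enhanced classification, a metric like this quantifies the increased heterogeneity that the abstract reports for the super-resolved imagery.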

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Mapping Tree Invasions in an Afromontane Ecosystem With Multidecadal Landsat and Sentinel-2 Data

Authors: Heather Cox, Dr. Patrick Hostert, Dr. Volker Radeloff
Affiliations: University Of Wisconsin-Madison, Humboldt Universität zu Berlin
Exotic trees associated with commercial forestry can pose serious ecological problems to native flora as they alter soil nutrient dynamics, deplete ground water supplies, accelerate erosion and modify wildfire behaviour. Pines and wattles (Pinus and Acacia spp.) are particularly problematic, as the very traits that make them good candidates for timber and pulp production - i.e., rapid growth rates and tolerance of a wide variety of soil and climatic conditions - also increase their invasiveness. Remote sensing has great potential as a tool for detecting and monitoring tree invasions. In this study, we aimed to map, quantify and assess change in the distribution of pines and wattles between 1990 and 2024 in an Afromontane study area, the Nyanga mountains of Zimbabwe. First, we calculated seasonal spectral-temporal metrics for all high-quality Landsat and Sentinel-2 images available since 1990. Second, we applied a random forest classifier to imagery collected between January 2022 and December 2024 to map the current distribution of pines and wattles. Finally, we used temporal segmentation of the combined Landsat-Sentinel time series to estimate the timing of invasion for each invaded pixel. Our results show that pines and wattles have spread well beyond plantation boundaries, now occupying over 3000 km² (more than 5% of the study area). The species are particularly concentrated along footpaths and logging roads, but have also invaded ecologically sensitive areas such as Nyanga National Park. The wattles appear to be more aggressive invaders than the pines in this study area, and in some cases, wattles have even invaded pine plantations. While most current plantations were established by 1990, the extent of invasion outside plantation boundaries has expanded substantially since then. We demonstrate the effectiveness of a two-step approach for monitoring invasive plants, where initial detection of invaded areas is followed by estimation of invasion timing. 
This contrasts with other methods that either i) rely on a limited subset of available images to detect invasion at discrete time points, or ii) apply change detection algorithms to spectral time series for all pixels and therefore struggle to differentiate between invasive plant spread and other vegetation greening trends such as native shrub encroachment. By examining the spread of wattles and pines across a large, heterogeneous landscape over 34 years, our study provides unprecedentedly detailed information that could improve models of invasion risk and cast new light on underlying ecological processes.
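The seasonal spectral-temporal metrics underpinning the first step are per-pixel statistics of a band or index over all valid observations in a season. A hedged NumPy sketch on a toy NDVI stack (the actual metric set and seasons of the study may differ):

```python
import numpy as np

def spectral_temporal_metrics(series):
    """Per-pixel statistics over the time axis of a (time, h, w) index stack,
    ignoring cloud-masked (NaN) observations."""
    return {
        "p25":  np.nanpercentile(series, 25, axis=0),
        "p50":  np.nanpercentile(series, 50, axis=0),
        "p75":  np.nanpercentile(series, 75, axis=0),
        "mean": np.nanmean(series, axis=0),
        "std":  np.nanstd(series, axis=0),
    }

# Toy series: 12 acquisitions over a 3x3 pixel tile, with one masked pixel.
ndvi = np.random.default_rng(1).uniform(0.0, 0.9, size=(12, 3, 3))
ndvi[2, 0, 0] = np.nan
metrics = spectral_temporal_metrics(ndvi)
```

Stacking such metrics per season, band and sensor yields the feature space in which a random forest can then separate invaded from non-invaded pixels.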

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: From Point Clouds to Habitat Use: Insights into Female Roe Deer Resource-Risk Trade-Off

Authors: Johanna Kauffert, Sophie Baur, Alexandra Baumann, Dr Wibke Peters, Prof Dr Annette Menzel
Affiliations: Technical University of Munich, Professorship of Ecoclimatology; Bavarian State Institute of Forestry, Research Unit Wildlife Biology and Management
Satellite remote sensing has been an invaluable tool in wildlife ecology for over four decades, enabling insights into species’ habitats and their corresponding behaviour with products such as land-cover maps and vegetation indices like the Normalized Difference Vegetation Index (NDVI). The recent advances and availability of very high-resolution remote sensing data—particularly aerial photogrammetry and LiDAR—have made it possible to derive fine-scale habitat structure parameters, particularly in forests. These developments offer unprecedented precision in characterizing habitat features critical for species’ ecology and management. As a use case, we here examined the trade-off in habitat use between resource acquisition and risk avoidance of the most abundant ungulate species in Europe, the roe deer (Capreolus capreolus), during its fawning season. We analysed the influence of fine-scale wooded habitat structures, derived from aerial photogrammetry and LiDAR remote sensing products, on the habitat use of female roe deer during the fawning period (April - June). Habitat use was tested using GPS-telemetry data of 32 females with confirmed parturition dates across three years and three study sites in southern Germany, resulting in 45 year-ID datasets. We found that pre-parturition habitat use was more affected by nutritional demands, displayed by increased use of mature stands, while during parturition, habitat use was shaped by high concealment and cover demands. After parturition, females rather displayed risk-avoiding behaviour by using young stands and stands with high canopy surface roughness. Our results not only provide valuable insights into the roe deer’s use of woody structures and possible hiding places of fawns but also demonstrate how fine-scale remote sensing products can enhance the analysis of habitat use at finer resolutions. 
With the emerging ubiquitous availability of aerial imagery and LiDAR, our study showcases the advantages these datasets offer for wildlife ecological research and evidence-based management strategies.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Habitat suitability analysis of Asian elephants in Nepal-India transboundary region using machine learning and geospatial data

Authors: Binita Khanal, Dr Tiejun Wang, Dr Ashok Kumar Ram, Dr Olena Dubovyk
Affiliations: Institute of Geography, University of Hamburg, Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, Department of National Parks and Wildlife Conservation, Babarmahal
Understanding the suitable habitat distribution of large, conflict-prone animals and how they use their habitat is crucial to preserving biodiversity and maintaining ecosystem integrity. The cross-border regions between Nepal and India are natural habitats for Asian elephants. However, this region has experienced dramatic land-cover and land-use changes over the last several decades due to human pressure and infrastructure development. This study therefore mapped suitable habitats for Asian elephants in the Nepal-India transboundary region and analysed the prominent factors influencing habitat distribution. For this, we modelled the habitat suitability of Asian elephants using geospatial data and an ensemble stacking species distribution modelling approach to establish the key factors determining habitat suitability. To identify suitable habitats, we employed remote sensing-derived bioclimatic variables, five vegetation-related variables, two topographic variables, and proximity to water bodies calculated using GIS techniques as predictor variables. Three commonly applied machine learning algorithms, viz. boosted regression trees (BRT), random forest (RF), and maximum entropy (MaxEnt), were selected as base learners for the stacking ensemble, and their results were fused for the final predictions. A total of 163 elephant presence points were collected from different sources and randomly split 70/30 into training and testing sets. Model evaluation using the area under the curve (AUC = 0.90) and the true skill statistic (TSS = 0.65) indicates robust performance of the stacking ensemble model. The results showed that 26,679 km², approximately one-third of the total transboundary landscape, is suitable habitat for Asian elephants. 
Elevation, precipitation of the driest and wettest months, and temperature of the warmest month were the key variables determining habitat suitability for elephants in this region. Suitable habitats were distributed mainly in lower-elevation vegetated areas. The overall predicted suitable habitat was a mix of forest and non-forest in almost equal proportion, suggesting a high overlap in space and resource use between elephants and humans. The study recommends strengthening transboundary conservation efforts and paying special attention to the densely populated settlements around the protected areas when implementing measures to mitigate the risk of human-elephant conflict. It also highlights the potential distribution of elephant habitat in the transboundary landscape and its implications for spatial planning for long-term biodiversity conservation.
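The evaluation scores reported above can be reproduced from a confusion matrix. As a minimal sketch, the true skill statistic is sensitivity plus specificity minus one; the counts below are hypothetical, not the study's data:

```python
def true_skill_statistic(tp, fn, tn, fp):
    """TSS = sensitivity + specificity - 1, ranging from -1 to +1."""
    sensitivity = tp / (tp + fn)  # presence points predicted as present
    specificity = tn / (tn + fp)  # (pseudo-)absence points predicted as absent
    return sensitivity + specificity - 1.0

# Hypothetical confusion-matrix counts for a 30% test split.
tss = true_skill_statistic(tp=40, fn=9, tn=120, fp=30)
```

Unlike Kappa, TSS is insensitive to prevalence, which is why it is widely used alongside AUC for presence-background species distribution models.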

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Time series of Sentinel-1 backscatter and coherence reveal shifts in inundation duration and timing in open and vegetated wetlands

Authors: Stefan Schlaffer, Peter Dorninger, Melina Frießenbichler
Affiliations: GeoSphere Austria, 4D-IT GmbH
In comparison to their extent at the global level, wetlands serve as habitat for a disproportionately large number of plant and animal species. They provide a multitude of other ecosystem services, including water retention (which can help mitigate the impacts of floods and droughts), retention of pollutants, provision of food, and important cultural services. Their ecosystem functions do not only depend on their extent and number but also on their inter- and intra-annual dynamics, i.e., inundation duration and timing, as well as on their internal structure and vegetation types. Drainage, damming and other hydraulic constructions affect these inundation dynamics, while climate change exerts further pressure on these vulnerable ecosystems. Efforts to restore degraded wetlands, which are undertaken, e.g., in the framework of the Nature Restoration Law of the European Union, require monitoring to quantify their effect on ecosystem functions. Synthetic aperture radar (SAR) systems constitute an optimal means for monitoring changes in inundation characteristics due to their relatively high spatial and temporal resolution and their sensitivity to the occurrence of surface water, even beneath vegetation given the right combination of water level, vegetation density and radar wavelength. We aimed to characterise the inter- and intra-annual dynamics of surface water extent at the shallow, subsaline Lake Neusiedl, located in the Pannonian lowlands of Eastern Austria. More than half of the lake surface is covered by one of the largest continuous reed belts in Europe, dominated by Phragmites australis. The study area also includes a number of soda lakes, which fall dry intermittently and are the only water bodies of this type found in Central Europe. The area is a Ramsar site of international importance due to its ecological significance, especially for bird populations. 
We analysed time series of backscatter and coherence with temporal baselines between 6 and 24 days acquired by Sentinel-1 over Lake Neusiedl between 2015 and 2024. We interpreted the time series with the help of a comprehensive set of reference data, including in-situ water levels, meteorological data, a LiDAR-based digital surface model and high-resolution optical imagery. Water surfaces were delineated using a Bayesian approach. The retrieval of open water was based on the typical specular backscatter signatures of smooth, open water surfaces; however, it was found to be affected by wind and ice cover. The latter could partly be mitigated using the cross-polarisation channel of Sentinel-1. The reed belt showed a clear double-bounce signature during spring, caused by the interaction of the radar wave with the water surface and the stems of emergent P. australis vegetation. During a prolonged drought period, which lasted from 2019 to 2022, water extent in Lake Neusiedl and the surrounding soda lakes decreased significantly with respect to pre-drought conditions. Since 2023, water bodies in the region have shown a significant recuperation in terms of their water extent. The results hold significance both for monitoring the impacts of prolonged droughts on wetland ecosystems and for assessing the effects of restoration efforts. Future work will include characterising how the heterogeneity of the reed belt, in terms of open water and vegetated areas on the one hand and reed structure and age on the other, affects SAR backscatter and coherence.
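The Bayesian water delineation described above can be sketched in its simplest form: per-class likelihoods for backscatter combined with a prior via Bayes' rule. The Gaussian parameters and prior below are hypothetical toy values, not the study's calibrated ones:

```python
import math

def water_posterior(sigma0_db, prior_water=0.3,
                    water=(-22.0, 2.0), land=(-11.0, 3.0)):
    """P(water | backscatter) with Gaussian class likelihoods.

    sigma0_db: Sentinel-1 backscatter in dB; `water` and `land` are
    (mean, std) per class -- toy values for illustration only.
    """
    def gauss(x, mu, sd):
        return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

    lw = gauss(sigma0_db, *water) * prior_water
    ll = gauss(sigma0_db, *land) * (1.0 - prior_water)
    return lw / (lw + ll)

p_dark = water_posterior(-21.0)   # smooth, specular surface -> likely water
p_bright = water_posterior(-9.0)  # rough or vegetated surface -> likely land
```

Wind roughening and ice raise the backscatter of open water, pushing it toward the land distribution, which is the failure mode the abstract mitigates with the cross-polarisation channel.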

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Are hyperspectral vegetation indices based on multi-sensor data fusion better than pure multispectral indices in measuring trait-based functional diversity?

Authors: Nikolina Mileva
Affiliations: European Space Agency
Recent hyperspectral sensors such as EnMAP, DESIS, EMIT, and PRISMA have substantially increased the amount of hyperspectral data available, enabling the use of imaging spectroscopy techniques on a wider scale. However, these sensors cannot yet provide time series long enough to describe important biodiversity patterns related to climate change and other phenomena, which are visible on a decadal timescale. Thus, the synergistic use of multispectral sensors providing long time series and hyperspectral sensors offering better spectral resolution is necessary. In this study, we explore the fusion of hyperspectral data from EnMAP and DESIS with Sentinel-2 to derive a number of vegetation traits commonly used to evaluate functional biodiversity. We focus on chlorophyll, carotenoid and water content, which we then use as inputs for calculating functional richness, divergence and evenness. We perform the same analysis using only multispectral data and evaluate the differences. This analysis will help us assess the added value of hyperspectral data for measuring functional diversity and give us insights into possible limitations stemming from the individual sensor characteristics. For the fusion of multispectral and hyperspectral data, we employ a set of well-known fusion techniques requiring minimal input data, as the availability of hyperspectral images is still rather limited. Airborne hyperspectral data from AVIRIS-NG acquired over the Bavarian Forest National Park is used for validation. Preliminary results show that multispectral and hyperspectral sensors agree better for lower values of chlorophyll content, while for larger values they tend to diverge (with multispectral data showing lower estimates). For the chlorophyll-to-carotenoid ratio, the hyperspectral estimates are consistently larger than the multispectral ones. 
This study will demonstrate the feasibility of creating simulated hyperspectral time series and showcase their relevance for biodiversity research. While here we concentrate on specific physiological traits, the methods employed are not sensor- or band-specific and can be applied more broadly.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Environmental plague monitoring: desert locust prediction with artificial intelligence and stochastic models

Authors: Maximilien Houël, Alessandro Grassi, Kimani Bellotto, Wassim Azami, Komi Mensah Agboka, Dr. Elfatih Abdel-Rahman, Dr. Bonoukpoè Mawuko Sokame, Dr. Tobias Landmann
Affiliations: SISTEMA Gmbh, ICIPE
Desert locusts are known as the world's most destructive migratory pest. A single swarm can travel up to 150 km per day, contain 80 million locusts, and eat as much food per day as 35,000 people. The pest has mid- to long-term impacts on the economy, quality of life and environmental protection. Climate change is amplifying the occurrence of such pests; in particular, the increase in extreme events such as cyclones is generating ideal conditions for locust breeding. In the context of the European project EO4EU and the European Space Agency (ESA) project IDEAS, a service has been developed in two parts: the first is an early-warning component that monitors ecosystems suitable for locust breeding; the second is an impact assessment component that simulates the evolution of swarms. The first part aims to predict favourable desert locust breeding grounds seven days in advance by checking the environmental conditions of the previous fifty days. The environmental variables used for the forecast are soil water content, precipitation, and temperature from ERA5-Land (Copernicus Climate), complemented by NDVI (Normalized Difference Vegetation Index) from MODIS. Locust information for model training came from a presence-only dataset provided by FAO's Locust Watch. At the current stage, the most effective model is a customized version of Maxent, a statistical model widely used for species distribution modelling (SDM) because it is designed to work with presence-only datasets, a common scenario in this field. Our model keeps Maxent's principles but modifies its internal structure by replacing the linear machine learning model with a GRU (gated recurrent unit). This enables the model to learn complex patterns and better capture the temporal evolution of features. 
Since no locust absence information is available, only two evaluation metrics have proven useful: recall, which reaches 76%, and the positively predicted area (the share of area predicted as locust breeding ground), which is at roughly 17%. Following the early-stage appearance prediction, the second step aims to evaluate the geographic footprint that adult locusts will have within a two-week time frame. In particular, the focus is on forecasting migration patterns, as locusts can travel long distances in short periods and explore new areas unpredictably. The maps generated by the first part serve as the primary input for the second part: as they represent the probability of early-stage appearances, they also provide an initial estimate of potential adult locations under specific environmental conditions. The strength of this model lies in its stochastic structure. Specifically, the model simulates environmentally biased random movement on a 2D lattice, generating batches of diverse potential scenarios. This approach allows the incorporation of complex driving factors of migration and considers the various paths that swarms may take. Climate conditions primarily influence swarm behaviour, along with the availability of resources such as vegetation. Specifically, the model uses temperature and wind data from ERA5, as well as the leaf area index (LAI) from ERA5-Land. Collecting these variables is essential, as they not only trigger migration events but also determine the direction and speed of swarm movement. The model then performs a statistical analysis across all generated scenarios, producing output maps that estimate the future locations of swarms and their potential sizes. Predicted results show promising correlation with FAO reports on desert locust activity. 
To arrive at a fully operational tool, validation activities are ongoing in cooperation with desert locust experts to provide ground verification of the prediction tool. Indeed, due to the lack of open-source datasets on desert locusts, ground-truth information is needed for validation. The International Centre of Insect Physiology and Ecology (ICIPE) in Nairobi, Kenya, focuses its mission on the study of insect science for sustainable development. Additional independent data were collected in Sudan by the Sudanese Ministry of Agriculture, Department of Crop Protection, corresponding to 698 validation points from 1 January to 21 March 2023. Their support could enable high-level testing of the models and extend the tool's capability to monitor at a larger scale.
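The stochastic second step, environmentally biased random movement on a 2D lattice, can be sketched as follows. The suitability grid stands in for the wind, temperature and LAI drivers of the actual model; all shapes and weights are illustrative assumptions:

```python
import numpy as np

def simulate_swarms(suitability, n_swarms=100, steps=14, seed=0):
    """Environmentally biased random walk on a 2D lattice.

    Each swarm moves to one of its 4 neighbouring cells with probability
    proportional to that cell's suitability. Returns an (h, w) occupancy
    count after `steps` daily moves (one stochastic scenario).
    """
    rng = np.random.default_rng(seed)
    h, w = suitability.shape
    pos = np.column_stack([rng.integers(0, h, n_swarms),
                           rng.integers(0, w, n_swarms)])
    for _ in range(steps):
        for i, (r, c) in enumerate(pos):
            nbrs = [(r2, c2) for r2, c2 in
                    [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
                    if 0 <= r2 < h and 0 <= c2 < w]
            weights = np.array([suitability[n] for n in nbrs], dtype=float)
            weights /= weights.sum()
            pos[i] = nbrs[rng.choice(len(nbrs), p=weights)]
    occupancy = np.zeros((h, w), dtype=int)
    for r, c in pos:
        occupancy[r, c] += 1
    return occupancy

suit = np.ones((10, 10))
suit[:, 7:] = 5.0  # a more suitable region on the right of the grid
occ = simulate_swarms(suit)
```

Running many such scenarios with different seeds and aggregating the occupancy grids yields the kind of probabilistic footprint map the service produces.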

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Integration of a multi-sensor analysis for the estimation of water quality in Italian lakes

Authors: Mariano Bresciani, Dr Nicola Ghirardi, Alice Fabbretto, Ludovica Panizza, Andrea Pellegrino, Monica Pinardi, Salvatore Mangano, Dr.ssa Claudia Giardino
Affiliations: CNR-IREA, Space It Up project, CNR-IBE, University of Tartu, University of Sapienza, NBFC
Phytoplankton and turbidity dynamics in lakes are complex and difficult to predict due to morphometric complexity, variable wind patterns, the intensity of benthic-pelagic coupling, variable light availability and the inherent instability of ecosystems. To gain a better understanding of phytoplankton dynamics and to characterise water quality status in complex aquatic environments, data collected within the same day are needed. Remote sensing is a valuable tool for spatial and temporal analysis of inland water environments. However, the use of a single sensor can be limiting in highly dynamic environments, such as turbid or eutrophic shallow lakes, where wind and temperature significantly affect lake conditions. In this context, the aim of this study within the Space It Up project is to use a combination of hyperspectral and multispectral sensors to understand the intra- and inter-daily dynamics of three Italian lakes (Trasimeno, Varese and Garda) characterised by different optical properties. Across the three lakes under study, in situ instruments provide continuous measurements of either reflectance or water quality throughout the day: Lake Trasimeno hosts the WISPstation spectroradiometer, Lake Garda the HYPSTAR spectroradiometer, and Lake Varese a buoy featuring multiparameter probes and, for part of the year, the JB-ROX spectroradiometer. These in situ data reveal a higher diurnal variability of phytoplankton in Lakes Trasimeno and Varese, which is not evident in Lake Garda. The dataset includes more than 30 different dates between 2019 and 2024 and a total of 160 remotely sensed images from 14 different sensors. Specifically, six hyperspectral sensors (PRISMA, DESIS, EnMAP, EMIT, PACE, and AVIRIS) and eight multispectral sensors (Landsat-8/9, Sentinel-2A/B, Sentinel-3A/B, MODIS-Aqua/Terra, VIIRS-SNPP/JPSS) were used. 
Level-2 images were downloaded and used as inputs to the BOMBER bio-optical model (Bio-Optical Model Based tool for Estimating water quality and bottom properties from Remote sensing images) to generate maps of water quality parameters (total suspended organic and inorganic matter and chlorophyll-a). To produce these maps, the BOMBER model was parametrized using the inherent optical properties (IOPs) specific to the three lakes. Additionally, for Lake Trasimeno and Lake Varese, phycocyanin maps were also produced using a mixture density network (MDN) for sensors with a suitable spectral configuration. A comparison was then conducted between the remotely sensed images and the in situ data, evaluating both spectra and concentration levels. For Lake Trasimeno, the spectral analysis showed a strong overall agreement between the remotely sensed images and the WISPstation data (MAPE = 28.3%, SA = 12.2°); similar results were obtained for Lake Varese. However, for Lake Garda, the agreement was less robust, primarily due to atmospheric correction inaccuracies in the blue spectral region. Preliminary results on the concentrations of water quality parameters confirmed that the multi-sensor analysis was crucial to detect rapid changes in the turbid and productive lakes (Trasimeno and Varese), mainly due to variations in temperature and wind, which would have been impossible to detect with a single-sensor analysis. In particular, during the late summer period, strong daytime growth of phytoplankton (cyanobacteria) emerged, with maximum values recorded in the afternoon; turbidity values were also highly variable throughout the day, strongly influenced by the wind. This study was carried out within the Space It Up project funded by the Italian Space Agency, ASI, and the Ministry of University and Research, MUR, under contract n. 2024-5-E.0 - CUP n. I53D24000060005.
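The spectral agreement scores quoted above, MAPE and spectral angle (SA), are standard match-up statistics between satellite-derived and in situ reflectance spectra. A small NumPy sketch with toy spectra (not the study's match-up data):

```python
import numpy as np

def mape(reference, estimate):
    """Mean absolute percentage error, in percent."""
    reference = np.asarray(reference, float)
    estimate = np.asarray(estimate, float)
    return float(np.mean(np.abs((estimate - reference) / reference)) * 100.0)

def spectral_angle(reference, estimate):
    """Spectral angle between two reflectance spectra, in degrees."""
    reference = np.asarray(reference, float)
    estimate = np.asarray(estimate, float)
    cos = np.dot(reference, estimate) / (
        np.linalg.norm(reference) * np.linalg.norm(estimate))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

ref = np.array([0.020, 0.050, 0.080, 0.040])   # in situ reflectance (toy)
est = np.array([0.025, 0.047, 0.085, 0.038])   # satellite-derived (toy)
err = mape(ref, est)
angle = spectral_angle(ref, est)
```

MAPE captures the magnitude mismatch while the spectral angle captures shape mismatch independently of brightness, which is why the two are reported together.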

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Impact of Sentinel-2 light extinction data on lake temperature profile simulations in the 1D hydrodynamic General Lake Model

Authors: Najwa Sharaf, Guillaume Morin, Jordi Prats, Pierre-Alain Danis, Gabriel Orabona, Nathalie Reynaud, Thierry Tormos, Jean-Philippe Jenny, Olivia Desgue, Rosalie Bruel
Affiliations: Pôle R&D Ecosystèmes Lacustres (ECLA), OFB-INRAE-USMB, INRAE, Aix Marseille Univ, RECOVER, Team FRESHCO, Magellium, SEGULA Technologies, OFB, DRAS, Service ECOAQUA, Université Savoie Mont-Blanc, INRAE, CARRTEL, OFB, DRAS, Service ECOAQUA
This study deals with the integration of satellite-derived light extinction values into the General Lake Model (GLM) for French lakes. Light extinction, or attenuation, is a crucial parameter influencing lake hydrodynamics, yet its temporal variability is often overlooked. It is commonly assumed to be constant across long-term simulations in lake models such as the one-dimensional deterministic hydrodynamic GLM. However, this approach fails to capture the inherent variability of light extinction, which can fluctuate significantly on seasonal or even shorter timescales, and such oversimplification may lead to inaccuracies in simulating lake thermal dynamics. We therefore derived light extinction data from Sentinel-2 satellite imagery using a semi-analytical water color algorithm. This dataset was validated against in situ measurements of Secchi disk depth from the French national water quality monitoring network. We compared GLM simulations made using a constant light extinction value (0.5 m⁻¹) and Sentinel-2-derived values for the period 2015-2020. The dynamic inputs included annual averages, seasonal averages, linearly interpolated time series, and predictions generated using a Generalized Additive Model (GAM). Simulation outputs were evaluated against observed in situ temperature data at the surface, at the bottom, and along the water column, as well as for thermocline depth. Incorporating Sentinel-2-derived light extinction values generally enhanced model accuracy, yielding lower RMSEs (Root Mean Squared Errors) than simulations using a constant extinction coefficient, although in some cases the differences in performance were not statistically significant or accuracy did not improve. This study discusses the relative strengths of different approaches for integrating variable light extinction into the GLM, identifying optimal strategies.
To our knowledge, this study represents the first assessment of integrating Sentinel-2-derived light extinction data into the GLM, demonstrating their value for improving lake simulations and advancing the accuracy of hydrodynamic modeling. These findings underline the potential of coupling satellite remote sensing and models to improve the knowledge of environmental trajectories of lake ecosystems.
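The RMSE comparison described above can be sketched as follows, with hypothetical temperature values (not data from the study):

```python
import numpy as np

def rmse(simulated, observed):
    """Root mean squared error between simulated and observed temperatures."""
    s, o = np.asarray(simulated, float), np.asarray(observed, float)
    return float(np.sqrt(np.mean((s - o) ** 2)))

# Hypothetical example: two GLM runs evaluated against the same observations.
observed = [12.1, 14.3, 16.0, 15.2]
run_constant_kd = [11.0, 13.0, 17.5, 16.4]  # constant extinction (0.5 m^-1)
run_s2_kd = [11.8, 14.0, 16.4, 15.5]        # Sentinel-2-derived extinction
# The run with the lower RMSE tracks the observations more closely.
```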

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A Deep Learning Framework for Large Scale Land Cover Mapping: A Case Study in Ontario, Canada

Authors: Fariba Mohammadimanesh
Affiliations: Natural Resources Canada
Large-scale land cover mapping, often framed as a semantic segmentation task, is crucial for understanding the ecological characteristics of land surfaces and comprehending environmental changes. Frequent updates are necessary to capture dynamic shifts in land use, monitor the impacts of human activities, and ensure that decision-makers have the most current and accurate data for sustainable planning and management. As such, this study addresses the challenge of large-scale land cover mapping using advanced deep learning models. While state-of-the-art deep learning models have shown promising results for several remote sensing applications, their efficiency has yet to be explored for large-scale semantic segmentation tasks. One existing problem is that their successful application depends heavily on the availability of large amounts of training data. To overcome this, we propose a two-stage classification system, combining Random Forest (RF) for initial land cover mapping and MobileUNETR, a lightweight hybrid convolution-transformer model, for refined land cover classification. Using Sentinel-1 and Sentinel-2 data, we produce a land cover map of Ontario at a spatial resolution of 10 m, aligned with the North American Land Change Monitoring System (NALCMS) Level I legend, comprising 11 classes (the snow and ice class was removed in our analysis). Our results demonstrate that MobileUNETR outperforms other models, such as UNet and PSPNet, in terms of both accuracy (approaching 85%) and efficiency, highlighting its suitability for large-scale land cover mapping applications. As MobileUNETR is the only evaluated model with both convolutional and transformer blocks, the results confirm the superiority of hybrid models for large-scale semantic segmentation, given their capability to capture both local and global features, which are essential for semantic segmentation of heterogeneous land cover classes with varying sizes and spectral signatures.
This study provides a scalable method for deep learning-based land cover mapping with high potential for national-scale applications.
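A minimal sketch of the first stage (a pixel-wise Random Forest over the band stack), using scikit-learn and synthetic data purely for illustration; the study's actual features, tooling, and MobileUNETR refinement stage are not shown:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 12))      # per-pixel features, e.g. a Sentinel-1/2 band stack
y = rng.integers(0, 11, 200)   # 11 NALCMS Level I classes (synthetic labels)

# Stage 1: Random Forest produces the initial land cover labels;
# stage 2 (MobileUNETR, not shown) would refine them using spatial context.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
initial_map = rf.predict(X)
```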

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Space4Nature: Empowering Nature Recovery With People and Earth Observation Satellite Data

Authors: Dr Ana Andries, Prof Stephen Morse, Prof Richard Murphy, Ms Victoria
Affiliations: University of Surrey
Surrey County faces a critical biodiversity challenge, with nearly 12% of its native wildlife already lost and a large portion of species under threat. In particular, the decline of semi-natural habitats such as heathlands, chalk grasslands, and neutral and acid grasslands has made the county a focus of conservation concern. Space4Nature responds to the need for habitat restoration and biodiversity monitoring by integrating citizen-science ecological surveys with advanced remote sensing technologies and machine learning techniques to monitor and map Surrey's key habitats. The project collects ecological survey data from 1 m² quadrats across Surrey County, involving citizens in recording critical biodiversity information such as key species, species abundance, species composition, and environmental characteristics. Since 2022, these surveys have been designed and conducted in collaboration with Surrey Wildlife Trust and Buglife. These citizen-led surveys have provided invaluable ground-level insights used to calibrate and validate our remote sensing approaches. We used high-resolution PlanetScope imagery, at 3 m spatial resolution with 8 spectral bands, and derived a suite of vegetation indices to characterise habitat health and vegetation structure. Along with topographic parameters, soil data, and other environmental variables, the project applies machine learning (ML) techniques, specifically Random Forest (RF), to predict and map key habitats such as chalk grasslands, heathlands, and other grasslands with high accuracy. The project has also explored supervised classification methods to enhance habitat mapping capabilities. Using Levels 3 and 4 of the UK Habitat Classification (version 2), Space4Nature has classified habitats across Surrey County at 3 m resolution.
These classifications have been cross-referenced with field data and existing habitat inventories, resulting in a comprehensive and accurate map of the region's biodiversity. The accuracy of our RF model and supervised classification for chalk grassland and heathland habitats has been particularly noteworthy: mean squared error (MSE) values ranged from 6.346 to 6.637, while F1-score, Matthews Correlation Coefficient (MCC), sensitivity, and overall accuracy were consistently in the 0.8-0.9 range. In addition, we conducted accuracy assessments of our ML models using independent ecological datasets provided by Surrey Wildlife Trust (SWT). These datasets, which include annually visited and restored chalk grassland and heathland sites, allowed us to verify our model's predictions: our predictions matched 82% of the ecological sites visited by the SWT in the last 20 years. Moreover, Space4Nature is not only about scientific remote sensing exploration but also about practical applications and meaningful conservation outcomes. Our habitat mapping results have already guided real-world restoration efforts. Specifically, the project has provided key insights for Buglife, an organisation leading the B-Lines project, which is focused on creating and restoring pollinator habitats across the UK. Based on the Space4Nature habitat maps, Buglife has identified over 100 hectares of suitable sites for habitat creation and restoration. These areas are critical to the B-Lines project's goal of establishing a network of wildflower-rich habitats to support declining pollinator species. Importantly, Space4Nature has earned international recognition for its innovative approach to biodiversity monitoring and habitat restoration. This year, the project won Ordnance Survey's prestigious Geovation International Geospatial Award in the Nature theme.
Furthermore, Space4Nature has been accredited by the Space Climate Observatory (SCO) for its contribution to climate action through habitat monitoring and restoration initiatives. This global recognition highlights the project's broader relevance beyond Surrey, positioning it as a model for how EO and citizen science can be harnessed for sustainable conservation actions and outcomes. For the upcoming year, Space4Nature aims to build on its successes by expanding its data collection efforts in collaboration with local stakeholders and citizen scientists. Further ecological surveys in the spring and summer of 2025 will collect additional data on the remaining habitats, such as neutral and acid grasslands, which require more data collection. By engaging more volunteers in the data collection process and refining our ML models through reinforcement learning, we aim to produce the most accurate and up-to-date habitat maps possible. The practical impact of Space4Nature will continue to grow as we expand partnerships with conservation organizations, local governments, and community groups. The project's data-driven approach not only informs habitat restoration but also offers a scalable solution to biodiversity monitoring that can be replicated in other regions facing similar conservation challenges. In conclusion, Space4Nature is an exemplary project that combines citizen science, EO data, and ML to address one of the most pressing environmental issues of our time—biodiversity loss. With its high levels of accuracy, practical conservation impacts, and international recognition, the project serves as a powerful example of how science and community action can come together to address global challenges such as the loss of biodiversity and critical habitats. 
We look forward to presenting the latest results on chalk grassland and heathland mapping at the conference, and to sharing how our innovative approaches are making a real difference in habitat conservation across Surrey County and beyond.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Project From Samples to Satellites – the deployment of hyperspectral satellites for optically complex northern inland waters

Authors: Pauliina Salmi, Pritish Naik, Jenni Attila, Daniel Atton Beckmann, Dalin Jiang, Konstantinos Karantzalos, Mirva Ketola, Ismo Malin, Linda May, Rebecca McKenzie, Kristian Meissner, Justyna Olszewska, Ilkka Pölönen, Jukka Seppälä, Michal Shimoni, Sami Taipale, Jussi Vesterinen, Peter Hunter
Affiliations: Faculty of Information Technology, University of Jyvaskyla, Finnish Environment Institute, Faculty of Natural Sciences, University of Stirling, National Technical University of Athens, Lake Vesijärvi Foundation, City of Lahti, UK Centre for Ecology & Hydrology, UKCEH, Kuva Space, Faculty of Mathematics and Science, University of Jyväskylä, The Association for Water and Environment of Western Uusimaa
Remote sensing of inland waters has long been a bottleneck of environmental observation. However, recent technological developments have made hyperspectral sensor technology commercially available, and satellites carrying imaging spectroscopy instruments capable of monitoring boreal latitudes are being launched. The natural topology of inland waters and changes in weather and catchment activities can cause significant variations in their water quality. Hyperspectral satellites are of interest not only because of their good spatial resolution and rapidly increasing temporal frequency, but also because of their high spectral resolution. Potentially, when paired with robust unmixing models, this could enable detailed remote observations of optically complex inland waters. However, introducing new technologies and data products into practice is a major effort that requires multidisciplinary cooperation. Here we describe a project which started in 2023 and will end in 2027, made possible by cooperation between different institutes and satellite operators. In summer-autumn 2024, EnMAP [1] and PRISMA [2] hyperspectral satellite acquisitions were collected over Scottish and Finnish inland waters. This campaign was carried out on five water bodies of anthropogenic importance, with in-situ ground-truthing undertaken by the University of Stirling Forth-ERA programme [3] and the UKCEH Loch Leven water quality monitoring programme in Scotland, and by the Lake Vesijärvi and Enäjärvi monitoring programmes in Finland. The satellite dataset obtained for the first year comprised 24 EnMAP and 13 PRISMA products with cloud-free pixels over the target lakes. When combined with Sentinel-2 data, these satellites yield a good frequency of observations, limited only by the number of cloudless days. Hyperspectral satellites have high potential to complement traditional satellites and in-situ water quality assessments due to their superior spatial coverage and detailed spectral information.
In the forthcoming years, new satellites and ground truthing approaches will be added systematically. References [1] EnMAP (The Environmental Mapping and Analysis Program). Earth Observation Center EOC of DLR. [2] PRISMA (Hyperspectral Precursor of the Application Mission). Agenzia Spaziale Italiana (ASI). [3] Forth-ERA (Forth Environmental Resilience Array). https://www.stir.ac.uk/about/scotlands-international-environment-centre/forth-environmental-resilience-array/about-forth-era/.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Remote Sensing-Based Detection of Giant Hogweed: Integrating Machine Learning and Satellite Data

Authors: Petr Lukeš, Michaela Podborská, Kateřina Tajovská
Affiliations: Global Change Research Institute, Masaryk University
The timely detection and management of invasive species, such as Heracleum mantegazzianum (giant hogweed), are critical for preserving the ecological integrity and economic value of permanent grasslands. This study explores the application of remote sensing (RS) data combined with advanced machine learning (ML) techniques to monitor and map the spread of giant hogweed, with a focus on the Karlovy Vary and Plzeň regions in the Czech Republic, which are the most affected areas in the country. These regions are particularly vulnerable due to their extensive grasslands and favorable conditions for the species' proliferation. Utilizing multispectral satellite imagery from Sentinel-2 and Planet, we employed a range of ML algorithms, including Random Forest (RF) and Support Vector Machines (SVM), as well as target detection methods like Matched Filter (MF). Our results revealed that ML algorithms, particularly RF and SVM, outperformed traditional methods in accurately classifying giant hogweed infestations. These algorithms leveraged the plant's distinct spectral characteristics, especially during its flowering phase, achieving user accuracies of up to 97% with Planet's high-resolution data. Although target detection methods such as MF showed promise for detecting dense and homogeneous infestations, they were less effective in identifying fragmented and scattered occurrences, which are typical of early-stage invasions. Spatial resolution emerged as a pivotal factor in detection performance. Planet's finer resolution (3-meter) facilitated the detection of small and dispersed patches of giant hogweed, offering a distinct advantage for precision monitoring in fragmented landscapes like those in Karlovy Vary and Plzeň. In contrast, Sentinel-2's moderate resolution (10-meter) proved more suitable for tracking large, contiguous infestations across extensive areas.
This research demonstrates the feasibility of integrating RS data with ML approaches to improve the accuracy, scalability, and cost-effectiveness of invasive species monitoring. The findings are particularly relevant for managing giant hogweed in the Czech Republic's most affected regions and offer valuable insights for broader ecological management, precision agriculture, and policymaking.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Bridging Remote Sensing and Ecosystem Dynamics for Nitrogen Deposition: A Digital Twin Perspective

Authors: Mahmoud Ahmed, Enrico Dammers, Joris Timmermans, Martijn Schaap, Roderik Lindenbergh
Affiliations: Department of Geoscience and Remote Sensing, Delft University of Technology, Air Quality and Emissions Research, Netherlands Organisation for Applied Scientific Research (TNO)
Reactive nitrogen deposition is a significant driver of biodiversity loss worldwide. It adversely impacts ecosystem dynamics, directly by altering individual species traits and indirectly by changing ecosystem structure and functioning. As such, nitrogen deposition needs to be tracked carefully to prevent severe consequences in the future. However, current monitoring instruments and modelling approaches face significant challenges in capturing the intra-ecosystem dynamics of reactive nitrogen. Current ground-based nitrogen monitoring is limited to a few select locations, providing insufficient spatial coverage for ecosystem-scale analyses. While satellite products, such as TROPOMI, offer broader spatial coverage, they are constrained to vertically integrated column values at relatively coarse horizontal resolutions (e.g., ~7 km × 3.5 km). Moreover, essential nitrogen compounds, such as nitric acid, are not measured, further reducing their suitability for detailed ecosystem studies. Chemistry transport models (CTMs) offer the capability to simulate the complex chain of processes governing atmospheric nitrogen flows and deposition rates. However, in these models, the biosphere-atmosphere interactions of nitrogen are often too generic and rely on fixed land surface characterizations. Recent studies have attempted to incorporate land surface changes using satellite-derived products. Nevertheless, the ecological relevance of these products remains limited for studying the impact of nitrogen deposition on biodiversity, as they lack the vertical and horizontal detail necessary to account for the complexities of ecological processes. This limitation is particularly critical for surface-atmosphere exchanges, which remain a significant source of uncertainty in deposition modelling and, consequently, for effectively addressing nitrogen's impact on Essential Biodiversity Variables (EBVs).
In response to this challenge, our study aims to create high-fidelity EBV products integrated within an Environmental Digital Twin (EDT). EDTs, with their ability to incorporate interconnected components, link environmental drivers and anthropogenic pressures to ecologically relevant trends in community composition, allowing for the evaluation of intervention measures. Remote sensing plays a critical role in establishing EDTs by providing essential variables that characterize both environmental processes and surface dynamics and link them to their digital replicas. We focus on the Veluwe area in the Netherlands, a critical conservation and protection zone. In this study, we integrate ecosystem-specific state variables derived from satellite remote sensing to refine deposition rate estimates within a chemistry transport model (CTM). By combining multispectral Sentinel-2 data with ultra-high-resolution (~30 cm) Pléiades Neo observations and leveraging the capabilities of Large Eddy Simulation (LES) models, we aim to deduce the parameters needed for upscaling nitrogen deposition using the LOTOS-EUROS CTM. We present the design and working principles of the Digital Twin framework, alongside initial results from the monitoring framework. Specifically, we demonstrate the impact of automated tree detection for species distribution modelling (from optical satellite remote sensing) and of the characterization of vertical ecosystem structure (from airborne LiDAR) on nitrogen deposition estimates derived from LOTOS-EUROS. These simulations are conducted at a high spatial resolution of 100 m and finer, enabling detailed insights into nitrogen dynamics within the ecosystem.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Mapping 30+ Years of Mangrove Extent in Tanzania Using Historical Paper Maps and Remote Sensing

Authors: Helga Kuechly, Dr. Mwita M. Mangora, Sam Cooper, Simon Spengler, Dr. Makemie J. Mabula, Kelvin J. Kamnde, Dr. Carl C. Trettin
Affiliations: Institute of Marine Sciences, University of Dar es Salaam, World Wide Fund For Nature (WWF) Germany, Earth Observation Lab, Geography Department, Humboldt-Universität zu Berlin, East African Crude Oil Pipeline (EACOP), Western Indian Ocean Mangrove Network, Center for Forest Watershed Research, Southern Research Station, USDA Forest Service
Mangroves are vital ecosystems for biodiversity, coastal resilience, and climate action, yet long-term monitoring in Tanzania and Zanzibar has been hindered by inconsistent methodologies and limited historical data. This assessment combines historical paper maps from Tanzania's 1989/1990 national mangrove inventory, satellite imagery, and extensive field data to map mangrove extent in 1990 and 2023, quantify changes, and inform sustainable management strategies. While Zanzibar's mangroves were also mapped, historical paper maps were unavailable, necessitating exclusive reliance on remote sensing and field validation. The workflow combines the digitization of historical inventory maps; analysis of Landsat and Sentinel-1 and -2 imagery; and training and validation data obtained from field campaigns using a custom mobile application and from manual digitization in Google Earth, updated with Planet NICFI monthly composites. The analysis used the Google Earth Engine (GEE) Python API, applying supervised Random Forest modelling for mangrove classification together with local expert knowledge of mangrove areas to map changes in mangrove forest extent, estimating the gain, loss and stable mangrove area between 1990 and 2023, with overall accuracies of 90% for 1990 and 94% for 2023. Results reveal a mangrove extent of 124,022 ha in 1990, declining to 106,054 ha in 2023 on the mainland. Stable mangrove areas totaled 93,761 ha, with 12,292 ha gained and 30,261 ha lost, representing a net reduction of 17,969 ha (14.5%) over 33 years, or 545 ha annually. Zanzibar's mangroves were similarly assessed, with separate classification models tailored to ecological and geographical differences, enhancing accuracy. Validation highlighted challenges such as spectral confusion with coconut plantations and inland vegetation. Findings indicate significant mangrove loss driven by land-use change and governance challenges, including ineffective enforcement of harvesting bans.
However, net gains in specific districts reflect the impact of conservation programs from the 1990s-2000s. These data inform ongoing national mangrove management strategies, action plans, and Tanzania’s Nationally Determined Contribution (NDC) for climate action. This methodology establishes a robust, scalable framework for monitoring mangrove ecosystems, emphasizing public data, repeatability, and integration with future assessments. It supports informed policy decisions, strengthens conservation efforts, and enhances coastal ecosystem resilience for communities reliant on mangrove resources.
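The reported mainland figures are internally consistent; a small bookkeeping check (all values in hectares, taken from the abstract):

```python
# Reported mainland mangrove extents and change components (hectares).
extent_1990 = 124_022
extent_2023 = 106_054
stable, gained, lost = 93_761, 12_292, 30_261

net_change = gained - lost                        # -17,969 ha over 33 years
annual_rate = net_change / 33                     # about -545 ha per year
percent_change = 100 * net_change / extent_1990   # about -14.5%

# Stable area plus gains should reproduce the 2023 extent (to rounding):
assert abs((stable + gained) - extent_2023) <= 1
```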

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Satellite Remote Sensing for Riparian Vegetation Health Assessment

Authors: Hamid Afzali, Milos Rusnak
Affiliations: Institute Of Geography, SAS
Floodplain forests are among the most critical components of riverine landscapes, providing key functions and benefits for biodiversity, stabilizing channel banks, and preserving aquatic ecosystem integrity. The proposed methodology utilizes a combination of remote sensing data, along with documented information about riparian forests, to develop a robust framework for analysing riparian vegetation properties in large river systems. In this study, we used satellite-derived vegetation indices to investigate vegetation health, greenness, productivity, and functionality in response to climatic conditions and human intervention. More than ten vegetation indices were computed using Sentinel-2 and Landsat imagery. Textural information was extracted using the Gray Level Co-occurrence Matrix (GLCM) and geomorphological filters, which were applied to high-resolution, preprocessed, and normalized satellite data. Subsequently, their spectral and spatial characteristics were classified through the Random Forest (RF) machine learning model. Furthermore, the Vegetation Condition Index (VCI) and Vegetation Health Index (VHI) were used to assess vegetation health, particularly regarding environmental stressors such as drought, temperature extremes, and other climate-related variables. Through systematic spatiotemporal monitoring of riparian vegetation, we demonstrated that combining multiple remote sensing data sources and machine learning techniques provides a robust framework for assessing vegetation health and functional changes over time.
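The VCI and VHI are commonly defined as below; this is a sketch under the usual definitions, and the study's exact formulation and weighting are assumptions here:

```python
def vci(ndvi, ndvi_min, ndvi_max):
    """Vegetation Condition Index: current NDVI scaled to its historical range (0-100)."""
    return 100.0 * (ndvi - ndvi_min) / (ndvi_max - ndvi_min)

def tci(bt, bt_min, bt_max):
    """Temperature Condition Index: brightness temperature, inverted so heat stress lowers it."""
    return 100.0 * (bt_max - bt) / (bt_max - bt_min)

def vhi(vci_value, tci_value, alpha=0.5):
    """Vegetation Health Index: weighted combination of VCI and TCI (alpha = 0.5 is typical)."""
    return alpha * vci_value + (1.0 - alpha) * tci_value
```

Low VCI/VHI values flag pixels where vegetation is near the worst state observed in the historical record, which is how drought and temperature stress are detected.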

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Treesure: new data for small woody features monitoring at landscape scale

Authors: Francesco Saverio Santaga, Sofia Maria Lilli, Marzia Franceschilli, Stefano Marra, Camilla Bizzarri, Sara Antognelli
Affiliations: Agricolus Srl, CGI
Introduction: Local Public Administrations (LPAs) benefit greatly from geographical information, as GIS systems are increasingly used for landscape management. Small woody features (SWFs) are crucial components of the European landscape. They provide many ecological benefits, such as acting as habitat corridors and enhancing biodiversity. They can also help regulate water cycles and prevent soil erosion, functions that are particularly important in agricultural areas. Their aesthetic and cultural value is also important, as elements of traditional agricultural practices and rural heritage (https://land.copernicus.eu/en/products/high-resolution-layer-small-woody-features). LPAs benefit greatly from updated and detailed georeferenced information on SWFs, to ensure more informed management of the land that increases its ability to provide ecosystem services, preserves and increases biodiversity, protects the soil and maintains typical landscape elements. State of the art: The Copernicus Land Monitoring Service provides an SWFs layer covering Europe in its entirety, at a spatial resolution of 5 m, classifying SWFs into patchy features and linear features; it was last updated in 2018. Additionally, some LPAs have access to local GIS layers describing SWFs with a relatively high level of detail. However, these data are produced using non-standardized methodologies, and their updates are very rare. Treesure solution: Treesure data enable LPAs to identify and monitor the presence of SWFs. This knowledge will be delivered through geographical layers accessible through the Treesure service. Treesure distinguishes three SWFs classes: woody patches; riparian strips; and rows of trees, hedges and tree belts. The production of data relies on a standard procedure that includes 3 steps: 1. the production of super-resolution images from Sentinel-2.
This step is based on an AI algorithm trained with high-resolution images and produces 1 m resolution images. 2. the identification of wooded areas from the super-resolution images using a machine-learning algorithm. 3. the classification of wooded areas into "woods" and "SWFs", and the subsequent classification of the SWFs. Landscape metrics and geometrical elaborations form the basis of the SWFs classification. Additional land use/land cover data provided by LPAs have been used for data refinement where available. The Treesure product has 3 m resolution and is currently produced and updated yearly for different Italian LPAs. Data performance was good, with the following results (2023): precision for the SWFs class of 59%, false positives for the SWFs class of 41%, and false negatives for the SWFs class of 54.4%. Performance was better than the 2023 baseline in the selected areas (including Autorità di bacino distrettuale dell'appennino settentrionale, Regione Toscana, and Comune di San Giuliano Terme). Products are disseminated through the CGI Insula platform, an advanced Earth Observation (EO) Platform-as-a-Service designed to harness cutting-edge cloud technologies for big-data analytics on EO datasets. Insula's architecture enables the efficient processing and analysis of massive data volumes, making it an ideal solution for Treesure's challenges. The platform provides users with a seamless experience, offering an intuitive user interface for detailed analytics as well as accessibility through standardized Open Geospatial Consortium (OGC) interfaces, ensuring interoperability and ease of integration into diverse workflows. Conclusion: Treesure represents a valuable and accessible dataset for LPAs.
The semi-automatic production of outputs guarantees a yearly product update and comparability of results across different geographical regions. Performance proved sufficient for the main LPA need, landscape monitoring and management. The data are particularly suitable for LPAs, compared to the Copernicus service, due to the higher update frequency, higher spatial resolution and better performance at the local level.
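The reported accuracy figures match the usual confusion-matrix definitions; a sketch with illustrative counts (the tp/fp/fn values are assumptions, not the study's actual counts):

```python
# Illustrative detection counts for the SWFs class (assumed, not study data).
tp, fp, fn = 59, 41, 70

precision = tp / (tp + fp)             # fraction of detected SWFs that are correct
false_positive_share = fp / (tp + fp)  # complement of precision, as reported
false_negative_rate = fn / (tp + fn)   # fraction of true SWFs that were missed
```

With these counts, precision is 0.59 and the false-positive share 0.41, mirroring how the two reported percentages sum to 100%.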

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Automated Habitat Mapping Using High-Resolution Satellite Data in the “Sv. Juraj - Sv. Kajo” and “Osoje” Mining Areas

Authors: Katarina Barnjak, Dragan Divjak, PhD in Ecological and Biological Engineering Andreja
Affiliations: LIST LABS LLC
The project focused on automating the annual mapping of habitat types in the mining areas of “Sv. Juraj - Sv. Kajo” (300 ha) and “Osoje” (20 ha) in Croatia, addressing the need for efficient and precise land-use monitoring in ecologically sensitive areas. Habitat mapping plays a crucial role in ensuring compliance with environmental regulations, preserving biodiversity, and guiding sustainable land-use practices. In this context, the project employed advanced geospatial methodologies and high-resolution satellite imagery from the PlanetScope platform to produce accurate, standardized datasets that meet both national (NKS1, NKS2) and European (EUNIS, Natura 2000) habitat classification standards. PlanetScope imagery was selected due to its high temporal frequency, spectral resolution, and spatial detail, enabling comprehensive year-round monitoring. The data's temporal richness allowed for capturing dynamic changes in vegetation and land use throughout the seasons. Key metrics such as the Normalized Difference Vegetation Index (NDVI) were calculated to assess vegetation health and classify habitats effectively. The combination of these advanced satellite datasets with machine learning techniques for supervised classification ensured a robust methodology for delineating and tracking habitat transitions over time. One of the main achievements of the project was the development of the habitat classification and map for 2024, based on the reference habitat map from 2023, which served as a starting point for the analysis. This 2024 map enabled the detection of spatial and temporal variations, providing stakeholders with a reliable tool for monitoring land-use dynamics and mitigating potential environmental risks. The results demonstrate significant progress in automating habitat mapping, reducing the time and effort required for traditional manual methods while ensuring high accuracy and consistency of outputs. 
The application of automated workflows, developed in R and Python programming environments, further enhances the scalability and replicability of this methodology. These workflows ensure efficient data processing and future updates with minimal human intervention, providing a cost-effective and adaptable solution for long-term monitoring programs. By utilizing PlanetScope imagery, the project established a replicable framework that can be applied to other regions facing similar challenges, promoting broader adoption of automated habitat mapping technologies. In addition to its technical advancements, the project underscores the importance of leveraging geospatial technologies for environmental stewardship. Automated habitat mapping equips policymakers, environmental managers, and local communities with actionable insights, empowering them to address ecological challenges more proactively. By aligning with international conservation efforts, this approach supports biodiversity preservation and sustainable development goals, ensuring a balance between industrial activities and environmental integrity.
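The NDVI metric used in the workflow above is a simple band ratio; a minimal generic sketch in Python/NumPy (illustrative only, not the project's actual PlanetScope pipeline):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    # Guard against division by zero over no-data pixels
    return np.where(denom != 0, (nir - red) / denom, 0.0)

# Synthetic reflectance values: dense vegetation, sparse vegetation, bare soil
nir = np.array([0.5, 0.4, 0.1])
red = np.array([0.1, 0.2, 0.1])
print(ndvi(nir, red))  # approx. [0.67, 0.33, 0.0]
```

In practice the same computation is applied band-wise to whole image arrays; the supervised classification step then uses such indices as features.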


Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Vegetation disturbance alert from HLS (DIST-ALERT) – applications for all land monitoring

Authors: Sarah Carter, Song Zhen, Fred Stolle, James MacCarthy, Jessica Richter, Annabel Burg, Anika Berger, Benjamin Wielgoetz, Elise Mazur, Elizabeth Goldman, Dr. Matthew Hansen, Amy Pickens
Affiliations: World Resources Institute, University of Maryland, Wageningen University
Meeting biodiversity, climate and sustainable development targets means preventing the loss of valuable natural ecosystems. There is a need for actionable (timely, accessible, easily analyzable, spatially explicit) vegetation change data to understand the full range of changes and stresses affecting natural ecosystems due to human, natural or climate drivers. The recently released disturbance monitoring system known as DIST-ALERT, developed through the NASA-funded OPERA project in collaboration with the University of Maryland (UMD) and Land & Carbon Lab (a World Resources Institute and Bezos Earth Fund initiative), is the first global operational system to detect disturbances in all vegetation cover, including forests, grasses, shrubs and crops. Based on Harmonized Landsat Sentinel-2 (HLS) data, alerts are triggered when the observed vegetation coverage is more than 10% below the minimum value in the historical baseline (+/- 15 days in the previous 3 years). The system has been operating since 1 Jan 2023 (Pickens et al. 2024). Several data layers are provided, such as the maximum vegetation anomaly, detected disturbance count and disturbance duration, which help users better understand the alerts. This information has also been used to categorize alerts into low and high confidence and, along with other information, to identify potential causes of alerts such as wildfire and conversion of land to agriculture. This classification provides users with policy-relevant information; for example, it can help determine whether potential violations of the EU Deforestation Regulation (EUDR) have taken place. Crucially, DIST-ALERT enables continuous monitoring and early notification of possible deforestation, which is important for enforcing the regulation and for other tasks such as identifying and taking action on illegal deforestation. Specifically, combining DIST-ALERT with the Natural Lands Map (Mazur et al. 2024) provides a timely indication of where forest loss in natural (primary, naturally regenerating) forests is happening, and can identify potential degradation. Combining DIST-ALERT with VIIRS active fire alerts can identify potential wildfires, which may be less likely to be associated with EUDR violations. While the EUDR is currently limited to forests, a potential expansion to Other Wooded Land will be discussed in future. Since these alerts operate in all land covers, the data can support such an expansion of scope. Within tree cover, DIST-ALERT has been compared to existing forest disturbance alerts, and different thresholds (e.g. on the maximum vegetation anomaly) can be used to develop complementary and integrated products. Agreement with current alerts, however, varies across canopy densities and continents; results from these comparisons, and implications for use alongside other forest disturbance products, will be presented. Presenting open and free data in easy-to-access formats, with tools and workflows, is crucial to support uptake. DIST-ALERT will be integrated into the Global Forest Watch (GFW) platform and the Land & Carbon Lab (LCL) platform currently in development to ensure the data is widely distributed. This will include GFW Pro, a dedicated platform for deforestation/conversion-free supply chain assessments. At present, GFW has over 7 million active users worldwide, including decision makers, forest rangers and many other stakeholders who can use the DIST-ALERT product and its derived data for monitoring their landscapes or ecosystems of interest. The integration of the DIST-ALERT product into these platforms will facilitate timely information flow to decision makers and stakeholders on where, when, and to what extent changes occur in their areas of interest.
This will support action to prevent the conversion of natural ecosystems while continuing to meet the world’s growing need for food, timber and other goods. Future research opportunities, such as utilizing 10 m HLS data, the integration of DIST-ALERT with other alert products to increase confidence and timeliness (e.g. Reiche et al. 2024), and classification of drivers using automated approaches in near-real-time (e.g. Slagter et al. 2023), will also be discussed.
References:
Mazur, E., Sims, M., Goldman, E., Schneider, M., Pirri, M.D., Beatty, C.R., Stolle, F., & Stevenson, M. (2024). “SBTN Natural Lands Map v1: Technical Documentation”. Science Based Targets for Land Version 1, Supplementary Material. Science Based Targets Network. https://sciencebasedtargetsnetwork.org/wp-content/uploads/2024/09/Technical-Guidance-2024-Step3-Land-v1-Natural-Lands-Map.pdf
Pickens, A., Hansen, M., & Zhen, S. (2024). Product Specification Document for Disturbance Alert from Harmonized Landsat and Sentinel-2. Observational Products for End-Users from Remote Sensing Analysis (OPERA) Project, OPERA Level-3 Disturbance Alert from Harmonized Landsat-8 and Sentinel-2 A/B Product Specification, Version 1.0, JPL D-108277 (D-108277_OPERA_DIST_HLS_Product_Specification_V1.0.pdf).
Reiche, J., Balling, J., Pickens, A. H., Masolele, R., Carter, S., Berger, A., Gou, Y., Donchyts, G., Slagter, B., Mannarino, D., & Weisse, M. J. (2024). Integrating satellite-based forest disturbance alerts improves detection timeliness and confidence. Environmental Research Letters.
Slagter, B., Reiche, J., Marcos, D., Mullissa, A., Lossou, E., Peña-Claros, M., & Herold, M. (2023). Monitoring direct drivers of small-scale tropical forest disturbance in near real-time with Sentinel-1 and -2 data. Remote Sensing of Environment.
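The alert rule described above (vegetation coverage more than 10% below the baseline minimum within +/- 15 days of the same date over the previous 3 years) can be sketched as follows. This is an illustrative simplification with hypothetical inputs, not the OPERA implementation:

```python
import numpy as np

def dist_alert(obs_vf, history, doy, hist_doys, window=15, threshold=0.10):
    """Flag a vegetation-disturbance alert: observed vegetation fraction more
    than `threshold` below the minimum of the seasonal baseline (historical
    observations within +/- `window` days of the same day-of-year)."""
    # Day-of-year distance, wrapping around the year boundary
    diff = np.abs((hist_doys - doy + 182) % 365 - 182)
    baseline = history[diff <= window]
    if baseline.size == 0:
        return False  # no usable baseline observations
    return obs_vf < baseline.min() - threshold

# Synthetic example: 3 years of roughly monthly vegetation-fraction history
hist_doys = np.tile(np.arange(5, 365, 30), 3)
history = np.full(hist_doys.size, 0.8)
print(dist_alert(0.65, history, doy=160, hist_doys=hist_doys))  # True
print(dist_alert(0.75, history, doy=160, hist_doys=hist_doys))  # False
```

The operational product additionally tracks disturbance count and duration per pixel, which this toy function omits.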

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Characterizing alpine vegetation communities using a multi-scale approach employing UAV and spaceborne Earth Observation

Authors: Basil Tufail, Baturalp Arisoy, Elio Rauth, Antonio José Castañeda Gómez, Dr Martin Wegmann, Dr Mirjana Bevanda, Dr. Doris Klein, Univ.-Prof. Dr. Stefan Dech, Prof. Dr. Tobias Ullmann
Affiliations: Earth Observation Research Cluster, Department of Remote Sensing, Institute of Geography and Geology, Julius-Maximilians-University Würzburg, German Remote Sensing Data Center (DFD), German Aerospace Center (DLR)
Climate change has caused significant alterations to various ecosystems and is considered a major driver of biodiversity loss over recent decades (1). Alpine regions are among the most vulnerable and adversely affected in this regard. Within the alpine ecosystem, there is a notable division between different types of flora depending on factors like altitude, slope, and thawing frequency. The literature has placed less emphasis on sub-nival to nival vegetation, including fellfields (2), even though this vegetation serves as an important indicator for monitoring patterns in snowpacks and vegetation changes linked to climate change and the warming of this specific biome. Moreover, global warming, with its intensified impact on the alpine region, plays an influential role in the carbon cycle of these ecosystems. Research gaps remain regarding the implications of the extensive greening and the shift of the tree line to higher elevations identified in recent studies; focusing on dwarf shrubs and sub-nival vegetation communities with regard to the Net Ecosystem Exchange of CO2 can help clarify their role as either carbon sources or sinks. This study focuses on the collection and analysis of in situ data from various ground sensors installed along an altitudinal gradient in the Zugspitze area, Germany. The approach will make use of the long-term environmental records provided by the Environmental Research Station Schneefernerhaus (UFS), a unique research facility located just below the summit of the Zugspitze at a height of 2,650 m in the German Alps. The aim is to understand the impacts of climate change on alpine tundra vegetation communities, utilizing multi-sensor fusion of satellite remote sensing data together with high-resolution airborne data from Unmanned Aerial Vehicles (UAVs).
For example, vegetation indices like NDVI, acquired at different spatial resolutions, can serve as a basis for identifying spatial patterns and their dynamics over time. In addition, parameters like Solar-Induced chlorophyll Fluorescence (SIF), the Aridity Index (AI), and soil moisture, along with climatic records of temperature, precipitation, and humidity, can help better understand the ecosystem response and its changing traits. This is vital for climate change adaptation and mitigation efforts in montane environments, raising the question of anthropogenic intervention.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: SAR-based solution for Ecosystem Functional Type identification in cloudy regions

Authors: Mr. Marek Ruciński, Ph.D. Edyta Woźniak, Ph.D. Anna Foks-Ryznar, Ms. Ewa Gromny, Ph.D. Lluís Pesquer Mayos, Ph.D. Cristina Domingo-Marimon, Ph.D. Małgorzata Jenerowicz-Sanikowska, Mr. Michał Krupiński
Affiliations: Space Research Centre of the Polish Academy of Sciences, GRUMETS research group. CREAF Bellaterra (Cerdanyola del Vallès)
Ecosystem Functional Types (EFTs) allow identification of areas characterised by similar matter and energy exchange between the biotic and abiotic components of an ecosystem. The use of EFTs can be valuable for identifying early-stage changes in ecosystem functioning. In this regard, remote sensing methods enable multi-temporal EFT assessment based on vegetation indices derived from optical data, e.g. the Normalized Difference Vegetation Index (NDVI) or the Enhanced Vegetation Index (EVI) [1]. However, the operational use of EFTs can be constrained by the characteristics of cloud cover over the analysed region, which can significantly affect their effectiveness and reliability. Synthetic Aperture Radar (SAR) offers a weather-independent alternative to optical remote sensing, ensuring reliable data acquisition even in regions with persistent cloud cover. Despite its promise, the relationship between SAR-derived features and vegetation indices remains underexplored, often limited to specific land cover types [2]. This study investigates whether the correlation between radar (Sentinel-1) and optical (Sentinel-2) data is sufficient to develop a SAR-based proxy for EFTs that operates independently of atmospheric conditions. The region of interest is located in Central Africa, covering the northwestern part of Tanzania, Burundi, Rwanda, and the western part of the Democratic Republic of the Congo. The analysed time range covers the period 2019-2021, which corresponds to two growing seasons of satellite-delivered data. The region's typical climate is characterized by two rainy seasons, which leads to very high seasonal cloudiness, as confirmed by an analysis of the cloudy-pixel percentage in Sentinel-2 data. For the period from December 2018 to December 2022, the average percentage of cloudy pixels in the series of individual images acquired for the polygons was 49.8%, with a standard deviation of 35%. The median value was 47.6%.
First, we calculated coherence matrices [3] and the H/α decomposition for dual-polarization (VV + VH) [4] Sentinel-1 radar images. Then, we developed a model to correlate polarimetric features with NDVI calculated from Sentinel-2 images. The model was tested using 1,000 random points distributed across the scene. Preliminary results indicate a Pearson correlation between the radar model and NDVI of 0.67 at the p < 0.001 significance level. The major discrepancies occur in urban areas, where the strong radar signal is not related to vegetation. The calculation of EFTs using SAR-based inputs revealed peculiarities of vegetation behaviour that were not clearly visible in optical images. The main differences were the visibility of the rainy seasons throughout the year, resulting in two cycles of vegetation productivity and a specific seasonality of growth.
Research performed under the ARICA project (NOR/POLNOR/ARICA/0022/2019-00, Norway Grants, POLNOR2019, co-financed from the State budget, Applied Research) and the European Union’s Horizon 2020 research and innovation programme under the EOTIST project, grant agreement No 952111.
[1] Domingo-Marimon C., Jenerowicz-Sanikowska M., Pesquer L., Rucinski M., Krupinski M., Wozniak E., Foks-Ryznar A., Abdul Quader M. (2024). “Developing an early warning land degradation indicator based on geostatistical analysis of Ecosystem Functional Types dynamics”, Ecological Indicators 169: 112815. DOI: 10.1016/j.ecolind.2024.112815.
[2] Ruciński M., Foks-Ryznar A., Pesquer L., Woźniak E., Domingo-Marimon C., Jenerowicz-Sanikowska M., Krupiński M., Gromny E., Aleksandrowicz S. (2023). “The Multi-Temporal Relationship Between Sentinel-1 SAR Features and Sentinel-2 NDVI for Different Land Use / Land Cover Classes in Central Africa”, IGARSS 2023 - 2023 IEEE International Geoscience and Remote Sensing Symposium, Pasadena, CA, USA, pp. 325-328. DOI: 10.1109/IGARSS52108.2023.10281862.
[3] Cloude, S.R. and Pottier, E. (1997). An entropy based classification scheme for land applications of polarimetric SAR. IEEE Transactions on Geoscience and Remote Sensing, 35(1), 68–78. DOI: 10.1109/36.551935.
[4] Cloude, S.R. (2007). The Dual Polarisation Entropy/Alpha Decomposition: A PALSAR Case Study. POLinSAR 2007, the 3rd International Workshop on Science and Applications of SAR Polarimetry and Polarimetric Interferometry, ESA, Frascati, January 22–26.
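The point-sampled correlation test reported in the abstract (Pearson r between a SAR-derived model and NDVI at 1,000 random points) can be illustrated with synthetic data; the variables and noise model below are hypothetical stand-ins, not the study's actual Sentinel-1/Sentinel-2 features:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-ins: NDVI at 1,000 sample points and a SAR-based proxy
# that tracks it with noise (e.g. urban pixels breaking the relationship)
n = 1000
ndvi = rng.uniform(0.1, 0.9, n)
sar_proxy = 0.8 * ndvi + rng.normal(0, 0.15, n)

r, p = stats.pearsonr(sar_proxy, ndvi)
print(f"Pearson r = {r:.2f}, p = {p:.1e}")
```

With real data, the residuals of such a model can be inspected per land cover class to locate systematic discrepancies like the urban-area effect mentioned above.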

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Canopy Reflectance as a Proxy for Soil Microbial Communities at a Regional Scale

Authors: Angela Harris, Prof Richard Bardgett
Affiliations: The University of Manchester
Soil microbial communities are integral to terrestrial ecosystem processes, yet their spatial and temporal dynamics remain poorly understood. While climate and soil properties are recognized as primary drivers of microbial composition at broad scales, emerging evidence highlights the significant influence of plant community composition and functional diversity. Specifically, plant traits - such as leaf chemistry and morphology - are linked to microbial composition through their effects on nutrient cycling and decomposition. However, the extent to which above- and below-ground communities co-vary predictably across landscapes and environmental gradients is unclear, presenting challenges for forecasting ecosystem responses to global change. Canopy reflectance captures a range of plant traits related to ecological processes. Plant traits, and therefore canopy spectra, may reflect soil microbial communities. However, the extent to which canopy reflectance can help elucidate soil microbial community composition across biomes remains unclear. Using datasets from 14 NEON (National Ecological Observatory Network) ecoregions (domains), we explore links between aboveground plant traits and belowground soil community composition and develop partial least squares regression models, using airborne imaging spectrometer data, to predict the abundance of soil microbial groups derived from phospholipid fatty acid analysis (PLFA; including gram positive bacteria (G+), gram negative bacteria (G-), saprophytic fungi (SF), arbuscular mycorrhizal (AM) fungi, actinomycetes, total microbial biomass, and the ratios G+:G- and fungi:bacteria) and the relative abundance of commonly found bacterial phyla derived from 16S rRNA gene sequencing (including Acidobacteria, Actinobacteria, Proteobacteria, and Verrucomicrobia).
Our results provide evidence that plant traits are associated with the abundance of diverse soil microbial groups and bacterial phyla at a regional scale (hundreds of kilometres), particularly when the abundance of microbial groups is characterized by PLFA analysis. Foliar traits explained a unique proportion of the modelled variation in soil microbial community composition, as well as variation shared through their association with soil properties. Our ability to predict soil microbial abundance using partial least squares regression models, with spectral reflectance as the independent variables, was greatest for microbial abundances characterised by PLFA analysis (R² = 0.57–0.85, nRMSE = 10–15%), whereas the relative abundance of bacterial phyla proved more challenging to predict (R² = 0.01–0.41; nRMSE = 20–25%). Of the PLFA soil microbial groups, the abundances of G+ bacteria and AM fungi were best predicted, whereas Actinobacteria and Acidobacteria were the best-predicted bacterial phyla. Our results suggest that spectral reflectance data holds promise as a novel indirect indicator of soil microbial community composition at the regional scale, particularly for broad functional soil microbial community groups identified by PLFA analysis. Spatial maps of key soil microbial community groups obtained from remotely sensed data could bridge an important gap in field-measured soil microbial community composition and improve our understanding of ecosystem function over space and time.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Strategic Framework for Biodiversity Conservation: AI and Open Source Data for Protected Area Prioritization

Authors: Katharina Horn, Ass. Prof. Dr. Christine Wallis, Prof. Dr. Birgit Kleinschmit, Jun. Prof. Dr. Annette Rudolph
Affiliations: Technische Universität Berlin, Department for Artificial Intelligence and Land Use Change, Technische Universität Berlin, Geoinformation in Environmental Planning Lab
Biodiversity is increasingly under threat due to a range of human-driven processes, including climate change, deforestation, land use change, and habitat destruction. Additionally, biodiversity loss and climate change mutually intensify each other, which is why action needs to be taken to prevent irreversible damage to ecosystems. Biodiversity is not solely about species richness; it encompasses a broader range of dimensions, including taxonomic, phylogenetic, genetic, functional, spatial, temporal, and interactional aspects. Consequently, these dimensions need to be taken into account in the development of strategies aimed at halting biodiversity loss. Ensuring the protection and recovery of threatened species and ecosystems is essential for safeguarding the future of life on Earth. In response to this biodiversity crisis, the European Union has introduced the "EU Biodiversity Strategy for 2030," which aims to protect 30% of both terrestrial and aquatic ecosystems within EU member states by 2030. However, in Germany, there remains a lack of comprehensive strategies to identify and prioritize the most critical areas for conservation. Therefore, the challenge of developing effective approaches for selecting and managing these protected areas needs to be addressed adequately. To identify protected areas in Germany, a set of influencing factors such as topography, soil types, species abundance, distribution patterns, and land use changes needs to be included in the analysis. Given the increasing amount and complexity of data necessary to assess these factors, there is a growing need for advanced tools to derive valuable information from the available data. Artificial intelligence (AI), particularly reinforcement learning, offers a promising solution to this challenge. Reinforcement learning models are capable of learning from large datasets and making decisions based on predefined goals, such as optimizing land protection strategies.
These models can analyze vast amounts of data, allowing for the identification of areas that are most suitable for conservation based on a variety of ecological parameters. In this regard, citizen science data (e.g., iNaturalist) presents a valuable resource, providing on-the-ground insights on species occurrences at different locations. Collected by citizens, these data offer significant potential for complementing remote sensing and other data sources. This study aims to apply reinforcement learning techniques to identify potential protected areas within German forests. The methodology will follow a three-step approach. First, we will assess the quality of existing data, which includes remote sensing products that provide valuable information on land use and land cover, climate, soil characteristics, and vegetation. The selected datasets will then be pre-processed. Second, we will collaborate with nature conservation agencies and ecological experts to identify key parameters that influence biodiversity protection. Finally, the validated data and expert input will be integrated into the AI model, allowing it to prioritize areas that have the highest potential for conservation and recovery. This study seeks to advance the application of AI in biodiversity conservation by developing a robust, data-driven approach for identifying priority protected areas in Germany. By combining approaches of artificial intelligence with citizen science and expert knowledge, we hope to contribute to the broader goal of halting biodiversity loss and fostering ecosystem resilience in the face of global environmental change.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Temporal Dynamics in Ecosystem Functional Attributes (EFAs) and Types (EFTs): Approaches and Lessons Learned

Authors: Dr. Cristina Domingo-Marimon, Dr. Lluís Pesquer, Dr. Małgorzata Jenerowicz-Sanikowska, Marek Ruciński, Dr. Edyta Woźniak, Dr. Joan Pino
Affiliations: CREAF, Space Research Centre of the Polish Academy of Sciences
Understanding and monitoring ecosystem dynamics is essential for addressing global challenges such as land degradation, biodiversity loss, and food security. Ecosystem Functional Attributes (EFAs) and Ecosystem Functional Types (EFTs), derived from remote sensing, have emerged as powerful tools for characterizing and monitoring ecosystem dynamics. EFAs are quantitative descriptors of ecosystem functioning, derived from satellite-based vegetation indices such as NDVI or EVI, which capture the exchanges of matter and energy between biotic communities and their environment. EFAs form the basis for defining EFTs, which group ecosystems based on shared functional attributes rather than structural or compositional characteristics, providing a novel approach to ecosystem categorization without requiring prior knowledge of vegetation types or canopy structure. The application of EFAs and EFTs in ecosystem monitoring and assessment has gained significant traction due to their ability to provide rapid, integrative insights into ecosystem responses to environmental changes. Unlike traditional land cover maps that emphasize structural attributes with consolidated properties, EFAs and EFTs focus on functional aspects of ecosystems, offering a more dynamic and responsive measure of ecosystem health. Indeed, the ability of EFAs to respond more rapidly to change than structural or compositional attributes makes them particularly valuable in the context of global environmental changes and biodiversity monitoring. This functional approach has proven particularly valuable in species niche characterization, biodiversity modeling, and as an early warning system for ecosystem changes. Our analysis is based on lessons learnt from different applications spanning diverse climatic regions, including Tanzania, Ethiopia, Bangladesh, Spain, Siberia, and Amazonia.
Additionally, the integration of EFA/EFT analysis with other geostatistical methods, such as variogram-based analysis, enables a deeper understanding of spatial patterns and ecosystem dynamics and has proven effective for early detection of land degradation, often surpassing traditional land cover change analysis in sensitivity and timeliness. The potential of EFAs and EFTs as early warning systems for ecosystem degradation is a research topic of growing interest. While traditional land cover change analysis may detect changes only after critical thresholds have been surpassed, EFA-based approaches offer the possibility of identifying ecosystem transitions approaching critical points before they occur. This early detection capability could significantly improve degradation mitigation actions and reduce associated economic costs. Despite these capabilities, the application of EFAs and EFTs presents notable challenges. On the one hand, the temporal resolution of EFA and EFT analysis is a critical factor in their effectiveness. High temporal resolution is critical for capturing dynamic ecosystem processes, their trends and anomalies. Cloud cover and data availability often constrain the production of annual EFT maps, particularly in fragmented or heterogeneous landscapes. On the other hand, the choice of spatial resolution significantly impacts results, especially in complex ecosystems like the Mediterranean, where coarse resolutions may overlook critical heterogeneity. Most previous EFT studies have been based on low or medium spatial resolution data from sensors such as AVHRR, SPOT-VGT, or MODIS, with only a few studies utilizing higher resolution data from Sentinel-2 or Landsat missions. These studies have primarily focused on static approaches (a single year or the mean of a period), highlighting the need for more research into the interannual variability of EFTs, which offers a promising approach in the field of functional ecology.
The current review is based on work carried out on different satellite platforms/sensors: Envisat MERIS; Landsat TM, ETM+, and OLI; and Sentinel-2 MSI, with the corresponding different spatio-temporal resolutions (Domingo-Marimon et al. 2024; Pesquer et al. 2019). In conclusion, the use of EFAs and EFTs in remote sensing analysis represents a significant advancement in our ability to monitor and understand ecosystem dynamics. These functional approaches offer several advantages over traditional structural analyses, including increased sensitivity to change, the ability to capture seasonal and short-term variations, and potential application as early warning systems. However, challenges remain in terms of optimizing spatial and temporal resolutions and applying these methods across diverse ecosystem types. As remote sensing technologies continue to advance, the integration of EFA and EFT approaches with other analytical methods promises to enhance our capacity for ecosystem monitoring, conservation planning, and sustainable resource management in the face of global environmental changes.
References:
Domingo-Marimon C, Jenerowicz-Sanikowska M, Pesquer L, Rucinski M, Krupinski M, Wozniak E, Foks-Ryznar A, Abdul Quader M (2024) Developing an early warning land degradation indicator based on geostatistical analysis of Ecosystem Functional Types dynamics. Ecological Indicators 169: 112815. DOI: 10.1016/j.ecolind.2024.112815.
Pesquer L, Domingo-Marimon C, Cristóbal J, Ottlé C, Peylin P, Bovolo F, Bruzzone L (2019) Comparison of ecosystem functional type patterns at different spatial resolutions in relation with FLUXNET data. Proc. SPIE, Vol. 11149: 1114908. DOI: 10.1117/12.2533049.
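The EFA triplet commonly derived from NDVI time series (productivity, seasonality, and phenology descriptors) can be sketched as follows; exact descriptor definitions vary between studies, so this is an illustrative version on synthetic data:

```python
import numpy as np

def efa_descriptors(ndvi_series):
    """Illustrative EFA triplet from an annual NDVI time series:
    productivity (annual mean), seasonality (relative annual range),
    and phenology (time step of the NDVI maximum)."""
    s = np.asarray(ndvi_series, dtype=float)
    mean = s.mean()
    seasonality = (s.max() - s.min()) / mean  # relative annual range
    phenology = int(s.argmax())               # index of peak NDVI
    return mean, seasonality, phenology

# Synthetic monthly NDVI with a single growing season peaking mid-year
months = np.arange(12)
ndvi = 0.3 + 0.4 * np.exp(-((months - 6) ** 2) / 4)
mean, seas, peak = efa_descriptors(ndvi)
print(f"mean={mean:.2f}, seasonality={seas:.2f}, peak month index={peak}")
```

Binning each descriptor into a few classes and combining the class labels per pixel is one common way such descriptors are turned into discrete EFT maps.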

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Assessing spectral-functional diversity relationships though scales in a monoculture experiment

Authors: Javier Pacheco-Labrador, M. Pilar Martín, Rosario Gonzalez-Cascon, Vicente Burchard-Levine, Lucía Casillas, Victor Rolo, David Riaño
Affiliations: Enviromental Remote Sensing and Spectroscopy Laboratory (SpecLab), Spanish National Research Council (CSIC), National Institute for Agriculture and Food Research and Technology (INIA), Spanish National Research Council (CSIC), Tec4AGRO. Institute of Agricultural Sciences (ICA). Spanish National Research Council (CSIC), Forest Research Group, INDEHESA, University of Extremadura
Grasslands and grassland-dominated ecosystems, such as tree-grass ecosystems, play a fundamental role in the global carbon balance and the population’s subsistence in vulnerable regions. Protecting the entire range of grasslands’ ecosystem services requires assessing the influence of management and environmental drivers on these services and the role of biodiversity in their provision. While remote sensing offers tools to help monitor and better understand plant properties, functions, and diversity in grasslands and, therefore, the relationships between diversity and ecosystem function, grass plant sizes limit this potential. Still, remote sensing could offer vegetation diversity proxies to reveal diversity’s role in ecosystem functions and services, even if individuals cannot be distinguished. In this study, a monoculture experiment was implemented with 7 herbaceous species, including C3 and C4 grasses, legumes and forbs typical of Mediterranean grasslands, to assess the capacity of hyperspectral data to detect intra- and inter-specific differences in foliar functional traits of pasture species at different phenological stages, and their plastic responses to water shortage. The experiment included 42 plots (1.5x1.5 m), with six replicates of each species, organized in two blocks. Water regimes were manipulated to simulate typical versus water-stress conditions. We assess how the relationships between spectral and plant functional diversity vary at different spatial scales in a monoculture experiment. Leaf, canopy, and drone spectral measurements are combined with laboratory estimates of plant functional traits. We assess the capability of different spectral measurements to decipher the inter- and intraspecific variability of the different species’ functional diversity and which functional traits dominate the relationships at each scale. The study includes the temporal dimension, as the experimental plots are measured across their phenological development.
Results revealed clear effects of phenology on spectral diversity and the significant role of leaf water content (LWC) in the spectral-functional relationships, suggesting that SWIR hyperspectral sensors could contribute to characterizing diversity in grasslands.
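One common way to quantify the spectral diversity discussed above is the mean distance of pixel spectra to their spectral centroid; this is a widely used proxy and not necessarily the metric used in this study:

```python
import numpy as np

def spectral_diversity(pixels):
    """Spectral diversity as the mean Euclidean distance of pixel spectra
    to the plot's spectral centroid (a common proxy metric)."""
    pixels = np.asarray(pixels, dtype=float)  # shape: (n_pixels, n_bands)
    centroid = pixels.mean(axis=0)
    return float(np.linalg.norm(pixels - centroid, axis=1).mean())

rng = np.random.default_rng(0)
uniform_plot = rng.normal(0.4, 0.01, (100, 50))  # spectrally homogeneous plot
mixed_plot = rng.normal(0.4, 0.10, (100, 50))    # spectrally heterogeneous plot
print(spectral_diversity(uniform_plot) < spectral_diversity(mixed_plot))  # True
```

Computed at leaf, canopy, and drone scales, such a metric lets the spectral-functional diversity relationship be compared across the spatial scales the experiment targets.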

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Mapping and monitoring of natural and artificial floating materials in aquatic environments using PRISMA data

Authors: Erika Piaser, Dr.ssa Federica Braga, Mariano Bresciani, Dr.ssa Claudia Giardino, Dr. Paolo Villa
Affiliations: National Research Council (CNR), Institute for Electromagnetic Sensing of the Environment (IREA), Politecnico di Milano, National Research Council (CNR), Institute of Marine Sciences (ISMAR), National Biodiversity Future Center (NBFC)
The potential of hyperspectral data for the detection and differentiation of various floating materials, such as macrophytes and oil spills, has traditionally relied on proximal and airborne data due to the lack of suitable spaceborne platforms. The launch of the PRISMA mission by the Italian Space Agency (ASI) has made it possible to apply hyperspectral techniques to satellite data in practical scenarios. Within the PANDA-WATER project, we have developed and tested applications using PRISMA data to monitor various natural and artificial floating materials, focusing on three key products: algae scum and floating macrophyte cover, macrophyte status and oil spills. The first product uses specific spectral features of surface reflectance bands in the VNIR and SWIR as input to machine learning classification models. These models, optimised for distinguishing algae scum from floating vegetation, have achieved accuracies higher than 99% across multiple geographic regions, including Europe, Africa, Asia and the Americas. The second product uses narrowband and broadband spectral indices as proxies to generate continuous maps of macrophyte functional traits at both canopy (e.g. fractional cover, leaf area index) and pseudo-leaf (e.g. pigment content, leaf mass per area) scales. This mapping of macrophyte status was carried out on plant communities in northern Italian lakes, covering different community types, species and phenological stages. The third product focuses on oil spill detection by extracting spectral features sensitive to the presence and abundance of oil on water surfaces. These features, derived from reflectance spectra in the VNIR and SWIR regions, allow oil and water to be distinguished through optimised thresholding techniques. 
The oil spill detection methodology was validated using PRISMA-like data resampled from AVIRIS flights over the 2010 Deepwater Horizon oil spill in the Gulf of Mexico and tested on PRISMA data from the 2023 Oriental Mindoro oil spill in the Philippines. These products demonstrate the potential of hyperspectral satellite data for monitoring aquatic environments, offering high accuracy and applicability across scales and environments.
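The thresholding step described above can be illustrated with a minimal sketch. The snippet below is not the PANDA-WATER implementation: it builds a synthetic oil-sensitive spectral feature (hypothetical values, not actual PRISMA band combinations) and separates oil from clear water with Otsu's method, one common choice of optimised thresholding.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic spectral feature assumed to be higher over oil than over
# clear water (illustrative values, not real PRISMA reflectances).
water = rng.normal(0.05, 0.01, size=800)
oil = rng.normal(0.20, 0.03, size=200)
feature = np.concatenate([water, oil])

def otsu_threshold(x, n_bins=128):
    """Otsu's method: pick the threshold maximising between-class variance."""
    hist, edges = np.histogram(x, bins=n_bins)
    mids = 0.5 * (edges[:-1] + edges[1:])
    w = hist.cumsum()                      # cumulative pixel counts
    m = (hist * mids).cumsum()             # cumulative intensity sums
    w0, w1 = w[:-1], w[-1] - w[:-1]        # class sizes for each cut
    mu0 = m[:-1] / np.where(w0 == 0, 1, w0)
    mu1 = (m[-1] - m[:-1]) / np.where(w1 == 0, 1, w1)
    between_var = w0 * w1 * (mu0 - mu1) ** 2
    return float(mids[between_var.argmax()])

t = otsu_threshold(feature)
oil_mask = feature > t
print(f"threshold={t:.3f}, flagged pixels={int(oil_mask.sum())}")
```

In practice the chosen threshold would be validated against reference oil masks, as was done here with the AVIRIS data from the Deepwater Horizon event.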
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Investigating the Impact of Atmospheric Correction on PLSR-Based Vegetation Trait Retrieval

Authors: Christiana Ade
Affiliations: NASA JPL
Mapping vegetation traits is critical for understanding ecosystem processes, informing conservation strategies, and assessing environmental changes on a global scale. As we strive to develop universally applicable trait retrieval algorithms, it is vital to investigate how atmospheric correction influences Partial Least Squares Regression (PLSR)-based trait maps, particularly given the diverse surface reflectance retrieval methods employed by different agencies. Using airborne flight data from NEON over Colorado, we conducted a cross atmospheric correction trait map comparison. Three PLSR models were trained on images processed with different atmospheric corrections: ACORN, ISOFIT, and ATCOR. Each PLSR model was then applied to three separate images, each processed with one of the atmospheric corrections. We focused on retrieving three key vegetation traits—foliar nitrogen (N), leaf mass per area (LMA), and leaf water content (LWC)—which are of high ecological interest and have been demonstrated to perform well across platforms. Results reveal significant variability in PLSR trait maps, indicating that a globally applicable PLSR model cannot accommodate imagery processed with differing atmospheric corrections without adjustment. This suggests that cross-agency PLSR models will be challenging to standardize. However, encouragingly, some traits exhibit high consistency when models are applied to images processed with the same atmospheric correction used during model training. This finding underscores the potential for cross-agency collaboration through the Level 3 products or by recalibrating trait models to specific atmospheric corrections using shared image locations and field data. These insights highlight the importance of considering atmospheric correction in trait retrieval workflows and provide a pathway for supporting global vegetation trait mapping efforts.
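As a sketch of the PLSR workflow, the snippet below fits a bare-bones PLS1 (NIPALS) model on synthetic spectra and applies it to held-out pixels. The spectra and trait values are simulated stand-ins, not NEON data; a real analysis would use a full PLSR package with measured traits.

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Minimal PLS1 (NIPALS): returns (b, x_mean, y_mean) so that
    predictions are (X_new - x_mean) @ b + y_mean."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)
        t = Xc @ w
        tt = t @ t
        p = Xc.T @ t / tt
        qk = (yc @ t) / tt
        Xc = Xc - np.outer(t, p)   # deflate spectra
        yc = yc - qk * t           # deflate trait
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    b = W @ np.linalg.solve(P.T @ W, q)
    return b, x_mean, y_mean

rng = np.random.default_rng(0)
# Synthetic reflectance spectra (pixels x bands) and a hypothetical trait
# (e.g., foliar N) depending linearly on the spectra plus noise.
spectra = rng.random((200, 50))
trait = spectra @ rng.normal(size=50) + rng.normal(scale=0.1, size=200)

b, xm, ym = pls1_fit(spectra[:150], trait[:150], n_components=10)
pred = (spectra[150:] - xm) @ b + ym
r = float(np.corrcoef(pred, trait[150:])[0, 1])
print(f"hold-out correlation r={r:.2f}")
```

The cross-correction result reported above corresponds to fitting such a model on spectra from one atmospheric correction and predicting on spectra from another; agreement then degrades unless the model is recalibrated.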
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Developing a Data Cube for Biodiversity and Carbon Dynamics Assessment in Estonia with Remote Sensing data

Authors: Oleksandr Borysenko, Jan Pisek, Mirjam Uusõue, Alexander Kmoch, Holger Virro, Wai Tik Chan, Eveli Sisas, Ats Remmelg, Marta Jemeljanova, Evelyn Uuemaa
Affiliations: University Of Tartu
We are constructing a comprehensive data cube at the national level for Estonia, leveraging remote sensing and geospatial data to advance biodiversity (BD) and carbon (C) dynamics research. The full potential of fusing active (LiDAR, radar) and passive remote sensing has not yet been realised. Moreover, multi-temporal (seasonal) feature sets, consisting of numerous combinations of spectral bands, can hold the potential to predict compositional vegetation classes. The combination of these approaches is promising for applications in biodiversity mapping and modelling. Remote sensing data, including Sentinel-1, Sentinel-2, Landsat, and high-resolution airborne LiDAR, will be sourced from open repositories (e.g., Copernicus Open Access Hub). Vegetation indices and other derivatives will be calculated and integrated into the data cube. To capture the multi-temporal aspect, we will use principal component analysis. Our framework organizes analysis-ready remote sensing data in a data cube at the national level, enabling efficient retrieval, storage, and extraction of spatial and temporal extents from input and project-generated datasets. The data cube includes tiled and hierarchical variables at multiple resolutions. Complementary vector data, such as experimental site and habitat information, will also be included. Sentinel-2 spectral diversity indices are explored as proxies for biodiversity, providing an initial evaluation of the spectral diversity hypothesis under Estonian conditions. Biodiversity assessment employs spectral species concepts and k-means clustering to analyze gridded remote sensing data, producing 2D α- and β-diversity heterogeneity maps. We employ monthly and seasonal composite Sentinel-2 images processed in Google Earth Engine using the Cloud Score+ S2_HARMONIZED dataset.
This dataset is produced from the harmonized Sentinel-2 L1C collection, enabling the identification of relatively clear pixels and the effective removal of clouds and cloud shadows. Diversity indices are calculated using the biodivMapR library, enabling robust and reproducible biodiversity assessments. Additionally, high-resolution (10 m) ecological descriptors of vegetation and terrain are generated using airborne laser scanning point data from the Estonian Land Board. We demonstrate and discuss the potential as well as the limitations of novel spectral species indices and multitemporal frameworks for the automatic mapping of vegetation types in complex landscapes. Random Forest models will be trained using Scikit-learn on these harmonized datasets and interpreted with explainable artificial intelligence (XAI) methods (Shapley values) to model biodiversity and carbon stocks/emissions at the national level. This scalable approach has the potential to enhance environmental monitoring and inform sustainable land management strategies across Estonia and similar regions.
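The spectral species workflow (clustering pixels, then mapping diversity per grid cell, as biodivMapR does) can be sketched as follows. Everything here is synthetic and minimal: toy 4-band pixels drawn from three made-up vegetation types, a bare-bones k-means, and Shannon alpha-diversity per 10x10-pixel cell; it is an illustration, not the project pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a small Sentinel-2 tile: 20x20 pixels, 4 bands, drawn
# from three synthetic "vegetation types" (hypothetical reflectances).
centres = rng.random((3, 4))
true_type = rng.integers(0, 3, size=400)
pixels = centres[true_type] + rng.normal(scale=0.02, size=(400, 4))

def kmeans(X, k, n_iter=50, seed=0):
    """Bare-bones k-means returning a cluster label per row of X."""
    r = np.random.default_rng(seed)
    c = X[r.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        d = ((X[:, None, :] - c[None, :, :]) ** 2).sum(axis=-1)
        lab = d.argmin(axis=1)
        c = np.array([X[lab == j].mean(axis=0) if (lab == j).any() else c[j]
                      for j in range(k)])
    return lab

# Cluster pixels into "spectral species" ...
species = kmeans(pixels, k=3)

# ... then map Shannon alpha-diversity of spectral species per grid cell.
def shannon(counts):
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

grid = species.reshape(20, 20)
alpha = [shannon(np.bincount(grid[i:i + 10, j:j + 10].ravel(), minlength=3))
         for i in (0, 10) for j in (0, 10)]
print([round(a, 2) for a in alpha])
```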
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Mobilizing Animal Movement Data to Make Better Maps of Functional Fragmentation in African Savannas

Authors: Lorena Benitez, Dr Jared Stabach, Dr Kate Parr, Dr Mahesh Sankaran, Dr Casey Ryan
Affiliations: University Of Edinburgh, Smithsonian Conservation Biology Institute, University of Liverpool, University of Pretoria, University of the Witwatersrand, National Centre for Biological Sciences, Tata Institute of Fundamental Research
Field data are currently underutilized by remote sensing scientists mapping fragmentation. Traditional assessments of fragmentation, such as the landscape division index, rely on distinguishing differences in vegetation structure. This works for forests, but in naturally patchy, disturbance-driven ecosystems like savannas, where there is a mix of high- and low-biomass areas, fragmentation is difficult to detect in this way. Instead of relying on vegetation structure to delineate fragmentation in savannas, we suggest incorporating measures of landscape functionality, specifically connectivity, into fragmentation assessments. A wealth of potential field data exists which can be used to calibrate and validate functional fragmentation models and associated maps. In particular, large volumes of animal movement data exist that can be used as ‘labels’ to train models of functional fragmentation. Animal-borne tags produce thousands of points per individual and provide detailed spatiotemporal information regarding landscape connectivity. Data on animals provide many opportunities for adding functional context to fragmentation maps, since animals often strongly influence vegetation structure and ecosystem function. Additionally, data regarding animal dispersal may be a good way to test how specific landscape features influence landscape connectivity (e.g. the impact of different types of roads) or whether fragmentation is occurring without vegetation change (e.g. avoidance of humans). In this study, we tested how fragmentation maps made with and without animal movement data compare within the Greater Maasai Mara Ecosystem, Kenya (34°40′ E, 1°00′ S to 35°50′ E, 1°80′ S). We used global land cover products (ESA CCI, ESA WorldCover, and GLAD Global Land Cover) to quantify fragmentation by anthropogenic land covers. We also created our own site-level land cover maps using unsupervised classification with Landsat-8 and ALOS PALSAR-2 imagery.
All land cover maps were simplified into binary maps (anthropogenic/natural) at 100 m resolution for comparative purposes. Finally, we used machine learning to make functional fragmentation maps using data from GPS-collared wildebeest (n=15, 50,000 points) and lions (n=8, 68,000 points). Our models were spatially validated by bisecting the study region, with training and testing occurring on the western half and validation on the eastern half. We found that the global land cover products varied widely regarding the degree of fragmentation of the Mara Ecosystem. Habitat area also varied widely, with maps indicating that as little as 20% or as much as 96% of the area is ‘natural’ habitat. The landscape division index (LDI) ranged from 0.06 for WorldCover to 0.55 for CCI, reflecting a large difference in the classification of natural habitat. Our site-level land cover classification had a higher landscape division index of 0.66. Conversely, the fragmentation maps based on animal movement data resulted in an LDI of 0.40, but with much less habitat area indicated compared to land cover methods. The high level of disagreement between maps highlights how little we know about land cover and fragmentation in savanna ecosystems. This is especially problematic as savannas cover the largest land area in the tropics and are expected to undergo further pressure from land use and climate change. Developing better methods for mapping savannas and their connectivity is vital to determining how best to conserve these important ecosystems.
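The landscape division index used above has a simple closed form (Jaeger's degree of landscape division): LDI = 1 - sum((a_i / A_t)^2), where a_i are the areas of the habitat patches and A_t is the total landscape area. A minimal sketch with illustrative numbers (not the Mara data):

```python
import numpy as np

def landscape_division_index(patch_areas, total_area):
    """LDI = 1 - sum((a_i / A_t)^2): 0 for a single intact patch
    covering the landscape, approaching 1 as habitat is finely divided."""
    a = np.asarray(patch_areas, dtype=float)
    return 1.0 - float(((a / total_area) ** 2).sum())

# One intact patch covering the whole landscape -> no division.
print(landscape_division_index([100.0], 100.0))      # 0.0
# The same habitat split into four equal, disconnected patches.
print(landscape_division_index([25.0] * 4, 100.0))   # 0.75
```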
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Back to Black - Harnessing the Spatial Resolution of SDGSAT-1 for Biodiversity Monitoring

Authors: Dominique Weber, PD Dr. Janine Bolliger, Dr. Klaus Ecker, Dr. Claude Fischer, Christian Ginzler, Prof. Dr. Martin M. Gossner, Laurent Huber, Dr. Martin K. Obrist, Dr. Florian Zellweger, Prof. Dr. Noam Levin
Affiliations: Swiss Federal Institute for Forest Snow and Landscape Research WSL, Geneva School of Engineering, Architecture and Landscape – HEPIA, University of Applied Sciences and Arts of Western Switzerland, Department of Environmental Systems Science, Institute of Terrestrial Ecosystems, ETH Zurich, Department of Geography, The Hebrew University of Jerusalem, Earth Observation Research Center, School of Earth and Environmental Sciences, University of Queensland
The rapid increase of light pollution in recent decades has become a global threat to biodiversity at all levels. Artificial light at night (ALAN) affects organisms in terrestrial and aquatic ecosystems in various, often detrimental ways. By disrupting circadian rhythms, ALAN can lead to disturbed physiological processes, but also to altered behaviour of nocturnal species, reducing their foraging ability and increasing predation risk. Despite the mounting evidence of the negative ecological impacts of ALAN, we still lack suitable tools and capabilities for assessing and monitoring ALAN at ecologically relevant scales. Recently, data from a multispectral sensor on board the Chinese Sustainable Development Goals Science Satellite 1 (SDGSAT-1) have become available. These data provide a great improvement in spatial resolution and spectral detail and thus open up new perspectives for ecology and conservation. We review the current contribution of night-time satellites to ecological applications and discuss the potential value of the Glimmer sensor onboard SDGSAT-1 for quantifying ALAN. Due to their coarse spatial resolution and panchromatic nature, currently used data from the DMSP/OLS and VIIRS/DNB space-borne sensors are of limited use for assessing local light pollution and the ecological and conservation-relevant effects of ALAN. SDGSAT-1 now offers new opportunities to map the variability of light intensity and spectra at fine spatial resolutions, providing the means to identify and characterise different sources of ALAN, and to relate ALAN to local parameters and in situ measurements. We demonstrate some key ecological applications of SDGSAT-1, such as assessing habitat quality of protected areas, evaluating wildlife corridors and dark refuges in urban areas, and modelling the visibility of light sources to animals.
Monitoring ALAN at 10–40 m spatial resolution enables scientists to better understand the origins and impacts of light pollution on sensitive species and ecosystems, and it assists practitioners in implementing local conservation measures. Our study thus provides new perspectives for sound ecological impact assessment of ALAN and conservation management using space-borne remote sensing. We conclude that SDGSAT-1, and possibly similar future satellite missions, will significantly advance ecological light pollution research to better understand the environmental impacts of light pollution and to devise strategies to mitigate them. However, to boost the use of SDGSAT-1 Glimmer data for science and practice, further research, solutions to data quality and accessibility issues, and the continuation of the mission are essential. The combination with other remote sensors and in situ measurements is essential to (1) understand and quantify ALAN data delivered by satellites and (2) advance conclusive ecological impact assessment and monitoring of ALAN, for example by upscaling from photometers to UAVs and satellites.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Phytoplankton Community Assessment Using Optical Data in the Shallow Eutrophic Lake Võrtsjärv

Authors: Kersti Kangro, Mr. Ian-Andreas Rahn, Mrs. Kai Piirsoo, Mr. Rene Freiberg, Mr. Joel Kuusk, Krista Alikas
Affiliations: Tartu Observatory, University of Tartu, Estonian University of Life Sciences
Lake Võrtsjärv, with a surface area of 270 km² and a maximum depth of 6 meters, is the second largest lake in Estonia. It is a turbid and eutrophic lake, where chlorophyll a (Chl a) concentration varies between 3.4–72.2 mg/m³ (median 40.4), total suspended matter between 1.6–52.7 g/m³ (median 16), and absorption by coloured dissolved organic matter at 442 nm between 1.9–8.9 m⁻¹ (median 2.9). The variability and changes in in-water parameters can be retrieved from both the Sentinel-3 Ocean and Land Colour Instrument (OLCI) and the Sentinel-2 MultiSpectral Instrument (MSI). Phytoplankton, which forms the basis of all aquatic food chains, is crucial for assessing water quality and eutrophication processes. Additionally, phytoplankton plays a significant role in carbon fixation and the carbon cycle in water bodies. The composition of algal pigments indicates the structure of the phytoplankton community and can be estimated through optical observations. Retrieving different types of pigments from phytoplankton absorption is essential for developing further applications. Besides algal pigment composition, microscopic estimation remains relevant as it provides insights into the phytoplankton community at the species level and helps identify potentially toxic genera. In L. Võrtsjärv, the phytoplankton community is primarily dominated by shade-tolerant cyanobacteria species (Limnothrix planktonica and L. redekei), with diatoms appearing in spring and autumn. Phytoplankton biomass and Chl a tend to increase linearly from spring to autumn. The in-water parameters in L. Võrtsjärv are heavily influenced by water levels, which can fluctuate by up to 1.5 meters. Võrtsjärv is a well-studied lake, with continuous research on the phytoplankton community dating back to the 1960s. Phytoplankton absorption measurements are more recent, being made once a month starting from 2014, but here we focus on the last two years.
Initially, the optical water type classification of Uudeberg (2020) was applied, which considers features in the reflectance spectra and classifies waters into five optical water types (Clear, Brown, Moderate, Turbid, and Very Turbid). The optical water type for a specific date can be retrieved from HYPSTAR® reflectance data. HYPSTAR is a Hyperspectral Pointable System for Terrestrial and Aquatic Radiometry, providing automated, in-situ multi-angular reflectance measurements of land and water targets, covering the 380–1020 nm spectral range at 3 nm spectral resolution (Kuusk et al., 2024). It has been measuring at the pier of L. Võrtsjärv for two vegetation periods, starting from 2023. Previously, models based on Chl a concentration, Gaussian decomposition, and an inversion model relying on PCA (similar to Zhang et al., 2021) were developed using in situ measured absorption spectra and high-performance liquid chromatography pigment data from 30 small Estonian lakes, with measurements taken three times during the vegetation period. These models will be applied to L. Võrtsjärv data to detect dominant algal groups. These data will later serve as input for models derived during the AQUATIME project. The AQUATIME project aims to enhance applications for Ecosystem and Biodiversity Monitoring, Inland Water Management, and Coastal Management, focusing on novel possibilities for phytoplankton monitoring. The hyperspectral capabilities of ESA’s planned CHIME mission will provide more detailed information about phytoplankton parameters and allow for more specific products compared to Sentinel-3/OLCI.
References:
Kuusk, J., Corizzi, A., Doxaran, D., Duong, K., Flight, K., Kivastik, J., Laizans, K., Leymarie, E., Muru, S., Penkerc’h, C., Ruddick, K. 2024. HYPSTAR: a hyperspectral pointable system for terrestrial and aquatic radiometry. Frontiers in Remote Sensing 5. https://doi.org/10.3389/frsen.2024.1347507
Uudeberg, K. 2020. Optical Water Type Guided Approach to Estimate Water Quality in Inland and Coastal Waters. Dissertationes physicae Universitatis Tartuensis, 124. 67 pp. https://dspace.ut.ee/handle/10062/67338
Zhang, Y., Wang, G., Sathyendranath, S., Xu, W., Xiao, Y., Jiang, L. 2021. Retrieval of Phytoplankton Pigment Composition from Their In Vivo Absorption Spectra. Remote Sensing 13, 5112. https://doi.org/10.3390/rs13245112
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: BIOMONDO - Towards Earth Observation supported monitoring of freshwater biodiversity

Authors: Petra Philipson, Carsten Brockmann, Miguel Dionisio Pires, Marieke Eleveld, Niklas Hahn, Jelle Lever, Daniel Odermatt, Aafke Schipper, Jorrit Scholze, Kerstin Stelzer, Susanne Thulin, Tineke Troost
Affiliations: Brockmann Geomatics Sweden AB, Brockmann Consult GmbH, Deltares, PBL Netherlands Environmental Assessment Agency, Eawag, Swiss Federal Institute of Aquatic Science and Technology
The European Space Agency activity called Biodiversity+ Precursors is a contribution to the joint ESA and European Commission Flagship Action on Biodiversity and Vulnerable Ecosystems, launched in February 2020 to advance Earth System Science and its response to the global challenges that society is facing. The BIOMONDO Precursor focused on biodiversity in freshwater ecosystems. The project developments were based on an analysis of the major knowledge gaps and science questions on biodiversity and vulnerable ecosystems, an assessment of how recent and future Earth Observation systems can help address these scientific challenges in biodiversity knowledge, and a demonstration of Earth System Science approaches through a number of pilot studies called Earth System Science Pilots for Biodiversity. The project concluded with the development of a Science Agenda and a scientific roadmap, serving as a basis for the implementation phase of the EC-ESA actions to further increase global Earth Observation supported monitoring of biodiversity. Based on an in-depth analysis of the relevant sources for scientific and policy priorities, the main knowledge gaps and challenges in freshwater biodiversity monitoring were identified. The findings were used in BIOMONDO to develop three pilot studies that integrate Earth Observation data and biodiversity modelling using advanced data science and information and communications technology. Each pilot addressed objectives and knowledge gaps corresponding to one of the following drivers of global environmental change in freshwater ecosystems: pollution and nutrient enrichment (Pilot 1), climate change (Pilot 2), and habitat change (Pilot 3). More specifically, in Pilot 1 we explored the opportunity to upgrade ecosystem modelling by integrating EO data into Delft3D. Delft3D is a 3D modelling suite used to investigate hydrodynamics, sediment transport and morphology, and water quality in fluvial, estuarine and coastal environments.
In Pilot 2 we explored the use of Earth Observation based water temperature to quantify the impacts of temperature increases and heat waves on freshwater fish diversity. In this pilot we used a novel phylogenetic heat tolerance model, created by PBL as part of the GLOBIO model suite, which estimates the thermal tolerance of freshwater fish species. In Pilot 3 we combined Earth Observation data and the modelled degree of geographic range fragmentation, expressed as a connectivity index, for monitoring and assessing the impact of dam construction and removal on biodiversity, including the effects on habitat fragmentation and water quality. The pilot studies were implemented and validated for selected sites to showcase their applicability and impact for science and policy. The generated products constitute the so-called BIOMONDO Experimental Dataset, and the results have been presented, assessed, and discussed with external stakeholders. The Experimental Datasets were gathered in the BIOMONDO Freshwater Laboratory. Central to this Lab is the federation of all data on common grids (a data cube). Analysis and processing functions for state-of-the-art methods, as well as visualisation interfaces and export functions, are available in the Lab. The external stakeholders were given access to the novel Earth Observation products and model results through the Lab and supported the validation and evaluation of the scientific impact and policy benefit of the developments.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Seasonal Patterns of Local and Regional Plant Biodiversity Observed from Hyperspectral Airborne Imagery

Authors: Elisa Van Cleemput, Kristen Lewers, Benjamim Poulter, Bryce Currey, Peter Adler, Katharine Suding, Laura Dee
Affiliations: Leiden University College, University of Colorado Boulder, NASA Goddard Space Flight Center, Montana State University, Utah State University
Mapping and monitoring biodiversity are essential activities that form the foundation of effective biological conservation efforts. Hyperspectral sensors on airborne and spaceborne platforms show great promise to contribute to this endeavor. Indeed, because of their high spectral resolution, hyperspectral data have been uniquely capable of characterizing various aspects of what terrestrial biodiversity entails, including morphological, biochemical and phenological vegetation properties. The promise of hyperspectral sensors is that they can not only be used to visualize spatial biodiversity patterns, but also allow researchers and practitioners to study biodiversity changes over time, including tracking progress towards restoration goals and detecting responses to disturbance and stress. To effectively support these applications, it is essential to consider their required spatial and temporal resolutions, as different measurement platforms involve significant trade-offs among spectral, spatial, and temporal capabilities. Local diversity is not necessarily correlated with regional diversity, and the two may therefore play different roles in supporting ecosystem functions; phenology may impact both local and regional biodiversity. In this study, we employed airborne imagery of the Surface Biology and Geology High-Frequency Time Series (SHIFT) campaign to explore the scale- and time-dependency of (spectral) diversity. Using the AVIRIS-NG (Airborne Visible/Infrared Imaging Spectrometer-Next Generation) instrument, the SHIFT campaign collected hyperspectral imagery over the Jack and Laura Dangermond Preserve (JLDP) on an approximately weekly basis from late February to late May with a spatial resolution of ~5 m. The JLDP hosts various ecosystems; we focused on the Coast Live Oak Woodlands ecosystem, which consists of relatively open savannas and dense closed-canopy old-growth forests.
We hypothesized that this ecosystem had relatively lower local (alpha) spectral diversity (e.g., relatively homogeneous plain grasslands) compared to regional (beta) diversity (i.e., turnover from more grassy to more woody vegetation). Additionally, as grasses senesce over the season, we hypothesized that the contribution of spectral alpha-diversity to total diversity would decrease (e.g., it becomes harder to distinguish species in a senesced grassland), and that of spectral beta-diversity would increase (larger difference between senesced grassland and evergreen forested locations in the landscape). To partition spectral diversity into local (alpha) and regional (beta) diversity, we used the approach developed by Laliberté et al. (2020), where alpha-diversity was calculated as the spectral dissimilarity between all pixels in a 30 m x 30 m community, and beta-diversity was calculated as the spectral dissimilarity between those communities. The community size was chosen to correspond with the typical spatial resolution of hyperspectral satellite sensors. We applied this algorithm to all images across the season to obtain a time series of spectral diversity information. In line with our expectations, beta-diversity indeed contributed more to overall diversity than alpha-diversity: across the season, beta-diversity was responsible for 65-75% of the total diversity. As the season progressed this contribution decreased, as hypothesized. We observed a peak in spectral alpha-diversity in the beginning of March, which corresponds with peak flowering. The observation that beta-diversity contributed more to overall diversity than alpha-diversity in this ecosystem is promising for the use of hyperspectral satellite sensors that measure Earth’s surface with a spatial resolution similar to the size of the communities in this study.
Important seasonal biodiversity patterns may however not be picked up by hyperspectral satellite sensors, as they typically have longer revisit times. Follow-up research needs to clarify whether similar patterns are present in other ecosystem types (e.g., grasslands and chaparral in the JLDP). In conclusion, this study sheds light on expectations and measurement objectives of current and future spectroscopy missions.
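The alpha/beta partition of spectral diversity applied here (after Laliberté et al. 2020) is, at its core, a sum-of-squares decomposition: total spectral variation around the global centroid splits exactly into within-community (alpha) and between-community (beta) components. A toy sketch with synthetic spectra (not SHIFT imagery):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy landscape: 4 communities ("30 m plots") of 36 pixels x 5 bands,
# with differing community means to mimic grass-to-woodland turnover
# (synthetic values, not AVIRIS-NG spectra).
means = rng.random((4, 5))
pixels = np.vstack([m + rng.normal(scale=0.05, size=(36, 5)) for m in means])
community = np.repeat(np.arange(4), 36)

# Sum-of-squares partition: SS_total = SS_alpha (within) + SS_beta (between).
centroid = pixels.mean(axis=0)
ss_total = ((pixels - centroid) ** 2).sum()
ss_alpha = sum(((pixels[community == k] - pixels[community == k].mean(axis=0)) ** 2).sum()
               for k in range(4))
ss_beta = sum((community == k).sum()
              * ((pixels[community == k].mean(axis=0) - centroid) ** 2).sum()
              for k in range(4))

beta_share = float(ss_beta / ss_total)
print(f"beta-diversity share of total: {beta_share:.2f}")
```

With tight communities and strong turnover between them, as simulated here, the beta share dominates, qualitatively mirroring the 65-75% reported above.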
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Semi-supervised object-based classification of coastal dune vegetation covers in SW Spain using Sentinel-2 imagery

Authors: Diego López-Nieta, Víctor F. Rodríguez-Galiano, Emilia Guisado-Pintado, Eva Romero-Chaves
Affiliations: Department of Physical Geography and Regional Geographic Analysis, Faculty of Geography and History, University of Sevilla (Spain)
Coastal dune ecosystems play a crucial role in climate change adaptation, acting as a natural barrier against flooding and serving as reservoirs of biodiversity. Lately, human activities have threatened their integrity, jeopardising this important function. Therefore, monitoring changes in coastal dunes has become an essential task in managing coastal areas. In this context, the use of satellite imagery and machine learning algorithms provides a powerful tool to delineate, differentiate and accurately map these ecosystems. This contribution is focused on a preliminary study to assess the feasibility of a semi-supervised object-oriented classification methodology using high-resolution satellite imagery from the Sentinel-2 mission and machine learning algorithms to map four coastal dune systems in SW Spain (Zahara de los Atunes, Bolonia, Valdevaqueros and Los Lances). Sentinel-2 was chosen because of its easy accessibility, high revisit frequency (5 days) and adequate spatial resolution (10 m). All this makes this Copernicus mission a valuable source of information for large-scale monitoring studies. However, most previous studies have been carried out using very high-resolution imagery, making it difficult to adequately monitor these ecosystems over time. Additionally, this work considers objects or segments instead of pixels, incorporating the spatial context and providing new insights into the study of dune ecosystems as complex landscapes. The dataset included a 2017 Sentinel-2 annual composite of the VIS, NIR and SWIR bands, resampled to 10 m, seasonal NDVI composites from the same year and texture variables derived from the Sentinel-2 annual composite. The methodology followed a semi-supervised object-oriented approach for mapping coastal dune environments, aiming to establish a basis for long-term monitoring of these ecosystems.
The Multiresolution Segmentation (MRS) algorithm was used to group pixels representing homogeneous territorial units, allowing a more accurate representation of the geospatial characteristics of these ecosystems. To achieve an optimal segmentation, the algorithm's parameters were fine-tuned using the ESP2 algorithm developed by Drăguţ et al. (2014). Sentinel-2 bands were used as input for the MRS process. Seventy-five percent of the resulting segments/objects served as training data for a supervised classification, while 25% were reserved as an independent test. The latter subset was labelled using photointerpretation techniques from digital aerial orthophotographs. Training data were labelled using the automatic K-means clustering method, grouping the segments into classes (environments), determining the optimal number of groups by applying the Elbow method. Seasonal NDVI composites and the SWIR band were used as inputs for the K-means algorithm. The spectral distance of each segment with respect to the centroid of the nearest class was calculated, grouping segments into different training subsets based on distance percentiles. The representativeness of these training subsets was evaluated using the Random Forest (RF) algorithm, considering the trade-off between accuracy and generalisability of the model for each class, since coastal dune environments are diverse, highly complex and dynamic systems. The best-performing combination of training subsets was then used as the final training set for the RF classification. Finally, the results were validated by applying the model to the photo-interpreted independent test set. The variables used for the RF-supervised models included the seasonal composites of NDVI, SWIR band and texture variables. 
MRS and K-means results allowed the identification of four environments: areas with sparse or no vegetation (ASV), areas with herbaceous vegetation (AHV), areas with arboreal vegetation (AWV), and areas with mixed vegetation (a combination of the above; AMV). The optimal training data were set at spectral distance thresholds of the 30th percentile for ASV, the 70th for AWV and the 100th for AHV and AMV. The final model achieved a strong overall accuracy of 0.86 and good agreement with the predictions, with a Kappa coefficient of 0.8. These results show the effectiveness of a semi-supervised method, achieving a stratification of the territory that facilitates the subsequent classification of coastal dune environments. Furthermore, the approach employs an operational sensor with frequent revisits (every 5 days) and adequate spatial resolution. Consequently, the next step will be to replicate this methodology across other coastal dune systems to confirm its applicability and effectiveness in future coastal monitoring studies.
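The clustering-and-filtering step described above (K-means labelling of segments, spectral-distance percentiles to select representative training subsets, then Random Forest classification) can be sketched in Python with scikit-learn. All data, the cluster count and the percentile threshold here are invented for illustration, not the study's actual values:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical per-segment features (e.g. seasonal NDVI means and SWIR):
# 300 segments forming five loose spectral groups.
X = rng.normal(size=(300, 5)) + np.repeat(np.eye(5) * 4.0, 60, axis=0)

# Unsupervised labelling of segments with K-means (in the study, the
# number of clusters would be chosen with the Elbow method).
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
labels = km.labels_

# Spectral distance of each segment to its class centroid.
dist = np.linalg.norm(X - km.cluster_centers_[labels], axis=1)

# Retain only the most representative segments per class, here the
# closest 70% (the study tuned this percentile per environment).
keep = np.zeros(len(X), dtype=bool)
for c in np.unique(labels):
    in_c = labels == c
    keep |= in_c & (dist <= np.percentile(dist[in_c], 70))

# Train the Random Forest on the filtered training subset.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X[keep], labels[keep])
```

Validation against an independently photo-interpreted test set, as in the abstract, would then be a simple `rf.score` or confusion-matrix call on held-out segments.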
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: From Space to Land: exploiting satellite-derived water quality variables for climate studies

Authors: Monica Pinardi, Rossana Caroni, Anna Joelle Greife, Mariano Bresciani, Claudia Giardino, Laura Carrea, Xiaohan Liu, Stefan Simis, Clement Albergel
Affiliations: Institute for Electromagnetic Sensing of the Environment, National Research Council, CNR - IREA, Department of Meteorology, University of Reading, Reading, Plymouth Marine Laboratory, European Space Agency Climate Office, NBFC, National Biodiversity Future Center, Department of Environmental Science and Policy, University of Milan
Despite making up less than 1% of the world's water area, lakes are an important resource, providing drinking water, biodiversity and recreational opportunities, all of which are tied to sustainable development goals. Increasing urbanisation and population growth have led to eutrophication, hydrological changes and loss of ecosystem services. Invasive species, land use changes and climate change are recognized as the main drivers of species loss in freshwater environments, where loss may be five times faster than in terrestrial environments. In the coming decades, climate change and global warming, in particular the increase in extreme weather events, are expected to have more widespread and significant impacts on biodiversity, species composition, hydrology, land cover and nutrient cycling. Monitoring such water bodies and comprehending their complex behavioural changes on a global scale is not feasible with in-situ data alone. Open-access satellite-derived data represent a way forward in understanding ecological processes and in assessing the impact of the main drivers of change on freshwaters. The Lakes_cci (Climate Change Initiative) project provides global and consistent satellite observations of lake-specific essential climate variables (ECVs): Lake Water Level and Extent, Surface Water Temperature, Ice Cover and Water-Leaving Reflectance (LWLR), which capture both the physical state of lakes and their biogeochemical response to physico-chemical and climatic forcing. With the release of version 2.1, the products cover the period 1992-2022 and provide daily data at 1 km resolution for over 2000 relatively large lakes. The project has explored multiple use cases that examine long-term time series of biophysical water quality parameters to understand possible causes of their trends, including the unique response of shallow lakes globally and the effects of heatwaves on lakes.
In the first use case, we selected a globally distributed subset of shallow lakes (mean depth < 3 m; n=347) to investigate long-term trends (2002-2020) in chlorophyll-a (Chl-a) and turbidity derived from LWLR. Shallow lakes and wetlands form a major component of inland waters and provide many ecological services, being particularly important for carbon storage and biodiversity. Due to their large surface-to-volume ratio, they are vulnerable to environmental changes driven by nutrient and pollutant loads and are sensitive to climate change. According to the trend analysis, turbidity increased significantly in 60% of the shallow lakes and decreased in 17%, while Chl-a increased significantly in 45% of the lakes and decreased in 22%. Further investigation revealed that in most lakes turbidity (50%) and Chl-a (48%) increased simultaneously with lake surface water temperature (LSWT), suggesting an impact of climate warming on lake water quality. A structural equation model, used to analyse the interactions between climatic, socio-economic and water conditions, showed that Chl-a and turbidity in most lakes increased with population and gross regional product. This finding suggests that human population growth in a lake’s catchment represents an important pressure on lake water quality. In the second use case, exploiting the high frequency and coverage of observations, we were able to gain insight into the response of lakes to sequential extreme weather events, such as the heat waves and monsoon rainfall events that occurred in India in 2019. Indian lakes are an excellent test subset across Lakes_cci variables, as monsoon dynamics require that lake turbidity, chlorophyll-a, LWL and climatic variables be considered together at seasonal and annual scales.
We examined the water quality response using time series and TAM (Time Alignment Measurement) analysis, which measures the degree of synchrony with the heatwave event, followed by cluster analysis of Chl-a and turbidity patterns. The TAM analysis showed that the rainfall time series was closer in phase with air temperature and turbidity, but less so with Chl-a, indicating a driving influence of rainfall on turbidity, probably due to the strong influence of the monsoon. The available LWL data showed high variability over a short period of time. Cluster analysis revealed two main groups of turbidity patterns: northern lakes showed peaks most likely driven by spring snowmelt, while southern lakes were dominated by peaks driven by the summer monsoon. In contrast, Chl-a patterns were less related to hydro-morphology, and likely more influenced by local nutrient dynamics and changes in LWL, helping to focus further studies centred on individual lakes.
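The per-lake trend analysis described above can be illustrated with a minimal Mann-Kendall-style monotonic trend test, using Kendall's tau between time and the variable. The turbidity series below is synthetic, purely for illustration:

```python
import numpy as np
from scipy.stats import kendalltau

# Hypothetical annual turbidity series for one lake (2002-2020):
# a modest upward trend plus observation noise.
years = np.arange(2002, 2021)
rng = np.random.default_rng(1)
turbidity = 5.0 + 0.15 * (years - 2002) + rng.normal(0, 0.3, years.size)

# Mann-Kendall-style test: Kendall's tau between time and the variable;
# p < 0.05 marks a statistically significant monotonic trend.
tau, p = kendalltau(years, turbidity)
if p < 0.05:
    trend = "increasing" if tau > 0 else "decreasing"
else:
    trend = "none"
```

Applying this per lake and tallying the outcomes yields summaries like the "turbidity increased significantly in 60% of the shallow lakes" figure reported in the abstract (the actual study's test may differ in detail).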
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: From High-Resolution Land Use/Land Cover Mapping to EUNIS Habitat Predictions: Advancing Protected Area Management through Remote Sensing and AI

Authors: José Manuel Álvarez-Martínez, Borja Jiménez-Alfaro, Javier Becerra, Carlos De Waissage, Justine Hugé, Noemi Marsico, Dimitri Papadakis, Alberto Martín, Adrián Sujar-Cost, Ana Sousa
Affiliations: CENTRO DE OBSERVACIÓN Y TELEDETECCIÓN ESPACIAL SAU, BIODIVERSITY RESEARCH INSTITUTE (UNIVERSITY OF OVIEDO-CSIC-P.ASTURIAS), COLLECTE LOCALISATION SATELLITES (CLS), EVENFLOW, EUROPEAN ENVIRONMENT AGENCY (EEA)
Monitoring land use and land cover (LULC) is indispensable for assessing biophysical variables, understanding human-environment interactions and addressing biodiversity loss and climate change. Protected Areas (PAs) are keystones of conservation strategies, serving as refugia for species and ecosystems. Effective management of PAs requires precise, up-to-date LULC information and tools to analyze biodiversity change. The Copernicus Land Monitoring Service (CLMS), through products like CORINE Land Cover and Priority Area Monitoring, delivers ready-to-use LULC datasets. However, significant challenges remain in translating these products into EUNIS habitat types to align with Annex I typologies and Article 17 of the Habitats Directive. This study presents an integrated framework that bridges high-resolution LULC mapping with EUNIS habitat predictions, providing a foundation for biodiversity assessment, ecosystem restoration and Nature-Based Solutions (NbS). Using the CLMS PA product as a foundational layer, we can develop hierarchical habitat models within EEA biogeographical regions. This approach ensures robust habitat predictions, informed by Sentinel-2 time-series imagery processed through advanced Artificial Intelligence (AI) algorithms and super-resolution techniques, offering unparalleled spatial and temporal granularity. Our methodological framework is founded on stratified sampling design, leveraging existing LULC classifications to identify representative ecological gradients and management regimes. This design minimizes biases and enhances field campaign efficiency, fostering comprehensive habitat mapping. In situ data on EUNIS habitat types plays a dual role in validating LULC products and enhancing habitat models by linking field observations with spectro-phenological signatures derived from super-resolved Sentinel-2 imagery. 
AI-driven spatial modeling integrates multivariate predictors (super-resolution spectral indices, texture metrics and time-series features) with ancillary datasets (e.g. topography, soil, climate and hydrology). This integration captures key ecological features, such as managed versus natural grasslands, diverse forest types, wetlands and small-scale landscape elements critical for biodiversity assessments and reporting obligations, enabling habitat-specific predictions and addressing region-specific ecological variability. Finally, locally tailored post-processing workflows ensure model outputs meet the rigorous requirements of the Habitats Directive, delivering scientifically robust and operationally relevant products for conservation planning. Our results demonstrate >90% accuracy for Level 1 LULC classifications, with promising outcomes for finer habitat typologies. By integrating super-resolved Sentinel-2 imagery into scalable workflows, we provide a pathway for pan-European habitat mapping, supporting compliance with Article 17 and the goals of the EU Biodiversity Strategy and European Green Deal. This work exemplifies the potential of combining remote sensing, AI, and field data to advance terrestrial biodiversity monitoring and NbS implementation. It emphasizes the role of the CLMS PA product in turning protected areas into laboratories for ecological innovation, enabling the translation of LULC products into actionable insights for ecosystem mapping, adaptive management and enhanced conservation strategies.
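The stratified sampling design underpinning the field campaigns can be sketched as follows. The LULC map, class proportions, plot budget and minimum per-stratum allocation are all hypothetical, chosen only to show how rare classes still receive field plots:

```python
import numpy as np

# Hypothetical LULC map (class codes per pixel) and a field-plot budget.
rng = np.random.default_rng(2)
lulc = rng.choice([1, 2, 3, 4], size=(100, 100), p=[0.5, 0.3, 0.15, 0.05])
n_plots = 200

# Proportional stratified allocation with a minimum per stratum, so
# rare classes (e.g. small wetlands) are not under-sampled.
classes, counts = np.unique(lulc, return_counts=True)
alloc = np.maximum(
    np.round(n_plots * counts / counts.sum()).astype(int), 10
)

# Draw plot locations without replacement within each stratum.
samples = {}
for cls, n in zip(classes, alloc):
    rows, cols = np.nonzero(lulc == cls)
    idx = rng.choice(rows.size, size=n, replace=False)
    samples[cls] = list(zip(rows[idx], cols[idx]))
```

The in situ EUNIS habitat observations collected at these locations then serve the dual role described above: validating the LULC product and training the habitat models.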
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: BioBalance: A Comprehensive Indicator to Quantify Anthropogenic Impacts on Biodiversity

Authors: Théau Degroote, Adrien Gavazzi, Fabien Castel
Affiliations: Murmuration
1. Introduction and Context
Human activities are among the most significant drivers of biodiversity loss, necessitating the development of tools to quantify and mitigate their impact. While traditional indicators often rely on species surveys, their effectiveness is limited by inconsistent spatial and temporal coverage. To overcome these challenges, BioBalance leverages the GLOBIO methodology, a well-established framework for modeling biodiversity loss driven by human pressures. By combining this methodology with Earth Observation (EO) data, BioBalance offers a scalable, reliable, and science-based approach to assessing biodiversity integrity.

2. Presentation of BioBalance
BioBalance quantifies the human impact on biodiversity using the Mean Species Abundance (MSA) index, which measures biodiversity health on a scale from 0 (complete loss) to 1 (intact biodiversity). The indicator accounts for six major anthropogenic pressures: 1. land use, 2. climate change, 3. nitrogen deposition, 4. human encroachment, 5. habitat fragmentation, and 6. road disturbance. Each pressure assessed by BioBalance generates an individual MSA score reflecting the impact of that pressure on biodiversity within a given area. To calculate the overall MSA for a site, the individual MSA values are combined multiplicatively. To ensure accuracy and scalability, BioBalance combines high-resolution datasets such as:
- CORINE Land Cover and ESA WorldCover for land use,
- ECMWF COPERNICUS for climate data,
- OpenStreetMap (OSM) for infrastructure mapping,
- the World Database on Protected Areas (WDPA) for conservation zones,
- EMEP/MSC-W modelled air concentrations for nitrogen deposition.
By combining these datasets, BioBalance delivers spatially explicit and temporally consistent assessments of biodiversity pressures. Crucially, these results are made accessible through interactive dashboards, which present clear, actionable insights. Policymakers, environmental planners, and conservation organizations can use these dashboards to visualize MSA.

3. Methodology: Assessing Key Anthropogenic Pressures
3.1. Land Use
Land use is the most impactful factor influencing biodiversity loss. BioBalance categorizes land use into 13 types based on the GLOBIO framework, grouped into natural areas (e.g., forests) and human-dominated zones (e.g., croplands, urban areas). Land use is so dominant that if a region is classified as urban or agricultural, other pressures are not considered. Notably, one-third of Earth's terrestrial area is currently used as cropland or pastureland.
3.2. Climate Change
The impact of climate change on biodiversity is calculated from the increase in temperature and its effects across biomes. Drawing on scientific studies, BioBalance determines the relationship between warming and biodiversity within each biome, ensuring a global yet biome-specific assessment of this pressure.
3.3. Nitrogen Deposition
Nitrogen deposition measures the excess nitrogen that surpasses the critical load, i.e. the ecosystem's capacity to absorb nitrogen without adverse effects. Critical loads are determined from the vegetation type specific to each biome. Observational data on nitrogen impacts are used to calculate the MSA for different ecosystems, so that only nitrogen levels beyond the ecosystem's resilience threshold enter the pressure calculation.
3.4. Human Encroachment
Human encroachment reflects the impact of human activities (e.g., hunting, food and fuel collection, tourism) on animal biodiversity in areas that would otherwise remain natural. Encroachment is assumed to occur within 10 km of urban areas or croplands. Research indicates that encroachment within this zone affects only one-third of the animal and plant population. Simulations suggest that even a small urban or cropland proportion (1.5% within a 50x50 km grid cell) is sufficient to influence biodiversity across the entire grid cell.
3.5. Habitat Fragmentation
Habitat fragmentation is primarily caused by major roads (highways, primary, and secondary roads); smaller infrastructure types are considered to have negligible effects. By merging road maps with land use maps, BioBalance identifies the largest intact habitat patch; its size determines the MSA value, based on scientific measurements correlating patch size with biodiversity health.
3.6. Road Disturbance
Road disturbance quantifies the impact of infrastructure proximity on animal biodiversity. BioBalance distinguishes five road types (highways, primary, secondary, tertiary, residential) and also includes other infrastructure such as railways, power lines, and mines. The impact zone is defined as a 1 km radius around the infrastructure, in line with studies on the effect of road disturbance on mammals and birds within 1 km of a road. In protected areas, the impact is mitigated and the MSA is adjusted accordingly.

4. Objectives and Usefulness
BioBalance aims to empower decision-makers with a practical and user-friendly tool for biodiversity management. Its interactive dashboards enable users to:
- identify priority areas for conservation efforts,
- analyze the most significant anthropogenic pressures on biodiversity.
The dashboards make biodiversity data accessible, intuitive, and actionable. BioBalance also complements other indicators developed by the company, such as tools for air quality and vegetation health, offering a comprehensive approach to ecosystem management.

5. Deployment and Practical Applications
BioBalance has already been deployed across various regions and ecosystems, demonstrating its versatility and practical value. In Occitanie (France), the tool was used to analyze regional biodiversity pressures. In regional natural parks such as the Haut-Jura (France) and Peneda-Gerês (Portugal), BioBalance has been instrumental in identifying priority areas for biodiversity conservation. The indicator has also been applied in urban contexts across multiple cities in Turkey and Thailand, giving a better understanding of the impact of urban expansion and infrastructure development on biodiversity. By highlighting priority zones for conservation, BioBalance supports stakeholders in making informed decisions to mitigate biodiversity loss while balancing development needs. Its adaptability across scales and contexts underscores its potential as a key tool for environmental planning.

6. Conclusion
BioBalance bridges the gap between scientific biodiversity assessment and actionable policy implementation. Its robust methodology, combined with interactive dashboards, enables stakeholders to tackle biodiversity loss effectively. By supporting evidence-based conservation strategies, BioBalance contributes to the development of sustainable, targeted approaches for preserving global biodiversity.
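The multiplicative aggregation of per-pressure MSA scores described in the abstract can be sketched in a few lines; the six pressure values below are invented for illustration:

```python
# Hypothetical per-pressure MSA scores for one site (1.0 = intact).
msa_pressures = {
    "land_use": 0.70,
    "climate_change": 0.90,
    "nitrogen_deposition": 0.95,
    "human_encroachment": 0.85,
    "habitat_fragmentation": 0.90,
    "road_disturbance": 0.92,
}

# GLOBIO-style aggregation: individual MSA values combine
# multiplicatively into the overall site MSA.
msa_total = 1.0
for value in msa_pressures.values():
    msa_total *= value
```

With these illustrative inputs the overall MSA comes out around 0.42, i.e. the pressures jointly reduce mean species abundance to well under half of the intact reference, even though no single pressure is extreme on its own.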
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Satellite Images for High-Resolution Species Distribution Models

Authors: Johannes Dollinger, Dr. Philipp Brun, Dr. Vivien Sainte Fare Garnot, Dr. Damien Robert, Dr. Lukas Drees, Professor Jan Dirk Wegner
Affiliations: Department of Mathematical Modeling and Machine Learning (DM3L), University of Zurich, Swiss Federal Research Institute WSL
Species distribution modeling (SDM) concerns the creation of species distribution maps to analyze how species' ranges shift under changing environmental conditions. These maps support decision-makers in selecting areas for protected status, planning land use and tracking invasive species. What makes this task challenging is the great variety of factors controlling species distributions, such as habitat conditions, human intervention, competition, disturbances, and evolutionary history. Experts either incorporate these factors into complex mechanistic models based on presence-absence (PA) data collected in field campaigns, or train machine learning models to learn the relationship between environmental data and presence-only (PO) species occurrences. Due to a sharp increase in available occurrence data from crowd-sourcing efforts, it has become viable to model thousands of species jointly. Using large amounts of crowd-sourced data comes with a trade-off in terms of bias, both spatially around inhabited areas and in sampling, which favours charismatic species. This work uses plant data at the European scale. Currently, these presence-only occurrences are heavily biased towards cities in western Europe, with a strong class imbalance ranging from 1 to 4500 observations per species. Satellite imagery is a unique modality that can help deal with the noisy PO data thanks to its high resolution. Most environmental modalities, such as climatic, soil and human footprint data, are restricted to a 1 km grid, while publicly available Sentinel-2 images have a resolution of 10 meters. Sentinel-2 is particularly well suited to the task, with bands chosen to provide a wealth of information on flora. A 10 meter resolution is not enough to identify individual plants, but it contains information on small-scale local structures, such as agriculture, forests, and proximity to bodies of water and streets, that are not covered by other modalities.
Modeling distributions from PO data using satellite images is promising but still underexplored. Spatial Implicit Neural Representations (SINR) have shown that jointly learning many species from PO data provides a useful embedding space that allows generalization to species with few samples, but the authors constrain themselves to mapping species distributions primarily based on location alone. We discuss an extension of SINR with Sentinel-2 images, dubbed Sat-SINR, jointly modeling the spatial distributions of 5.6k plant species across Europe. This model achieves an improvement of up to 3 percentage points in micro F1 and ROC-AUC compared to logistic regression and SINR. Additionally, the resulting maps show qualitative differences, such as recognizing viable habitats in under-sampled areas. We furthermore dive deeper into understanding the information that the model retrieves from satellite images.
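The micro-averaged evaluation metrics mentioned above can be computed with scikit-learn as follows. The presence-absence labels and model scores below are synthetic, standing in for the multi-species predictions of a model like Sat-SINR:

```python
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score

rng = np.random.default_rng(3)
n_sites, n_species = 500, 20

# Hypothetical presence-absence ground truth and noisy model scores.
y_true = (rng.random((n_sites, n_species)) < 0.3).astype(int)
scores = np.clip(y_true * 0.3 + rng.random((n_sites, n_species)) * 0.7, 0, 1)

# Micro averaging pools all (site, species) pairs before computing the
# metric, so rare species contribute in proportion to their samples.
micro_f1 = f1_score(y_true, (scores > 0.5).astype(int), average="micro")
auc = roc_auc_score(y_true, scores, average="micro")
```

A 3-percentage-point gain in these pooled metrics, as reported for Sat-SINR over SINR, is substantial when averaged over 5.6k species with heavy class imbalance.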
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: European Biodiversity Partnership (Biodiversa+) harmonizing trans-national long-term biodiversity monitoring

Authors: Petteri Vihervaara, Toke Hoye, Michele Bresadola, Aino Lipsanen, Iiris Kallajoki, Cécile Mandon, Julia Seeber, Jamie Alison, Helene Blasbichler, Risto Heikkinen, Sara Wiman, Gaëlle Legras, Pierre Thiriet, Mathieu Basille, Mona Naeslund, Michelle Silva del Pozo, Alberto Basset, Senem Onen Tarantini, Martina Pulieri, Lluís Brotons, Gloria Casabella, Magdalena Henry, Constantinos Phanis, Rob Hendriks, Guillaume Body, Sophie Germann, Ron
Affiliations: Finnish Environment Institute
The European Biodiversity Partnership (Biodiversa+) has been supporting transnational long-term biodiversity monitoring across Europe. Biodiversa+ was launched in October 2021 and will last until October 2028. We will summarize the key achievements of the first years of cooperation (2021-2025) on biodiversity monitoring across the more than 20 countries participating in these activities. To support the harmonization of biodiversity monitoring schemes, we have started six pilot projects that test novel monitoring methods, such as remote sensing, environmental DNA, and automated sound and image recognition, to improve biodiversity monitoring and to provide a deeper understanding of the possibilities and challenges of extending them to true long-term monitoring schemes. Currently, we have been piloting the monitoring of i) invasive alien species, ii) soil biodiversity, iii) moths, bats, and birds, iv) rocky-reef fish, and v) grassland and wetland habitats. In addition, in the sixth pilot, we have been assessing governance aspects of national biodiversity monitoring coordination in ten countries. The budget of these pilots has been 8.3M€ so far, with 19 participating organisations from 18 countries. Besides the concrete collection of monitoring data, we have also developed transversal activities, such as data management and interoperability, and have demonstrated the integration of observational data into decision-making. As part of the implementation of novel monitoring methods, a roadmap for their utilization has been produced. We have also studied current funding spent on biodiversity monitoring as well as the expected costs of upscaling the piloted monitoring schemes into true long-term monitoring programmes. We will highlight the possibilities and challenges learned from these pilots.
One of the main benefits of such transnational monitoring schemes is that they can provide calibration and validation data for Earth Observation biodiversity data products and remotely sensed data sets. Finally, possibilities to integrate in situ observations with remote sensing approaches will be discussed. We will also present a vision for the forthcoming biodiversity monitoring cooperation with Biodiversa+ and other key monitoring initiatives.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Upland Habitat Mapping Using High-resolution Satellite Imagery and Machine Learning

Authors: Dr Charmaine Cruz, Dr Philip Perrin, Dr Jerome O'Connell, Dr James Martin, Dr John Connolly, Marco Girardello
Affiliations: Trinity College Dublin, Botanical, Environmental & Conservation (BEC) Consultants Ltd, Proveye Ltd
Uplands comprise a range of extensive, mostly semi-natural habitats, including blanket bogs, heaths, fens, grasslands and those associated with exposed rocks and scree. These habitats are protected in the European Union under the Habitats Directive. They provide important services, such as carbon sequestration and storage, biodiversity support, flood mitigation and water quality regulation. However, they are also highly vulnerable to climate change and to increasing pressures and threats from anthropogenic stressors, mainly by land-use changes. Comprehensive mapping is fundamental for monitoring these vulnerable habitats as it can provide baseline data, such as the location and extent of habitats, and can be used to monitor and track their condition over time as well as to support restoration programmes. Remote sensing has emerged as a valuable tool for mapping the distribution of upland habitats over time and space. However, the resolution of most freely available satellite images, such as Sentinel-2’s 10-meter resolution, may be inadequate for identifying relatively small features, especially in the heterogeneous landscape—in terms of habitat composition—of uplands. Moreover, the use of traditional remote sensing methods, imposing discrete boundaries between habitats, may not accurately represent upland habitats as they often occur in mosaics and merge with each other. In this context, we used high-resolution (2 m) Pleiades satellite imagery and Random Forest machine learning to map habitats at two Irish upland sites. Specifically, we investigated the impact of varying spatial resolutions (i.e., by resampling from the original 2-m spatial resolution to 4-, 6-, 8- and 10-m resolutions) on classification accuracy and proposed a complementary approach to traditional methods for mapping complex upland habitats. 
Results showed that the accuracy generally improved with finer spatial resolution data, with the highest accuracy values (80.34% and 79.64%) achieved for both sites using the 2-m resolution datasets, followed by the 4-m resolution datasets with an accuracy of 77-79%. The maps produced from these datasets provide information on the spatial distribution of habitats in great detail. Coarser spatial resolution datasets, however, resulted in a reduction of the accuracy and a slight overestimation of area for narrow and small-sized habitats (e.g., eroding blanket bogs and bog pools). The total percentage area differences between the 2-m and 10-m resolution images are 8% and 11% for the two studied sites. Although these differences may appear small, they could have significant implications in monitoring small-sized habitats, particularly when tracking gradual and subtle changes in these habitats over time. Therefore, a higher spatial resolution dataset is preferred if mapping habitats in a more heterogeneous and diverse landscape. The study also demonstrated the use of crisp and fuzzy classification techniques in mapping upland habitats. Crisp classification results in a single habitat map, which is relatively easy to interpret. Fuzzy classification delivers probability maps for each habitat considered in the modelling. While these probability maps may be more difficult to interpret, they can represent the typical complex mosaics and gradual transitions of upland habitats as observed in the field. They can also be used to describe spatial confidence in the classification through computing the entropy. Using fuzzy classified maps has the potential to improve our understanding of nature’s fuzzy patterns.
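The entropy-based confidence measure for fuzzy classification described above can be sketched with numpy. The probability maps here are synthetic stand-ins for per-habitat Random Forest probabilities:

```python
import numpy as np

# Hypothetical per-habitat probability maps from a fuzzy classifier:
# shape (n_classes, rows, cols), probabilities summing to 1 per pixel.
rng = np.random.default_rng(4)
logits = rng.normal(size=(5, 50, 50))
probs = np.exp(logits) / np.exp(logits).sum(axis=0)

# Normalised Shannon entropy as a per-pixel confidence measure:
# 0 = one habitat certain, 1 = all habitats equally likely.
eps = 1e-12
entropy = -(probs * np.log(probs + eps)).sum(axis=0) / np.log(probs.shape[0])

# The crisp map is simply the most probable habitat per pixel.
crisp = probs.argmax(axis=0)
```

High-entropy pixels would correspond to the mosaic zones and gradual transitions between upland habitats, which is exactly where a single crisp label is least trustworthy.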
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: From Pixels to Paths: Animal Path Mapping using UAVs and a Deep Convolutional Neural Network - Insights from the Kruger National Park

Authors: Konstantin Müller, Leonie Sonntag, Dr Mirjana Bevanda, Antonio Jose Castañeda‐Gomez, Dr Martin Wegmann, Dr Benjamin Wigley, Dr Corli Coetsee, Dr Doris Klein, Univ.-Prof. Dr. Stefan Dech, Jakob Schwalb-Willmann
Affiliations: Earth Observation Research Cluster (EORC), Department of Remote Sensing, Scientific Services, Kruger National Park - SANParks, School of Natural Resource Management, Nelson Mandela University, Plant Ecology, University of Bayreuth, German Remote Sensing Data Center (DFD) of the German Aerospace Center (DLR)
Animals play a vital role in and for ecosystems. Information about their behavior is crucial for understanding the health and state of the surrounding environment. Their small- and large-scale movements leave traces visible as individual paths or resting sites in the landscape. With our approach, we utilize mono-temporal UAV RGB imagery to automatically map animal paths. The resulting path network gives detailed insights into animal behavior in interaction with the environment across individuals, groups and different species for subsequent biodiversity analysis. By mapping animal paths continuously, we complement existing, temporally discrete approaches of tracking individually tagged animals via GPS tags, VHF triangulation or ringing. In contrast to current UAV video-based techniques that follow and record animals, our focus on the animals’ paths throughout the landscape is non-invasive, scalable and temporally independent of animal presence. In this study, we (i) developed a semi-automatic path labeling approach for different path types and (ii) trained a Convolutional Neural Network (CNN) to segment animal paths in UAV RGB and photogrammetrically derived DSM imagery. For this, we mapped a research area in the Kruger National Park, South Africa, where the understanding of animal behavior plays a key role in biodiversity preservation and conservation management. In this regard, animal paths play an important role in understanding habitat use and the effects of changing environments. Thus, we investigate how animal path patterns relate to potential influences on animal behavior, such as food and freshwater availability, shelter, predator pressure, and human activities. Building upon other well-known line delineation tasks, e.g., road segmentation, this research explores the ability of CNNs, especially encoder-decoder-based architectures, to map animal paths from UAV data.
The project is guided by three research questions regarding (1) the impact of ground truth data generation on segmentation accuracy, (2) the contributions of network enhancements to improving segmentation, and (3) the generalizability of the model to different natural environments. We found that CNNs can segment animal paths well under a range of conditions, from clear paths to partly overgrown or strongly vegetated ones. Even in difficult scenarios, our network detects the direction of paths correctly. Furthermore, we show that refining manually labeled paths using our semi-automatic dynamic path width estimation approach increases segmentation performance. In addition, our framework is transferable across different soil types and scales. We show that our architecture outperforms existing, off-the-shelf architectures by over 7% in prediction accuracy through enhancements such as attention modules and the inclusion of denser connections. With our approach to automatically mapping animal paths from UAV RGB imagery, we offer a new method to uncover animal path networks created by many individuals across different species while reducing disturbances that could potentially alter animal behavior. This opens new opportunities for biodiversity research, e.g. by using animal path networks as a predictor in modelling the structural biodiversity of ecosystems, in species distribution modelling or for classifying habitat types. Moreover, we see potential in combining conventional tracking techniques such as GPS with our mapping approach, e.g. to associate path characteristics with species data or to capture types of movement behavior, enhancing the data foundation for ecology research and beyond.
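Segmentation quality for thin, elongated structures like animal paths is commonly reported with overlap metrics. A minimal intersection-over-union (IoU) sketch on two hypothetical binary masks (not the study's actual evaluation code):

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection-over-union of two binary masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

# Hypothetical labelled and predicted path masks on an 8x8 tile.
target = np.zeros((8, 8), dtype=bool)
target[3:5, :] = True          # a horizontal path, 2 px wide
pred = np.zeros_like(target)
pred[3:5, 1:] = True           # prediction misses the first column

print(iou(pred, target))       # prints 0.875
```

Because paths are only a few pixels wide, small boundary errors move this metric sharply, which is why the dynamic path width estimation used for label refinement matters so much for reported accuracy.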

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Modelling the Role of Multiple Global Change Drivers on Future Range Shifts in a Tropical Biodiversity Hotspot

Authors: Emma Underwood, Prof Nigel Walford, Prof Mark Mulligan, Dr Kerry Brown
Affiliations: Kingston University London, King's College London
Climate change is causing plants to alter their known ranges to track newly suitable habitat. Species extinction risk due to global and local change drivers is highest on geographically isolated islands with high endemism, such as Madagascar. Plants such as Calophyllum paniculatum (C. paniculatum) face a double threat, after high mortality rates were discovered to be linked to a newly identified vascular-wilt-like pathogen (Wright et al., 2020). We modelled C. paniculatum under multiple future climate, land cover, dispersal, and pathogen-spread scenarios through a combination of correlative and mechanistic approaches to disentangle the driving forces of range shift into the future, at national and regional scales. For mechanistic models, we parameterised scenarios using locally collected lemur dispersal data and a population-specific mortality probability from five consecutive years of spatially explicit monitoring of C. paniculatum tree health in Ranomafana National Park. Dispersal distance was parameterised from multi-year behaviour and movement observations of the lemur species Eulemur rubriventer present in the Park (Tonos et al., 2022). Initial results suggest range shift becomes increasingly limited when utilising dispersal parameters from field-collected data, as they do not account for less common, long-distance dispersal events. With current rates of pathogen spread alone, local populations of C. paniculatum may not be able to sustain themselves within the measured time period without intervention or support. Localised change drivers such as fragmentation of forest edges, with the additional increased mortality due to the pathogen, may have more direct impacts on the plants' future status than climate alone. Further analysis is required to ascertain the risk posed by localised environmental factors such as the pathogen spread, and to understand what this means for endemism in Madagascar. References: Wright, P. C., et al. "The Progressive Spread of the Vascular Wilt Like Pathogen of Calophyllum Detected in Ranomafana National Park, Madagascar." Frontiers in Forests and Global Change 3 (2020). DOI: https://doi.org/10.3389/ffgc.2020.00091. Tonos, J., et al. "Individual-based networks reveal the highly skewed interactions of a frugivore mutualist with individual plants in a diverse community." Oikos (2022). DOI: https://doi.org/10.1111/oik.08539. Tonos, J., et al., unpublished (field surveys 2018-2022).

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Predicting spatio-temporal patterns of Lantana camara in a savannah ecosystem

Authors: Lilly Schell, Konstantin Müller, Merzdorf Maximilian, Emma Else Maria Evers, Drew Arthur Bantlin, Dr. Sarah Schönbrodt-Stitt, Dr Insa Otte
Affiliations: Department of Remote Sensing, Institute of Geography and Geology, University of Würzburg; Conservation and Research Department, Akagera National Park
Invasive alien species represent the second greatest threat to global biodiversity, disrupting ecosystems, outcompeting native species, and ultimately contributing to widespread ecosystem degradation. In this context, modelling species distribution is critical for managing invasive species, as reliable information on habitat suitability is essential for effective conservation and rehabilitation strategies. This study aims to model the suitable habitat and potential distribution of the notorious invader Lantana camara (Lantana) in the Akagera National Park (1,122 km²), Rwanda, a savannah ecosystem. Spatio-temporal patterns of Lantana from 2015 to 2023 were predicted at a 30-m spatial resolution using a presence-only species distribution model in Google Earth Engine, implementing a Random Forest classification algorithm. The model incorporated remote sensing-based predictor variables, including Sentinel-1 SAR and Sentinel-2 multispectral data. Furthermore, socio-ecological parameters and in situ occurrence data of Lantana were employed. Around 33% of the study area was predicted to be suitable Lantana habitat in 2023. Habitat suitability maps indicated higher vulnerability to Lantana invasion in the central, northernmost, and southern parts of the Akagera National Park compared to the eastern and western regions for most years. Additional change detection analysis exhibited an increase in habitat suitability in the northeastern park sector and a decrease in the southwestern part of the park over the study period. The model's predictive performance was robust, demonstrated by high scores on threshold-independent metrics. AUC-ROC values, which assess the model's ability to distinguish presence from absence sites, ranged from 0.93 to 0.98, while AUC-PR values, focusing on accurate presence predictions, ranged from 0.79 to 0.94. Key factors influencing Lantana habitat suitability in the study area included the road network, elevation, and soil nitrogen levels.
Additionally, the red-edge, shortwave-infrared and near-infrared Sentinel-2 bands were identified as essential within the Random Forest classification, highlighting the efficacy of combining remote sensing and socio-ecological data with machine learning techniques to predict invasive species distributions. These results offer valuable guidance for developing successful conservation strategies to protect savannah ecosystems and mitigate Lantana spread in the future. Moreover, the methodological approach of this study provides a robust framework that is intended to be applied to comparable ecosystems facing similar challenges.
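The threshold-independent AUC-ROC reported above can be read as the probability that a randomly chosen presence site receives a higher suitability score than a randomly chosen absence (or pseudo-absence) site. A minimal stdlib sketch of that rank-based definition (the toy scores below are illustrative, not the study's data):

```python
def auc_roc(presence_scores, absence_scores):
    """AUC-ROC via the Mann-Whitney U statistic: the fraction of
    presence/absence score pairs ranked correctly (ties count half)."""
    pairs = len(presence_scores) * len(absence_scores)
    wins = sum(
        1.0 if p > a else 0.5 if p == a else 0.0
        for p in presence_scores
        for a in absence_scores
    )
    return wins / pairs

# Illustrative suitability scores from a hypothetical classifier:
# 8 of the 9 presence/absence pairs are ranked correctly.
auc = auc_roc([0.9, 0.8, 0.6], [0.7, 0.3, 0.2])
```

A value of 1.0 means perfect separation of presence from absence sites; 0.5 means no better than chance, which is why the 0.93-0.98 range above indicates strong discrimination.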

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Satellite Data-Driven Mapping of Tropical Forest-Savanna Transitions on a Global Scale

Authors: Matúš Seči, Carla Staver, Dr David Williams, Casey M. Ryan
Affiliations: School of GeoSciences, University of Edinburgh, Department of Ecology and Evolutionary Biology, Yale University, School of Earth and Environment, University of Leeds
Forest-savanna transitions are thought to be the most widespread ecotone in the tropics. These transition zones form unique ecosystems of mosaic habitats supporting substantial biodiversity and providing a variety of ecosystem services to local populations. However, ecosystem mosaics occurring within the transition zones are often misunderstood and mislabelled as degraded forest remnants rather than unique ecosystems, which makes them increasingly endangered by forest-centric management practices. At the same time, forest-savanna transition zones have received relatively little focus from researchers compared to the core areas of these biomes. Existing work focuses on local-scale understanding of the ecological processes but has not provided a systematic assessment of the extent and distribution of transition zones or evaluated their intactness and conservation status on a continental or global scale. This limits our ability to understand change and to conserve these areas effectively as they become more threatened by global environmental change and anthropogenic pressures. Here we conduct the first satellite data-driven mapping of natural forest-savanna transition zones on a global scale using vegetation structural variables. By calculating the rate of change of tree cover through space across the tropics, we identified savanna-forest transition zones across all the major tropical regions. We evaluated the intactness of these zones using remotely sensed land cover maps of anthropogenic land uses such as agriculture and deforestation and quantified the overlap of these areas with maps used for conservation planning. Next, we quantified the degree of tree-cover patchiness in the transition zones to assess how common natural ecosystem mosaics are, given their importance for biodiversity.
Finally, we described the climatic space in which these transition zones occur and quantified environmental drivers which have been shown to influence forest-savanna coexistence, such as topography, fire occurrence, hydrological dynamics and soil properties, to understand the relative importance of these drivers across the different zones. This work represents the first step towards understanding the distribution and intactness of, and the processes within, forest-savanna transition zones on a global scale. The map of natural forest-savanna transition zones will serve as a basis for further investigation into the spatiotemporal dynamics of these unique ecosystems and help inform ecosystem conservation efforts and management practices in the tropics.
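The core mapping step described above, identifying transition zones from the spatial rate of change of tree cover, can be sketched with a central-difference gradient on a gridded tree-cover field. The grid values, cell size and threshold below are illustrative assumptions, not the study's parameters:

```python
def gradient_magnitude(tc, cell_km=1.0):
    """Central-difference spatial gradient magnitude of a tree-cover
    grid (percent cover per km); border cells are left at zero."""
    rows, cols = len(tc), len(tc[0])
    g = [[0.0] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            gx = (tc[i][j + 1] - tc[i][j - 1]) / (2 * cell_km)
            gy = (tc[i + 1][j] - tc[i - 1][j]) / (2 * cell_km)
            g[i][j] = (gx * gx + gy * gy) ** 0.5
    return g

def transition_mask(tc, threshold, cell_km=1.0):
    """Flag cells whose tree-cover gradient exceeds a chosen threshold
    as candidate forest-savanna transition pixels."""
    g = gradient_magnitude(tc, cell_km)
    return [[v >= threshold for v in row] for row in g]
```

A sharp forest edge (tree cover jumping from 0% to 100% across a few cells) produces a high gradient and would be flagged, while homogeneous forest or savanna interiors would not.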

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: C.06.03 - POSTER - Validation of GNSS-RO and GNSS-R observations from small sats

GNSS Radio Occultation (RO) for atmospheric sounding has become the subject of the first Pilot Project integrating institutional (e.g. from MetOp) and commercial RO measurements into operational Numerical Weather Prediction (NWP), led by NOAA and EUMETSAT. The path to this achievement was preceded by a number of studies on calibration, data quality and validation through impact assessments, including complementary observations from other sensors. Innovation continues in GNSS-RO, for example with Polarimetric RO, and further on-going studies can be presented in this session.

A number of commercial GNSS-Reflectometry (GNSS-R) missions have been launched in the last 10 years, mostly driven by wind-speed applications, and more are planned for 2025, like the ESA Scout mission HydroGNSS, with significant innovations and with primary objectives related to land applications. As with GNSS-RO, a number of data quality and validation studies are on-going or being planned; if successful, GNSS-R could also make it into operational systems.

This session is intended for the presentation of such studies, related to the assessment of GNSS measurements typically made by miniaturised GNSS EO receivers in commercial initiatives.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: EDAP+ Atmospheric domain: SPIRE GNSS-R assessment

Authors: Leonardo De Laurentiis, Gabriele Mevi, Dr. Chloe Helene Martella, Sabrina Pinori, Dr. Clement
Affiliations: ESA
SPIRE Global is a data and analytics company that collects GNSS-R data from its constellation of Lemur-2 satellites. GNSS-R is a measurement of opportunity in which signals emitted by GPS satellites, together with their reflections from the ground, are collected by the Lemur-2 constellation and processed. Within the Earthnet Data Assessment Project (EDAP+), SPIRE GNSS-R products have been analyzed and compared with reference satellites and ground measurements. The products analyzed are the SPIRE ocean products, Surface Wind Speed and Mean Square Slope (MSS), and Soil Moisture. The analysis has been conducted following the EDAP+ guidelines and comprises a Maturity Matrix assessment of the product documentation and an intercomparison exercise on the measurements. In this work we present the procedures followed and the results of the assessment.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: The impact of assimilating GNSS Radio Occultation data on the sub-seasonal forecasts

Authors: Katrin Lonitz, Dr Sean Healy, Frederic Vitart
Affiliations: ECMWF
Sub-seasonal forecasting is a difficult time range for weather forecasting because it is often considered too long a timescale for the atmosphere to retain enough memory of its initial conditions, and too short for the boundary conditions, like ocean, sea-ice or land, to vary enough to provide predictability beyond persistence. It is often assumed that the main sources of sub-seasonal predictability come from ocean or land variability. So far, only a few studies have assessed the impact of atmospheric observing systems on sub-seasonal forecasts using data denial experiments (OSEs). This lack of atmospheric OSEs represents an important gap in our understanding of sub-seasonal forecasting performance. There is a clear need to assess the impact of the current atmospheric observing system on sub-seasonal forecasts. This would help to better understand which observing systems have the largest impacts on sub-seasonal prediction and, as a consequence, help provide guidance on the implementation of future observing systems. The value of assimilating GNSS Radio Occultation (RO) data in medium-range Numerical Weather Prediction (NWP) is now well established in many operational systems. The present study investigates whether GNSS-RO observations also have a measurable impact at the sub-seasonal forecast range. The impact was measured by running two large sets of 32-day ensemble re-forecasts over the extended winter periods from 2020 to 2023, initialised from analyses with and without GNSS-RO assimilated. Results indicate a statistically significant improvement in the reforecast skill scores up to week 4 in the stratosphere, particularly over the Tropics. The impact in the troposphere is generally negligible.
However, the amplitude of the Madden Julian Oscillation (MJO) is significantly stronger during the first two weeks when the reforecasts are initialised from the analysis with GNSS-RO assimilated, suggesting a potential link between MJO prediction and the initialisation of the stratosphere. Given these encouraging results for GNSS-RO, the need for new sub-seasonal impact experiments with other observing systems is suggested.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Exploring different microphysics assumptions with Polarimetric Radio Occultations

Authors: Antía Paz, Ramon Padullés, Estel Cardellach
Affiliations: Institute of Space Sciences (ICE-CSIC), Institute of Space Studies of Catalonia (IEEC)
The Polarimetric Radio Occultation (PRO) technique consists of tracking signals transmitted by GPS satellites and captured by Low Earth Orbit (LEO) satellites as they rise or set behind the Earth's limb. This approach extends the capabilities of the traditional Radio Occultation (RO) method by not only measuring vertical profiles of thermodynamic variables but also incorporating polarimetric effects. Unlike standard RO, PRO employs two orthogonal linear polarizations, horizontal (H) and vertical (V), for its receiving antennas, enabling relevant insights into atmospheric conditions. Since its deployment aboard the PAZ satellite in 2018, the GNSS-PRO concept has been successfully demonstrated. More recently, in 2023, it has been implemented aboard three of Spire Global's commercial CubeSats. The polarimetric capability of PRO allows the retrieval of vertical profiles of differential phase shift (ΔΦ), the difference in phase delay between the H and V polarizations. Heavy precipitation events, characterized by oblate spheroid-like hydrometeors, induce a positive differential phase shift as the PRO signals traverse them. Consequently, this technique provides unique insight into the microphysical properties of these precipitation events. The primary hypothesis that PRO onboard PAZ is sensitive to oblate raindrops has been conclusively validated. Furthermore, it has been unexpectedly demonstrated that PRO is also sensitive to frozen hydrometeors. The technique's performance has been corroborated through comparisons with two-dimensional data such as the IMERG-GPM products and three-dimensional data from the NEXRAD weather radars. Ongoing analyses are directed toward understanding the sensitivity of PRO to various microphysical parameterizations derived from the Weather Research and Forecasting (WRF) model and particle habits modeled using the Atmospheric Radiative Transfer Simulator (ARTS).
The variation of the model’s microphysics parameterizations allows for the study of the PRO technique’s sensitivity based on different assumptions about hydrometeors. Changes in these parameterizations impact total precipitation, vertical structure of hydrometeors, cloud properties, energy budget, spatial structure, among others. The validation and sensitivity study of the PRO technique will contribute to an enhanced understanding of the observable obtained and will offer insights into the phenomena characterizing intense precipitation situations.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Grazing-Angle Ionospheric Delay on GNSS-R: Findings from the ESA PRETTY Mission Observations.

Authors: Mario Moreno, Maximilian Semmling, Florian Zus, Georges Stienne, Andreas Dielacher, Mainul Hoque, Jens Wickert, Hossein Nahavandchi, Milad Asgarimehr, Estel Cardellach, Weiqiang Li
Affiliations: German Aerospace Center (DLR), Deutsches GeoForschungsZentrum (GFZ), Université Littoral Côte d’Opale (ULCO), Technische Universität Berlin (TUB), Beyond Gravity Austria GmbH (BGA), Technische Universität Graz (TUG), Norwegian University of Science and Technology (NTNU), Institute of Space Sciences (ICE-CSIC), Institute of Space Studies of Catalonia (IEEC)
Space weather can affect the operation of both spaceborne and ground-based systems, impacting daily human activities. The ionosphere, an ionized layer of Earth's atmosphere extending from approximately 50 to over 1,000 kilometers in altitude, experiences perturbations in Total Electron Content (TEC) due to variations in space weather conditions. Consequently, TEC serves as a parameter for monitoring potential ionospheric effects of space weather. TEC represents the total number of electrons present along a path between a Global Navigation Satellite System (GNSS) transmitter and a receiver, inducing a delay in the transmitted signal. Although GNSS-based infrastructure for ionospheric monitoring is well-developed, coverage gaps remain in remote areas and over oceans. GNSS Reflectometry (GNSS-R) has emerged as an important technique for atmospheric sounding, providing reliable information to complement data where conventional measurements are unavailable. This study aims to estimate the ionospheric delay using observations from the single-frequency ESA Passive REflecTomeTry and dosimetrY (PRETTY) mission. PRETTY is a pioneering GNSS-R satellite operating on the L5/E5 frequency, primarily aimed at altimetry and sea ice detection applications at very low elevations. Neutral atmospheric corrections on code delay observations are applied using a ray-tracing tool that utilizes data from the ERA5 reanalysis model, allowing isolation of the ionospheric delay component. The estimated relative ionospheric delay (reflected with respect to the direct signal) from six events in the North Pole region shows close alignment when compared with the Neustrelitz Electron Density Model (NEDM2020), the NeQuick model, and the International Reference Ionosphere (IRI) model, with relative variances ranging from 0.5% to 18% during days with high solar activity (F10.7 = 224).
This indicates that the estimation accurately accounts for the first-order ionospheric delay from GNSS-R code data, which is proportional to the Total Electron Content. The relative ionospheric delay reaches its maximum (negative) value at approximately 3° elevation (at the specular point) due to the higher contribution of the delay from the direct signal. At elevations around 7.2°, the contribution from the reflected signal counteracts that of the direct signal, resulting in a cancellation point. This point can be associated with the peak electron density height, providing insights into the vertical structure of the ionosphere.
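The first-order ionospheric delay discussed above is, by the standard first-order expansion of the ionospheric refractive index, proportional to TEC and inversely proportional to the squared carrier frequency. A minimal sketch for the L5/E5 band used by PRETTY (this is the textbook first-order term, not the mission's actual processing chain):

```python
def iono_delay_m(tec_tecu, freq_hz):
    """First-order ionospheric group delay in metres:
    d = 40.3 * TEC / f**2, with TEC in electrons/m^2
    (1 TECU = 1e16 electrons/m^2)."""
    return 40.3 * (tec_tecu * 1e16) / freq_hz ** 2

F_L5 = 1176.45e6  # GPS L5 / Galileo E5a carrier frequency, Hz

# 1 TECU along the path delays an L5 code observation by roughly 0.29 m.
delay = iono_delay_m(1.0, F_L5)
```

The relative delay estimated in the study would then be the difference between the delays accumulated along the reflected and direct signal paths.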

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A.09.04 - POSTER - Glaciers - the other pole

Glaciers are distributed around the world in mountainous areas from the Tropics to the mid-latitudes and up to the polar regions, and number approximately 250,000. Glaciers are currently the largest contributors to sea level rise and have direct impacts on run-off and water availability for a large proportion of the global population.

This session is aimed at reporting on the latest research using EO and in situ observations for understanding and quantifying change in glacier presence, dynamics and behaviour, including responses to changes in climate, both long term (since the Little Ice Age) and in the recent satellite period. EO observations of glaciers come from a large variety of sources (SAR, altimetry, gravimetry, optical) and are used to derive estimates of ice velocity, surface mass balance, area, extent and dynamics of both accumulation and ablation, characteristics such as surging, glacier failure, and downwasting, as well as associated observations of snow pack development and duration, lake formation, glacier lake outburst floods (GLOFs) and slope stability.

Presentations will be sought covering all aspects of glacier observations, in particular efforts to derive consistent global databases, e.g. GlaMBIE, ice velocity and area (Randolph Glacier Inventory), as well as variation in run-off and water availability, and interfaces between these observations and glacier modelling to forecast possible future glacier changes and their impact on hydrology and sea-level rise.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Recent modification of Miage Glacier: using EO to monitor the evolution of the Alpine glaciers in the context of Climate Change

Authors: Francesco Parizia, Walter Alberto, Marco Giardino, Enrico Borgogno Mondino, Luigi Perotti
Affiliations: University of Rome Sapienza, Department of Civil, Construction and Environmental Engineering (DICEA), Arpa Piemonte, University of Turin, Department of Earth Science, University of Turin, Department of Agriculture, Forest and Food Sciences
Miage Glacier is the third-largest glacier in the Italian Alps in terms of areal extension. It is situated on the southern side of the Mont Blanc massif (Val Veny, Italy). Furthermore, Miage Glacier is one of the largest debris-covered glaciers in the Alps. The debris cover, a layer of rock that blankets its surface, plays a crucial role in its mass balance and overall behavior. While debris can insulate the glacier from solar radiation, reducing melt rates, it also alters the glacier's albedo, making it more susceptible to absorbing heat. Miage Glacier has experienced significant changes in recent decades, including accelerated retreat, thinning and local instability. These changes are driven by combinations of factors, including rising air temperatures, reduced snowfall, and altered precipitation patterns. In recent years, the Italian Glaciological Committee (CGI) has carried out continuous annual studies on the Miage Glacier using different techniques. The reconstruction of the glacier's evolution, first with historical aerial photographs and then with more modern technologies such as satellite images, terrestrial laser scanning and digital photogrammetry (from drone or helicopter), has allowed us to understand the glacier's evolution. The Miage Glacier is a perfect natural laboratory in which to observe all kinds of changes in morphology and the dynamic glacier response to Climate Change. In this framework we can observe consequences connected with the lowering of the glacial body, such as moraine instability and the creation of large numbers of supraglacial lakes, as well as risky events such as Glacial Lake Outburst Floods (GLOFs). Technologies derived from photogrammetry (3D measurements) have made it possible in recent years to quantify the glacier volume loss at about 100 billion liters of fresh water over the period 2008-2022; by comparison, the detectable volume loss over the period 1958-2008 is about 85 billion liters.
In addition, proximity surveys have also made it possible to monitor risky events such as progressive moraine instability and GLOF events, like the one that occurred on July 11, 2022, with a sudden emptying of about 400,000 m³ of water from Miage Lake. The use of satellite data also fits perfectly within the context of glacial monitoring. Using data such as those provided by Sentinel-2, which are completely free, we can continuously monitor surface changes. Through the use of spectral indices, we can also monitor the changes occurring on the surface of the glacial body. An example of this is the formation or emptying of lakes, which emphasizes how the processes of glacier evolution have changed in recent years. Monitoring glacial dynamics in the Alpine environment is key, especially in the current context of Climate Change. In particular, understanding the glacier's response to the current climatic emergency allows a more effective relationship with the environment. Monitoring allows for greater awareness of the natural hazards that may result and better management of the water resource in the communities downstream of the glacier.
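The spectral-index monitoring of supraglacial lakes mentioned above is often done with a water index. A minimal sketch using the Normalised Difference Water Index on Sentinel-2 green (B3) and near-infrared (B8) reflectances; the band choice and zero threshold are common conventions assumed here, not taken from the authors' workflow:

```python
def ndwi(green, nir):
    """Normalised Difference Water Index from green and NIR reflectance;
    water tends to be positive, bare rock and debris negative."""
    return (green - nir) / (green + nir)

def water_mask(green_band, nir_band, threshold=0.0):
    """Flag likely water pixels (NDWI above threshold) on 2-D
    reflectance grids given as nested lists."""
    return [[ndwi(g, n) > threshold for g, n in zip(gr, nr)]
            for gr, nr in zip(green_band, nir_band)]
```

Differencing such masks between two Sentinel-2 acquisitions would reveal lake formation or emptying events of the kind described for Miage Lake.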

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Remote Sensing Data Downscaling for High Mountain Glaciers

Authors: Mariia Usoltseva, Prof. Dr. Roland Pail, Dr. Christoph Mayer, Dr. Martin Rückamp
Affiliations: Technical University of Munich, Bavarian Academy of Sciences
Glaciers are crucial components of the Earth's climate system and serve as indicators of climate change. Their substantial mass loss due to global warming significantly contributes to sea-level rise and impacts regional hydrology, downstream ecosystems and settlements. Despite considerable advancements in observational and modelling techniques, accurately quantifying glacier responses to climate change and predicting their future behaviour remain complex challenges, particularly in regions characterized by rapidly changing glaciers and complex topography. One of the key limitations in this field remains the availability of high-resolution regional datasets. In this study, we investigate the application of remote sensing data downscaling techniques to improve the spatial and temporal resolution of glacier mass balance estimates. We focus on the integration of relatively high-resolution surface elevation changes derived from satellite altimetry with coarse-resolution mass changes inferred from satellite gravimetry data to localize mass changes. The study focuses mainly on the glaciers of Patagonia, a region characterized by rapid glacier retreat and complex climatic influences that serves as an ideal case study for integrating multiple satellite datasets and regional models. This approach aims to improve local assessments and provide a transferable framework for applying remote sensing downscaling in other regions where observational data are sparse. The findings contribute to advancing the use of satellite remote sensing for cryospheric studies and underscore the importance of high-resolution datasets in tracking and predicting glacial responses to climate change. Preliminary results highlight the potential of enhanced data integration techniques to resolve sub-regional mass changes, offering insights into glacier-climate interactions in Patagonia. The potential outcomes of this work aim to benefit the field of glacial modelling.
The development of a downscaled glacial mass balance dataset, tailored for regional glacial systems or even individual glaciers, holds significant promise for model forcing and data assimilation to improve the estimates of future glacial melt and hydrological processes.
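One simple way to picture the integration described above is to redistribute a coarse gravimetric mass change over finer cells in proportion to each cell's altimetry-derived volume change. This is an illustrative sketch of the downscaling idea under that proportionality assumption, not the study's actual method:

```python
def downscale_mass_change(total_mass_gt, dh_fine, area_fine_km2):
    """Distribute a coarse (e.g. gravimetry-derived) total mass change
    over fine cells, weighting each cell by its altimetry-derived
    volume change dh * area."""
    weights = [dh * a for dh, a in zip(dh_fine, area_fine_km2)]
    total_w = sum(weights)
    return [total_mass_gt * w / total_w for w in weights]

# Toy example: -10 Gt spread over three cells with different thinning
# rates (m) and areas (km^2); results sum back to the coarse total.
shares = downscale_mass_change(-10.0, [-2.0, -1.0, -1.0], [1.0, 1.0, 2.0])
```

In practice the study additionally has to handle density assumptions and error covariances, but the weighting principle is the same: the coarse signal constrains the total, the fine field constrains the spatial pattern.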

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Climatic and morphological factors controlling the development of glacial lakes in High Mountain Asia

Authors: Sheharyar Ahmad, Dr. Giacomo Traversa, Dr. Nicolas Guyennon, Dr. Franco Salerno, Mr. Luca
Affiliations: Ca' Foscari University of Venice
Glaciers in High Mountain Asia (HMA) play a crucial role in modulating the release of freshwater into rivers and supporting ecosystems. However, glacier changes not only impact the water supply for downstream areas, but also alter the frequency and intensity of glacier-related hazards, such as glacial lake outburst floods (GLOFs). An increasing frequency and risk of GLOFs is threatening the Asian population. In this context, glacial lake inventories benefit disaster risk assessment and contribute to predicting glacier–lake interactions under climate change. Studies of glacial lake inventories using satellite observations have concentrated heavily on the Tibetan Plateau. However, a recent glacial lake mapping is still absent for the overall HMA, despite the recent availability of Sentinel-2 imagery with a resolution of 10 m. Here we present a glacial lake inventory for the entire HMA region based on more than 1,300 Sentinel-2 images collected during 2022. A semi-automated lake mapping method has been developed and validated in order to assess and reduce the uncertainty. This study aims to present: (1) an up-to-date glacial lake inventory using Sentinel-2 images for the overall HMA; (2) the rigorous validation methodology adopted to check and reduce the uncertainty; (3) the morphological factors, derived from the Randolph Glacier Inventory; and (4) the climatic parameters, considering reanalysis products. Overall, this work updates the current knowledge on the distribution of glacial lakes and on the factors responsible for their development in High Mountain Asia.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Measuring Glacier Elevation Change in the Karakoram using TanDEM-X InSAR Data

Authors: Shiyi Li, Dr. Philippe Bernhard, Prof. Dr. Irena Hajnsek
Affiliations: Institute of Environmental Engineering, ETH Zurich, Microwave and Radar Institute, German Aerospace Center, Gamma Remote Sensing AG
Accurately measuring glacier elevation change is essential for understanding glacier mass balance and its links to climate change, water resources, and sea-level rise. The Karakoram region, home to over 20,000 km² of glaciers, is the most extensively glaciated area outside the polar regions and plays a crucial role in regional hydrology and global sea-level dynamics. Unlike many other glaciated regions, the Karakoram exhibits anomalous behavior, with glaciers showing stable or even positive mass balance in recent years, a phenomenon often referred to as the "Karakoram anomaly." This unique behavior underscores the need for high-quality elevation-change measurements to better understand the region's glacier dynamics and their implications for water resources and climate systems. In this work, we present glacier elevation changes in the Karakoram measured using Digital Elevation Models (DEMs) generated from TanDEM-X data. Publicly available global DEMs often suffer from temporal ambiguities due to mosaicking and post-processing, which can introduce errors in glacier studies. To address this, we generated DEMs directly from the raw TanDEM-X CoSSC data using the InSAR technique. This approach ensured high temporal precision by preserving the acquisition time of each DEM. We used TanDEM-X data from the global missions conducted during 2011–2014 and 2017–2020. By calculating elevation differences between these periods, we produced high-resolution, time-sensitive elevation change measurements for the past decade. However, maintaining time awareness in DEMs posed significant challenges in data coverage and uncertainty control, particularly in the Karakoram's complex mountainous terrain. To balance time sensitivity with data coverage, we carefully selected single-season DEMs for mosaicking to minimize seasonal bias and cross-year uncertainty.
We further developed a Gaussian Process Regression (GPR)-based void-filling algorithm to address missing values in the seasonal mosaic of differenced DEMs (dDEMs). The uncertainties in the derived dDEMs were rigorously assessed, accounting for heteroscedasticity and spatial correlations, before converting height changes into mass balance. The generated dDEM covered 1,763 glaciers in the Karakoram, spanning 14,614.40 km², equivalent to 67% of the total glaciated area. The mean mass balance for the covered glaciers is -0.035 ± 0.15 m w.e. a⁻¹, and elevation changes exhibited strong spatial variability among individual glaciers. High-resolution (10 m) dDEM maps revealed detailed local glacier dynamics, including kinematic waves of surge-type glaciers and terminus advances or retreats. This study provides continued observations of glacier elevation changes over the Karakoram during the past decade (2011–2019). The processing strategy ensures the time sensitivity of elevation change measurements and enables robust evaluation of regional glacier volume and mass changes. This comprehensive dataset contributes to a deeper understanding of regional glacier volume and mass changes and their contributions to sea-level rise. References: [1] E. Berthier and F. Brun, "Karakoram geodetic glacier mass balances between 2008 and 2016: persistence of the anomaly and influence of a large rock avalanche on Siachen Glacier," Journal of Glaciology, vol. 65, no. 251, pp. 494–507, Jun. 2019, doi: 10.1017/jog.2019.32. [2] G. Krieger et al., "TanDEM-X: A radar interferometer with two formation-flying satellites," Acta Astronautica, vol. 89, pp. 83–98, Aug. 2013, doi: 10.1016/j.actaastro.2013.03.008. [3] S. Leinss and P. Bernhard, "TanDEM-X: Deriving InSAR Height Changes and Velocity Dynamics of Great Aletsch Glacier," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 4798–4815, 2021, doi: 10.1109/JSTARS.2021.3078084.
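The dDEM differencing and void handling described above can be caricatured in a few lines. Here voids are filled with a local neighbour mean as a deliberately simple stand-in for the authors' Gaussian Process Regression approach; grids are nested lists with None marking voids:

```python
def difference_dems(dem_t2, dem_t1):
    """Per-cell elevation change dDEM = DEM(t2) - DEM(t1);
    a void in either epoch propagates as None."""
    return [[None if (a is None or b is None) else a - b
             for a, b in zip(r2, r1)]
            for r2, r1 in zip(dem_t2, dem_t1)]

def fill_voids_neighbour_mean(ddem):
    """Fill void cells with the mean of their valid 8-neighbours
    (a simple stand-in for GPR-based void filling)."""
    rows, cols = len(ddem), len(ddem[0])
    out = [row[:] for row in ddem]
    for i in range(rows):
        for j in range(cols):
            if ddem[i][j] is None:
                nbrs = [ddem[y][x]
                        for y in range(max(0, i - 1), min(rows, i + 2))
                        for x in range(max(0, j - 1), min(cols, j + 2))
                        if (y, x) != (i, j) and ddem[y][x] is not None]
                if nbrs:
                    out[i][j] = sum(nbrs) / len(nbrs)
    return out
```

Converting the filled height changes to mass balance then requires a density assumption (geodetic studies commonly assume around 850 kg/m³; the value used by the authors is not stated in the abstract), plus the uncertainty propagation the abstract describes.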
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Combining Fully Focused and Swath Processing for Glacier Applications

Authors: Charlie McKeown, Albert Garcia-Mondéjar, Ferran Gibert, Noel Gourmelen, Tristan Goss, Sophie Dubber, Mal McMillan, Michele Scagliola, Paolo Cipollini
Affiliations: isardSAT UK, isardSAT, University of Edinburgh, Earthwave, University of Lancaster, European Space Agency
High-PRF altimeters transmit pulses at a high pulse repetition frequency, making the received echoes suitable for coherent processing on ground. Conventional delay-Doppler processing (DDP, commonly called SAR or High Resolution) coherently integrates echoes on a burst-by-burst basis to provide single-look waveforms referred to a specific ground location, which, after being correctly aligned (compensating for the slant-range migration, among other effects), can be incoherently averaged. This increases performance in terms of speckle reduction and along-track resolution compared with the traditional Low Resolution Mode, and in turn in terms of geophysical retrieval. Fully Focused delay-Doppler processing (FF-DDP, also known as Fully Focused SAR) goes one step further and coherently integrates the echoes over a time longer than a burst to achieve an even higher along-track resolution with improved speckle reduction with respect to DDP. Swath-mode processing has been used to monitor the elevation of areas with complex topography, such as ice sheet margins, ice caps and mountain glaciers, improving upon the resolution and coverage of conventional radar altimetry. Swath mode relies on an accurate angle of arrival of the measured echo; this is obtained from the SAR Interferometric mode of CryoSat-2 and CRISTAL, together with post-processing strategies that resolve the ambiguous nature of the phase measurement. The Open Burst (or interleaved) transmission mode to be implemented in the Sentinel-6 and Copernicus polaR Ice and Snow Topography Altimeter (CRISTAL) missions makes them more suitable for FF-DDP processing thanks to the uniform along-track sampling of the scene. In the conventional Closed Burst mode (as in CryoSat-2), however, replicas induced by the non-uniform sampling of the Doppler spectrum mix with the main echo and, in most cases, cannot be filtered out.
The CRISTAL mission will include Open Burst and Interferometric capabilities. It will be the first altimeter able to combine both methodologies to increase both the along- and across-track resolutions, improving on the current performance of CryoSat-2 over small glaciers that cannot be observed properly. We present the results of this assessment and show the impact of the combined Fully Focused and Swath solution within the CLEV2ER Land Ice & Inland Water project. We will show the improvement in performance over complex terrain when using FF-DDP processed data, compared to conventional DDP data, with Swath processing.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Mapping annual summer glacier fronts and a proxy metric of calving intensities with Sentinel-1 Extra Wide Swath mode.

Authors: Jörg Haarpaintner, Manon Tranchand-Besset, Dr Heidi Hindberg, Dr Valentin Pillet
Affiliations: NORCE Norwegian Research Centre, i-Sea
The retreat of melting glaciers and increased calving activity are dramatic evidence of climate change and ice mass loss, and among the biggest contributors to sea level rise. The front lines of marine-terminating glaciers can be highly variable in time and reflect a balance between glacier flow, i.e. surging, and calving, i.e. the breaking off of ice from the glacier’s terminus, which floats away as icebergs or growlers. This is especially the case for the sea-ice-free summer months July to September. The icebergs and growlers then directly influence the marine environment, for example by providing habitat for marine fauna, introducing fresh water into the water column, and transporting sediments. Optical satellite sensors such as Sentinel-2 provide accurate glacier front line delimitation, but the monitoring frequency is limited by persistent cloud cover in the Arctic, allowing mainly snapshots of the glacier front under cloud-free conditions. Cloud-penetrating synthetic aperture radar (SAR) data from the two Sentinel-1 (S1) A & B satellites of the European Copernicus Programme, however, provide consistent time series of observations since 2015 for statistical analysis. Over the Svalbard archipelago, the S1 acquisition plan is dominated by the extra-wide swath (EW) mode, acquiring observations at a medium resolution of 25 m on a quasi-daily basis in HH/HV dual polarization. The EW mode is used intensively for operational sea-ice monitoring, but is otherwise often neglected for other applications. Only two of the S1 paths over Svalbard acquire in the interferometric wide-swath 10 m high-resolution (IWH) mode, in different dual-polarization set-ups (VV/VH and HH/HV, respectively), each providing only one acquisition every 12 days per satellite, i.e. a maximum of five acquisitions per month when both S1 A & B were operational from 2017 to 2021.
In this presentation, instead of providing a snapshot at a specific time, the whole time series of daily Sentinel-1 EW mode acquisitions over Kongsfjorden in north-west Svalbard is analyzed to provide statistically defined summer glacier fronts for Kronebreen and other glaciers for the years 2015 to 2024. The method is based on Haarpaintner and Davids (2021), which was developed to map the intertidal zone in Norway into classes of atmospheric exposure by calculating backscatter percentile mosaics of dense S1 IWH time series. The summer glacier fronts are extracted by thresholding the 95th backscatter percentile mosaic from the daily sea-ice-free summer-month acquisitions, thereby defining the glacier front as the line where the glacier prevails more than 95% of the time during summer. These glacier front lines are then compared to glacier fronts extracted from Sentinel-2 as well as to those from a statistical analysis of the fewer, higher-resolution S1 IWH acquisitions. The comparison reveals a high variability of the glacier front position during summer of several hundred meters. In addition to defining the glacier front, S1 SAR also detects floating icebergs and growlers in the waters in front of the glaciers. Lower-percentile backscatter mosaics are then used to define regions in the Kongsfjorden waters where icebergs and growlers are present 10%-25%, 25%-50%, 50%-75%, and 75%-95% of the time during summer. Although the distribution of icebergs and growlers in the fjord depends highly on surface winds and currents, this approach still provides a proxy metric of summer calving intensity. In the outlook, we will provide ideas for future research on how to better assess, quantify and validate the distribution of icebergs and growlers with regard to their concentrations in the fjord.
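The percentile-mosaic idea can be sketched in a few lines: stack the dense SAR backscatter time series, take the per-pixel 95th percentile, and threshold it to separate the persistently bright glacier from dark open water. The -15 dB threshold below is a placeholder for illustration, not a value from the abstract:

```python
import numpy as np

def glacier_mask_from_stack(stack_db, percentile=95, threshold_db=-15.0):
    """stack_db: (time, rows, cols) backscatter in dB, NaN where no data."""
    p = np.nanpercentile(stack_db, percentile, axis=0)
    return p > threshold_db   # True where the glacier prevails >95% of the time

# toy series over 20 dates: one pixel always bright (glacier),
# one pixel dark in all but one acquisition (water)
stack = np.array([[[-8.0, -22.0]]] * 19 + [[[-8.0, -10.0]]])
mask = glacier_mask_from_stack(stack)
```

Because the percentile is computed per pixel over the whole season, a single bright outlier (e.g. a passing iceberg) does not flip a water pixel to glacier.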
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Unlocking the Potential of Airborne Hyperspectral Thermal Infrared Remote Sensing for Monitoring Debris-Covered Glacier Dynamics

Authors: Gabriele Bramati, William Johnson, Glynn Hulley, Bjorn Eng, Gerardo Rivera, Robert Freepartner, Simon Hook, Kathrin Naegeli
Affiliations: TIRLab - RSL - Department of Geography, University of Zurich, NASA Jet Propulsion Laboratory - California Institute of Technology
Debris-covered glaciers (DCGs) are present in every mountain range on Earth. Debris layers on glaciers affect melt, morphology, evolution and overall dynamics. While a thin debris layer enhances melt, thicker debris insulates the underlying ice. However, the three-dimensional spatial and temporal distribution of debris layers is still poorly understood. Innovative datasets are needed in order to map debris layer characteristics and to disentangle the interplay between debris, climate, and glacier response over time. Among the available Earth Observation (EO) data for Alpine glaciers, thermal infrared (TIR) datasets have been underexploited due to the lack of suitable resolution and general availability, despite their great potential. TIR observations allow estimation of land surface temperature (LST), which can be used to estimate the glacier debris energy balance as well as the extent and thickness of debris. In addition, debris lithologies can be distinguished using multi- or hyperspectral TIR data. In this contribution, we present a unique remote sensing dataset with unprecedented spatial and spectral resolution over both a DCG and a clean-ice glacier. We surveyed two alpine glaciers in the Swiss Alps, one debris-covered (Zmuttgletscher) and one clean-ice (Findelgletscher), with the Hyperspectral Thermal Emission Spectrometer (HyTES) developed at NASA-JPL, in addition to various in-situ measurements (thermal infrared point measurements, debris thickness excavations, meteorological observations, ablation measurements etc.). HyTES is an airborne imaging spectrometer with 256 bands in the 7.5-12 µm wavelength range at a ground sample distance of about 3 m. The acquired dataset allows for testing algorithms and processing schemes in view of future TIR satellite missions (such as TRISHNA, SBG, LSTM), which will open new frontiers for global glacier studies. We calibrated the DCG survey using the clean-ice glacier survey.
We then present validation results comparing HyTES data with in situ temperatures on the DCG, and discuss the distribution of LST with regard to different debris characteristics. In addition, a lithological map has been produced using airborne and laboratory-derived emissivity spectra, in combination with chemical analysis for mineralogical composition and silica weight percent. Finally, we discuss applications such as glacier debris thickness estimation and distributed energy balance modelling for sub-debris melt estimation, and address scaling effects of multi-source remote sensing datasets.
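The role of emissivity in LST retrieval can be illustrated with a first-order greybody argument: inverting the Stefan-Boltzmann law, a brightness temperature derived assuming unit emissivity underestimates the true surface temperature by a factor of emissivity^(1/4). Hyperspectral temperature-emissivity separation as used with HyTES is far more sophisticated; this sketch only conveys the broadband intuition, and all numbers are hypothetical:

```python
def lst_from_brightness(t_b_kelvin, emissivity):
    """Greybody correction: invert sigma*T_b^4 = emissivity*sigma*T_s^4."""
    return t_b_kelvin / emissivity ** 0.25

# rocky debris with emissivity ~0.95, apparent brightness temperature 290 K
t_s = lst_from_brightness(290.0, 0.95)   # roughly 3-4 K warmer than t_b
```

Even a 5% emissivity error thus biases LST by several kelvin, which is why emissivity spectra are retrieved alongside temperature.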
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Towards the Regional Snowline Estimates at Sub-Seasonal Scale in Central Asia

Authors: Dilara Kim, Mattia Callegari, Enrico Mattea, Ruslan Kenzhebaev, Erlan Azisov, Tomas Saks, Martina Barandun
Affiliations: Department of Geosciences, University Of Fribourg, Institute of Earth Observation, EURAC research, Central-Asian Institute for Applied Geosciences (CAIAG)
Central Asia's glaciers are critical to the region's freshwater supply during the dry summer months, supporting the region's agriculture and hydropower sectors. It is therefore imperative to better understand the glacier response to ongoing and future climate change and the potential impact on regional water resources. The Central Asian mountain ranges, Pamir and Tien Shan, encompass over 25,000 glaciers, yet glaciological measurements are scarce, especially after the collapse of the Soviet Union. Existing gaps and the sparse spatial coverage of glaciological measurement time series restrict regional assessment of seasonal and annual glacier changes. A promising approach to infer annual glacier mass balance is based on the position of the end-of-summer snowline, which marks the transition between snow and bare-ice surfaces; at the end of the melting season it approximates the equilibrium line altitude (in the absence of superimposed ice). Snow and glacier ice have distinct spectral characteristics and are thus suitable for remote mapping. We designed a novel method to retrieve snowlines from the MODIS surface reflectance product, which covers the period since the beginning of the 21st century. To bridge the coarse spatial resolution of MODIS, we used a statistical relationship between MODIS reflectance and snowlines derived from the high-resolution data of Sentinel-2 and Sentinel-1, available since 2015 and 2016 respectively. The resulting time series provides spatially and temporally highly resolved snowlines. The method was tested on selected glaciers in Central Asia; sub-seasonal snowline evolution was compared to the modelled daily melt contribution of the glaciers. We further demonstrate the potential of the retrieved snowlines to better constrain surface mass balance models. Our approach is suitable for larger-scale assessments, thanks to an implementation based on the Google Earth Engine cloud computing service and the use of well-established processing algorithms.
In our contribution we present regionally applied glacier snowline estimates for Central Asia and provide insight on the seasonal snowline dynamics as a proxy for glacier mass balance of the last 25 years. Our study reveals the potential of snowline monitoring for a better understanding of glacier mass balance changes and sub-seasonal glacier melt contribution to runoff for remote and inaccessible regions.
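As an illustration of the snowline concept used above, a minimal sketch (assuming a binary snow map and a DEM over glacier pixels; this is not the authors' MODIS/Sentinel retrieval) estimates the snowline altitude as the elevation that best separates snow above from bare ice below:

```python
import numpy as np

def snowline_altitude(elev, is_snow):
    """elev, is_snow: 1-D arrays over glacier pixels.

    Returns the candidate elevation minimising the number of pixels
    misclassified by the rule 'snow where elevation >= snowline'."""
    candidates = np.unique(elev)
    errors = [np.sum(is_snow != (elev >= z)) for z in candidates]
    return candidates[int(np.argmin(errors))]

# toy glacier: bare ice below 3300 m, snow above
elev = np.array([3000, 3100, 3200, 3300, 3400, 3500])
snow = np.array([False, False, False, True, True, True])
z_sl = snowline_altitude(elev, snow)
```

Real retrievals must additionally handle patchy snow, clouds and mixed pixels, which is where the statistical MODIS-Sentinel relationship described in the abstract comes in.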
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Recent changes at Jostedalsbreen ice cap revealed by repeat UAV and satellite data

Authors: Benjamin Robson, Dr Harald Zandler, Dr Jakob Abermann, Professor Jonathan Carrivick, Ms Daria Ushakova, Dr Sven Le Moine Bauer, Dr Thomas Scheiber, Daniel Thomas, MSc Alexander Maschler, Dr Gernot Seier, Dr Liss Andreassen, Professor Jacob Yde
Affiliations: University Of Bergen, University of Graz, Western Norway University of Applied Sciences, Julius-Maximilians-Universität Würzburg, University of Leeds, Independent Researcher, The Norwegian Water Resources and Energy Directorate
Jostedalsbreen, the largest ice cap in mainland Europe, covered an area of 458 km² as of 2019, representing approximately 20% of the total glacier-covered area in mainland Norway. The ice cap plays a crucial role in regional hydrology and serves as an important indicator of climate change impacts in the Nordic region. Previous research has shown that the ice cap is experiencing a net mass loss, but these findings are mostly based on analyses over decadal timescales, leaving short-term dynamics less understood. This study aims to address the gap in understanding short-term, high-resolution changes by focusing on a four-year period from 2020 to 2024, utilising data from Unmanned Aerial Vehicles (UAVs), airborne LiDAR, and high-resolution satellite imagery. Our analyses enable us to study recent changes at eight outlet glaciers of the Jostedalsbreen ice cap at a decimetre scale. We examine volumetric surface changes, horizontal glacier flow rates, and surface metrics such as roughness and rugosity. By integrating these high-resolution datasets, we can detect subtle changes in glacier morphology and dynamics that are not apparent in longer-term studies. This approach allows us to compare recent glacier change rates with those observed over longer decadal scales since the mid-20th century, providing more detailed insights into the cryospheric response to recent climatic variations. We further extend our analysis by examining a time series of Sentinel-1 Synthetic Aperture Radar (SAR) images acquired every 12 days between 2020 and 2024. This dataset allows us to study the spatial and temporal distribution of wet and dry snow over the entire ice cap and assess the duration and altitudinal distribution of snowmelt throughout the ablation and accumulation seasons. 
The high temporal frequency of the SAR data enables the monitoring of seasonal transitions and extreme melt events, which are critical for understanding the ice cap's response to short-term climatic fluctuations. However, the resulting time series is complicated by strong backscatter responses from icefalls on several outlet glaciers. We anticipate that integrating surface melt occurrence and frequency data with observed glacier changes will enhance our understanding of the ice cap's dynamics, contributing to more accurate predictions of its future evolution and informing regional water resource management and climate adaptation strategies.
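A standard wet-snow test on SAR time series of the kind described above (after Nagler & Rott, 2000) flags a pixel as wet where its backscatter drops by more than about 3 dB relative to a dry-snow/frozen reference image; whether the authors use this exact criterion is an assumption:

```python
import numpy as np

def wet_snow_mask(sigma0_db, reference_db, threshold_db=-3.0):
    """True where backscatter dropped below the reference by > |threshold|."""
    return (sigma0_db - reference_db) < threshold_db

ref = np.array([-8.0, -9.0, -10.0])        # winter (dry/frozen) reference, dB
summer = np.array([-14.0, -10.0, -16.0])   # melt-season acquisition, dB
mask = wet_snow_mask(summer, ref)
```

Applied per acquisition over a season, such masks yield the melt duration and altitudinal melt distribution mentioned in the abstract.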
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Glacier mapping using Deep Neural networks in the Tropical Andes

Authors: Diego Pacheco Ferrada, Dr Thorsten Seehaus
Affiliations: Friedrich-Alexander Universität Erlangen-Nürnberg
Glaciers in the Tropical Andes have experienced a significant and accelerated decrease over the last decades, mainly driven by climatic variables affected by climate change. Glaciers in the Andean regions not only provide important hydrological services as water reservoirs for downstream communities and economic activities, but also play a fundamental role in sustaining high-altitude environments and cultural beliefs. Despite their importance, only a few studies have addressed mapping and volume change evaluation at regional and multitemporal scales, and most of them have focused on specific areas and/or glaciers in Peru or Bolivia. Furthermore, an increase in debris-covered glacier extent has been observed in similar regions, which imposes new challenges for mapping, especially with conventional thresholding methodologies. Therefore, in this study we aim to generate updated and temporally consistent outlines of the Tropical Andes glaciers by implementing a fully automatic routine supported by machine-learning approaches, suitable for evaluating ice volume change over the last decade in the tropics. For the mapping process, we present binary and multiclass segmentation approaches using state-of-the-art deep learning architectures to map the glacier extent across the Tropical Andes. Here, the Glacier-VisionTransformer-U-Net (GlaViTU) - a hybrid deep learning model combining a segmentation transformer with a convolutional network - was trained for large-scale glacier delineation using the most recent Peruvian glacier inventory from INAIGEM (Instituto Nacional de Investigación en Glaciares y Ecosistemas de Montaña), which is based on data from 2020 and includes the segmentation of debris-free and debris-covered glaciers.
For training, the model was fed with diverse remote sensing data: optical imagery (Sentinel-2), topographic features (elevation and slope from the Copernicus DEM) and synthetic aperture radar (SAR) data (Sentinel-1 backscatter and coherence in ascending and descending orbits). Once trained, the model successfully reproduced the overall glacier extent of the Peruvian Andes, with acceptable uncertainty values. Our results show that the binary segmentation (glacier/no-glacier) achieves the best performance (IoU) compared to the multiclass approaches (no glacier/debris-covered glacier/debris-free glacier). For multiclass segmentation, better debris-cover detection is obtained when applying a multiclass classification after masking the data with the binary segmentation. However, debris-covered areas are challenging for both multiclass approaches and show higher uncertainty in the binary approach. Nonetheless, coherence maps from repeat-pass acquisitions and multiple orbits have been shown to improve the differentiation between debris-covered and debris-free glacier areas, as well as to mitigate the impact of shadowed and layover areas typically found in such mountainous environments. Moreover, even clouds partially occluding the glacier surroundings did not affect the delineation. This robustness is particularly important in regions where cloud-free optical images are difficult to acquire. Our study underscores the importance of combining remote sensing data to improve automated glacier mapping, particularly in areas with steep topography and potentially growing debris-cover extent. These results highlight the potential for multitemporal glacier monitoring across the entire Tropical Andes: they allow us to periodically map the glacier extent over the complete Tropical Andes and to evaluate temporal evolution and volume changes over the last decade in combination with remotely sensed DEMs, such as TanDEM-X acquisitions.
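The performance metric referred to above, Intersection over Union (IoU), can be computed for a binary glacier/no-glacier map as follows; this is a minimal sketch of the metric, not the GlaViTU evaluation code:

```python
import numpy as np

def iou(pred, target):
    """Intersection over Union for two boolean masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0   # empty masks count as a match

pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
true = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
score = iou(pred, true)   # 2 overlapping pixels of 4 in the union -> 0.5
```

For the multiclass case the same measure is typically computed per class and then averaged (mean IoU).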
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Estimating Frontal Ablation at High Temporal Resolution in Svalbard With Sentinel-1 SAR Imagery and a Deep Learning Model

Authors: Dakota Pyles, Nora Gourmelon, Vincent Christlein, Dr Thorsten Seehaus
Affiliations: FAU Erlangen-Nuremberg, Institute of Geography, FAU Erlangen-Nuremberg, Pattern Recognition Lab
Frontal ablation is a key component of tidewater glacier mass loss, yet high temporal resolution estimates remain elusive due to the difficulty of reliably capturing terminus position changes with satellite imagery. Recent developments in the automated delineation of glacier calving fronts using machine learning techniques have opened an opportunity to calculate frontal ablation over fine timescales. By segmenting Sentinel-1 synthetic aperture radar (SAR) image sequences with a deep learning-based terminus segmentation algorithm, we aim to quantify a decade of seasonal and annual frontal ablation from 2015-2024 for ~150 tidewater glaciers in Svalbard – results are expected in spring 2025. To calculate frontal ablation, the workflow pipelines consist of the pre- and post-processing of Sentinel-1 SAR images to extract glacier termini, the creation of regional training data to assist the segmentation algorithm, the application of climate mass balance model outputs, and the generation of monthly ice flux calculations; ice flux estimates primarily leverage Sentinel-1 SAR-derived velocity fields, with Sentinel-2 optically derived velocities resolving glaciers that have poor Sentinel-1 coverage. The resultant frontal ablation information is valuable for glacier models, which may benefit from high-resolution reference data, leading to improved calibrations and parameterizations. Svalbard, an Arctic region characterized by variable glacier and fjord geometries, served as a methodological test site, and we now intend to expand the project scope by applying this method to the Canadian Arctic, Russian Arctic, Greenland periphery, and Alaska, i.e. ~1240 additional marine-terminating glaciers in the Northern Hemisphere. Future project efforts will focus on mass budgeting for all glaciers in the study by integrating frontal changes and climatic mass balance data with geodetic mass balance estimates derived from TanDEM-X.
To identify and evaluate external drivers of glacier change, the frontal ablation and mass balance products will be correlated with modeled and observational atmospheric, oceanic, and sea ice data. Through multivariate statistical analyses between these Earth system datasets and mass budget components, we look to provide an improved understanding of dynamic tidewater glacier processes, their spatio-temporal variability, and the influence of glacier geometry on observed changes across the Arctic.
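The frontal-ablation bookkeeping combining terminus change and ice flux can be sketched as follows: frontal ablation equals the ice discharge through a near-terminus flux gate minus the mass change stored in terminus advance (or released by retreat). All names and numbers here are hypothetical illustrations, not the project's actual pipeline:

```python
RHO_ICE = 917.0  # density of glacier ice, kg m^-3

def frontal_ablation(gate_width, ice_thickness, velocity, dlength_dt):
    """Frontal ablation in Gt a^-1 from flux-gate geometry.

    gate_width, ice_thickness in m; velocity, dlength_dt in m a^-1
    (dlength_dt negative for a retreating terminus)."""
    discharge = gate_width * ice_thickness * velocity * RHO_ICE       # kg a^-1
    terminus_change = gate_width * ice_thickness * dlength_dt * RHO_ICE
    return (discharge - terminus_change) / 1e12                       # Gt a^-1

# a 3 km wide glacier, 150 m thick, flowing 500 m/yr, retreating 100 m/yr
a_f = frontal_ablation(3000.0, 150.0, 500.0, -100.0)
```

Retreat adds to frontal ablation (mass released at the front), while an advancing terminus stores part of the discharge and reduces it.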
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Long-term albedo glaciers variations in Pakistan: a focus on the Hushe Basin

Authors: Blanka Barbagallo, PhD Davide Fugazza, Dr Lorenzo Raimondi, Guglielmina Adele Diolaiuti
Affiliations: Università Degli Studi Di Milano
Glaciers are highly sensitive to climate change and serve as key indicators of its impacts. Among glaciological parameters, albedo plays a crucial role in understanding glacier health and the surface energy balance. In this study, we analyze albedo variations of all glaciers in Pakistan from 2013 to 2023, identifying significant trends and subsequently focusing on the Hushe basin, which exhibited the greatest reduction in albedo during this period (-0.8). The analysis utilizes the Harmonized Landsat Sentinel-2 product (HLSL30v002) to study albedo trends between 2013 and 2023. This product is the result of a set of pre-processing algorithms, including atmospheric correction, cloud and shadow masking, illumination and view angle normalization, and spectral bandpass adjustment; therefore, after applying an additional cloud mask, we were able to compute the broadband albedo values. For the second part of the study, focusing on the Hushe basin, we extend the analysis over a longer period (1984–2024) using Landsat 5 and Landsat 8 Tier 1 imagery. These products have to be corrected before albedo values can be computed, so two different GEE scripts were developed for the study. The Hushe basin, located in the buffer zone of the Central Karakoram National Park, is characterized by complex topography and significant glacier coverage (477.37 km² across 315 glaciers). Despite this, the area has received limited attention in glaciological research. Its elevation profile, with two-thirds of the glacier area between 4700 and 5700 m a.s.l. and smaller fractions above 6000 m (5.79%) and 7000 m (0.14%), provides a unique opportunity to study elevation-dependent albedo variations. Preliminary results indicate that 90% of Pakistan's glacier basins exhibit a slight increase in average albedo values over the last decade. However, an elevation band analysis reveals that higher-altitude glaciers (above 6000–6500 m a.s.l.)
show greater variability and instability, while lower-altitude glaciers display more consistent trends, likely due to their larger number and area. The Hushe basin, with its pronounced elevation variability and complex glacier dynamics, provides an ideal case to further investigate these preliminary findings. The outcomes of this study are expected to enhance our understanding of regional climate dynamics and support the development of strategies to mitigate climate change impacts and sustainably manage natural resources.
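One widely used narrow-to-broadband conversion for computing broadband albedo from Landsat surface reflectance is Liang (2001); the band weights below (blue, red, NIR, SWIR1, SWIR2) are from that paper. Whether the authors used this exact formula in their GEE scripts is an assumption:

```python
def broadband_albedo(blue, red, nir, swir1, swir2):
    """Liang (2001) shortwave broadband albedo from surface reflectance."""
    return (0.356 * blue + 0.130 * red + 0.373 * nir
            + 0.085 * swir1 + 0.072 * swir2 - 0.0018)

# fresh snow reflects strongly in the visible/NIR, weakly in the SWIR
a = broadband_albedo(0.9, 0.85, 0.8, 0.1, 0.05)
```

Applied per pixel to cloud-masked scenes, such a formula yields the albedo maps whose decadal trends are analyzed in the abstract.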
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Comparing Glacier Surface Velocity Methods with Satellite and UAV Imagery - the Example of Austerdalsbreen

Authors: Harald Zandler, Jakob Abermann, Benjamin Aubrey Robson, Alexander Maschler, Thomas Scheiber, Jonathan L. Carrivick, Jacob Clement Yde
Affiliations: Department Of Geography And Regional Science, University of Graz, Department of Earth Science, University of Bergen, Department of Civil Engineering and Environmental Sciences, Western Norway University of Applied Sciences, School of Geography and water@leeds, University of Leeds
Global warming causes profound changes in glacier dynamics, with strong impacts on natural hazards, sea-level rise and river discharge. A key component of these dynamics is glacial surface velocity, and various remote sensing methods exist for its quantitative analysis. At the scale of mountain glaciers, relatively high spatial resolution is required to achieve sufficient accuracy for a detailed understanding of glacier flow dynamics and associated changes. Traditional methods, such as different implementations of cross-correlation techniques, are ideal for slow-moving glaciers (<30 m per year) or low surface deformation between image acquisitions, but are often limited in cases of strong surface changes and large ranges in flow velocities. Additionally, the suitability of remote sensing sensors varies according to their resolution and noise. We therefore compare and evaluate different sensors and methods to determine (sub-seasonal) surface velocities during the one-year period 2023-2024 at Austerdalsbreen, an outlet glacier of the Jostedalsbreen ice cap, Norway, with surface velocities from 5 m to more than 100 m per year. To cover several resolutions, we select different high-resolution platforms (UAV surveys resampled to 0.15 m and 0.6 m, 3 m PlanetScope imagery) and a moderate-resolution product (10 m Sentinel-2 data) for our analysis. We combine these sensors with traditional cross-correlation techniques, feature tracking algorithms (e.g., ORB) and novel, deep-learning based feature matching approaches. We evaluate the derived velocities with manually mapped displacements based on high-resolution orthoimagery (< 0.05 m). Our results indicate limitations of cross-correlation methods in cases of large surface velocity variations with high-resolution data. The moderate-resolution Sentinel-2 sensor showed more robust results for some fast-moving regions, but lower performance in other parts of the glacier.
Novel deep-learning techniques illustrate promising results and, applied to UAV datasets, resulted in accurate surface velocities over most parts of the glacier. In summary, our study demonstrates strengths and limitations of traditional and innovative state-of-the-art methods and sensors, thereby contributing to the derivation of essential glacier metrics with remote sensing approaches in a changing climate.
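The cross-correlation principle referred to above can be reduced to a minimal sketch: slide a template from the first image over a search window in the second and take the offset of the normalized cross-correlation peak as the displacement in pixels (operational feature-tracking codes add subpixel fitting, oversampling and outlier filtering):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

def match_offset(template, search):
    """Row/col offset of the best template match inside the search window."""
    th, tw = template.shape
    sh, sw = search.shape
    scores = np.array([[ncc(template, search[i:i + th, j:j + tw])
                        for j in range(sw - tw + 1)]
                       for i in range(sh - th + 1)])
    i, j = np.unravel_index(np.argmax(scores), scores.shape)
    return i, j

# toy example: a 3x3 feature displaced by (1, 2) pixels between acquisitions
rng = np.random.default_rng(0)
template = rng.random((3, 3))
search = np.zeros((6, 6))
search[1:4, 2:5] = template
off = match_offset(template, search)
```

Dividing the recovered pixel offset by the time separation and multiplying by the pixel size yields the surface velocity.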
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Coupling the MODIS and LANDSAT products to investigate the land surface temperature trends in High Mountain Asia

Authors: Sheharyar Ahmad, Dr. Giacomo Traversa, Dr. Biagio Di Mauro, Dr. Nicolas Guyennon, Dr. Franco Salerno, Mr. Luca
Affiliations: Ca' Foscari University of Venice
Surface temperature is a key parameter of the surface energy budget and influences a range of physical processes within the critical zone; high mountain regions that host glaciers, snow cover, and permafrost are particularly sensitive to increasing temperatures. However, ground-based instrumental monitoring of surface temperature is difficult to implement in remote mountainous areas with steep hillslopes. Alternatively, satellites offer the possibility to measure land surface temperature (LST) at a range of spatial and temporal resolutions. Many LST studies rely on data from the MODIS sensor, but assessment of the reliability of this information in high mountainous regions is limited by the low availability of station-based surface temperature data. This is particularly true in High Mountain Asia, where such data are practically absent. From a methodological point of view, we propose here a coupled use of thermal bands from MODIS and LANDSAT together with ground station data in order to validate the multi-decadal LST trends. In parallel, from a scientific perspective, a cooling trend has been observed at Himalayan high elevations, close to the main glacier masses (https://doi.org/10.1038/s41561-023-01331-y). These recent findings deserve to be further investigated through satellite thermal products, together with the possible implications of LST for permafrost and vegetation evolution under global warming.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Drivers of Proglacial Lake Colour in Iceland

Authors: Natasha Lee, Professor Andrew Shepherd, Dr Emily Hill
Affiliations: The Centre for Polar Observation and Modelling (CPOM), Northumbria University, Newcastle University
Proglacial lakes often form due to the availability of meltwater at a glacier margin. As glaciers retreat due to the effects of climate change, the area of proglacial lakes has increased, with the greatest increase in proglacial lake area and volume currently occurring in the Arctic. This research investigated the relationship between proglacial lake colour and suspended sediment concentration across Iceland. Spatial variation in the colour of proglacial lakes in Iceland was quantified from high-resolution PlanetScope satellite imagery. Suspended sediment concentration was calculated from water samples collected by both autonomous vehicle and near-shore methods from proglacial lakes in September 2024. Investigating the differences between the sediment within these proglacial lakes is expected to provide a clearer understanding of the causes of variation in the colour of proglacial lakes.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: F.04.20 - POSTER - EO in support of the regulation on Deforestation-free products (EUDR, EU 2023/1115)

Faced with mounting global environmental concerns and the urgency of addressing climate change, the EU has introduced the ground-breaking regulation on Deforestation-free products (EUDR, EU 2023/1115) targeting global deforestation. The EUDR ensures that seven key commodities – cattle, cocoa, coffee, palm oil, soy, timber, and rubber – and their derived products like beef, furniture, and chocolate, entering the EU market from January 2026 onwards, are not linked to deforestation after a defined cut-off date (December 2020).
To achieve this goal, the regulation obliges operators to establish robust due diligence systems that guarantee deforestation-free and legal sourcing throughout their supply chains. Verifying compliance with these standards is crucial. The EUDR mandates using the EGNOS/Galileo satellite systems and exploiting the Copernicus Earth Observation (EO) program for this purpose. This involves, among other things, cross-referencing the geographic locations of origin for these commodities and products with data from satellite deforestation monitoring.
By providing precise and detailed information on deforestation linked to commodity expansion, Copernicus and other EO data/products will help to detect fraud and strengthen the implementation of the policy by diverse stakeholders.
This session will delve into the latest scientific advancements in using EO data to support due diligence efforts under the regulation, including global forest and commodities mapping.
Topics of interest include (but are not limited to):

- Classification methods for commodities mapping using EO data;
- World forest cover and land use mapping with EO data;
- Deforestation and GHG/carbon impacts related to commodity expansion;
- Field data collection strategies for EUDR due diligence;
- Practical examples of EO integration in global case studies;
- Machine learning / AI for deforestation detection and change analysis;
- EUDR compliance strategies: Integrating EO data with other datasets;
- Traceability in the Supply Chain: EO Data for Transparency.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: From GEE to CODE-DE: Transforming Deforestation Monitoring for EUDR Compliance and Global Forest Protection

Authors: Fatemé Ghafarian, Dr Melvin Lippe, Dr Margret Köthke
Affiliations: Thünen Institute of Forestry
The European Union Deforestation Regulation (EUDR) (EU 2023/1115) establishes critical requirements to ensure that products placed on the EU market are free from deforestation and forest degradation. This regulation mandates the verification of land-use practices to prevent deforestation-linked products from entering the Union market, aiming to safeguard global forests and align with international climate goals. Effective implementation of the EUDR requires robust monitoring systems capable of delivering reliable risk assessments at multiple levels, supporting national authorities in their inspection and enforcement tasks. The CODED (Continuous Degradation Detection) algorithm, developed to detect deforestation and forest degradation using Earth observation data, has proven highly effective in identifying land-use changes. Originally implemented on Google Earth Engine (GEE), CODED leverages Sentinel-1 and Sentinel-2 data to analyze deforestation patterns over time. However, GEE's server infrastructure and data processing architecture are not compliant with the German Federal Office for Information Security (BSI) and EU legal standards for secure and lawful data storage, making it unsuitable for official applications under the EUDR for the case of Germany. To address these compliance issues, the RiMoDi (Risk-based Monitoring Service for Deforestation) project is transferring the CODED algorithm from GEE to CODE-DE, a German cloud platform designed for secure Earth observation data processing. This migration ensures that the monitoring tools align with the stringent security and operational requirements of the German competent authority responsible for EUDR compliance checks, the Federal Office for Agriculture and Food (BLE). The transfer involves adapting the algorithm to CODE-DE’s infrastructure, configuring virtual machines for secure access, and integrating process chains into government intranets. 
The presented study focuses on the technical implementation challenges and solutions involved in migrating processing chains from Google Earth Engine to CODE-DE. We rely on geolocation data from Cote d’Ivoire to test the migration and implementation. By leveraging CODE-DE’s capabilities, the RiMoDi project establishes a secure and scalable monitoring framework, enabling Germany to enforce the EUDR effectively. This approach not only ensures compliance with EU regulations but also enhances the national capacity for long-term environmental monitoring, contributing to global efforts to combat deforestation and forest degradation. Keywords: EUDR, Deforestation, Google Earth Engine, CODE-DE
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Readiness of Ethiopia's Coffee and Ghana's Cocoa sector for EUDR compliance

Authors: Kalkidan Ayele Mulatu
Affiliations: Alliance Bioversity-CIAT
The European Union Deforestation Regulation (EUDR) marks a significant milestone in global efforts to mitigate deforestation and forest degradation caused by high-demand agricultural commodities. Aligning with key EU policy frameworks such as the European Green Deal and the Farm to Fork Strategy, the EUDR seeks to ensure sustainable production and consumption patterns. However, its stringent requirements pose unique challenges to smallholder farmers (SHFs) in developing countries, particularly those reliant on forest-related commodities like coffee and cocoa. This study examines the implications of the EUDR on SHFs in Ghana and Ethiopia, two major cocoa and coffee producers respectively, and assesses their readiness to meet EUDR traceability and due diligence requirements. By analyzing the transparency and operational demands of the EUDR, the research identifies gaps in technical infrastructure, digital capacity, and national datasets that are critical for compliance. Additionally, it highlights the risk of disproportionately disadvantaging SHFs in least developed countries with limited resources, potentially favoring better-equipped competitors. To address these challenges, the study proposes context-sensitive solutions, including leveraging national platforms, open-source tools, and Earth observation technologies to streamline traceability and reduce costs. Emphasis is placed on building technical capacity and fostering equitable systems to ensure the regulation supports both environmental goals and the livelihoods of SHFs. Ultimately, the study underscores the need for collaborative, inclusive approaches to implement the EUDR effectively while balancing its environmental and socio-economic impacts.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Advancing Commercial EO Solutions for EUDR Compliance: AI-Driven Insights for Deforestation and Degradation Monitoring

Authors: Anna Brand, Anna Seiche, Stefan Kirmaier, Jonas
Affiliations: Remote Sensing Solutions GmbH
One-third of the world’s forests have already been cleared because of agricultural expansion, contributing significantly to environmental degradation, biodiversity loss, and the acceleration of climate change. In response, the European Union introduced the EU Deforestation Regulation (EUDR, EU 2023/1115) as a policy solution, which mandates that certain products and their derivatives entering the EU market, such as cocoa, coffee, palm oil, soy, beef, rubber, and wood, must not be associated with deforestation. The effective operational implementation of such initiatives, however, requires innovative tools: rigorous monitoring and verification systems that provide traceable and independent evidence of deforestation-free supply chains. Earth Observation (EO) data plays a critical role in this process, with satellite missions offering a constant stream of objective data on how forests are changing in near-real-time on a global scale. Our approach addresses the need for practice-oriented monitoring and streamlined data processing by integrating the full time series of Sentinel-2 imagery with advanced artificial intelligence (AI) methodologies and efficient cloud processing capabilities. Combining these strengths, we developed a scalable platform-based solution for companies and regulators to ensure supply chain transparency and compliance. Existing methods often rely on third-party land cover or deforestation datasets to assess compliance for each plot of land where the commodities were produced. This dependency on global layers introduces risks of inaccuracies at local scales and limits timeliness, as such datasets are only available at specific intervals. In contrast, our bitemporal approach systematically compares all image pairs within the complete time series of satellite data at the local scale, enabling the continuous provision of actionable updates.
The system employs convolutional neural networks (CNNs), which are especially suited to recognizing spatial patterns in satellite imagery. This allows for highly accurate detection of deforestation and degradation. The algorithms are trained on manually labeled datasets that are specifically designed to distinguish between natural forest loss and tree cover loss as seen in plantation clearings. This differentiation allows an analysis independent of third-party data and reduces the risk of misclassifications, particularly in plantation-heavy regions, while mitigating potential supply chain disruptions or regulatory fines. Building on this capability, the system also incorporates advanced detection of forest degradation, recognized as an early warning sign of deforestation. It allows stakeholders to identify risks within the supply chain, enabling proactive, data-driven interventions to safeguard the sustainability of sourcing practices. Validated through quantitative and qualitative assessments, including fieldwork in Indonesia and performance metrics analyzed across independent test scenes, the service ensures robust, transparent and reliable insights. By leveraging cloud infrastructure, our system has been integrated into a platform that enables scalable analysis of globally distributed sourcing areas. Featuring a user-friendly dashboard and API, the platform offers an effective solution to monitor compliance with regulatory requirements while optimizing operational efficiency. To demonstrate its functionality, the service will be showcased using a real-world example from the tropics, where challenges such as cloud cover and rapid land-use change complicate monitoring efforts. The system’s output includes compliance reports tailored to meet EUDR requirements, illustrating how businesses and regulators can use the platform to ensure transparency and traceability. This solution represents an important step in the commercialization of EO services.
It not only addresses regulatory compliance but also highlights the broader potential of EO and AI in sustainability monitoring and nature-based solutions. By delivering actionable insights in near-real-time, it empowers industries and regulators to advance environmental stewardship while ensuring resilient, sustainable supply chains for the future.
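The systematic comparison of all image pairs in a time series can be sketched in toy form. The sketch below reduces the idea to simple per-pixel differencing on a synthetic array with an illustrative threshold; the actual service described above uses trained CNNs, not thresholding:

```python
import numpy as np
from itertools import combinations

def bitemporal_change(stack, threshold=0.2):
    """Flag pixels whose value differs by more than `threshold`
    in any image pair of the time series (illustrative only)."""
    change = np.zeros(stack.shape[1:], dtype=bool)
    for i, j in combinations(range(stack.shape[0]), 2):
        change |= np.abs(stack[j] - stack[i]) > threshold
    return change

# Toy 3-date, 4x4 "reflectance" stack: one pixel drops sharply,
# mimicking an abrupt clearing between dates 1 and 2.
stack = np.full((3, 4, 4), 0.5)
stack[2, 1, 1] = 0.1
mask = bitemporal_change(stack)
print(mask.sum())
```

Comparing every pair, rather than only consecutive dates, is what lets such an approach catch changes even when individual acquisitions are cloudy or noisy.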
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Implementing Commodity Mapping and Change Detection Services in the Control System for EU Regulation 2023/1115 (EUDR)

Authors: Marco Corsi, Laura De Vendictiis, Simone Tilia, Fabio Volpe, Colonel Giancarlo Papitto, Pasquale Pistillo
Affiliations: e-GEOS S.p.A., Via Tiburtina 965, Rome, 00156, Italy, https://www.e-geos.it, ARMA DEI CARABINIERI, CUFAA office projects, https://www.carabinieri.it/chi-siamo/oggi/organizzazione/tutela-forestale-ambientale-e-agroalimentare, STARION, https://www.stariongroup.eu/
EU Regulation 2023/1115 (EUDR) establishes strict requirements for preventing commodities linked to deforestation and forest degradation from entering the EU market. It mandates traceability, monitoring, and compliance mechanisms for commodity supply chains to mitigate environmental impacts and ensure sustainable practices. Effective implementation of these measures requires advanced monitoring systems capable of integrating multi-source data for comprehensive land-use analysis. This work presents a methodology for commodity mapping and change detection designed to support EUDR compliance. Commodity mapping utilizes a Vision Transformer (ViT)-based classifier [1],[2] applied to time series of Sentinel-2 imagery. The approach leverages spectral and temporal features to classify land cover and monitor the presence of specific crops, such as coffee, within the framework of agricultural land use analysis. Forest monitoring is performed using a bi-temporal change detection approach based on Sentinel-1 Synthetic Aperture Radar (SAR) and Sentinel-2 optical data. Changes are detected with the SiROC algorithm [3] (Spatial Context Awareness for Unsupervised Change Detection in Optical Satellite Images), an unsupervised change detection method that requires only minimal bi-temporal imagery and leverages the spatial relationships between pixels to distinguish true land cover changes, such as deforestation and forest degradation, from noise or transient phenomena. Pre-processing steps include radiometric correction, cloud screening, atmospheric adjustment, and vegetation index computation to ensure consistent input data.
The methodology incorporates an optional quality check step to validate outputs, reducing uncertainties and improving reliability. The proposed system integrates EO-based techniques with optional validation workflows, providing a scalable tool for tracking land-use changes and monitoring compliance with the EUDR. The presentation will illustrate an example of a map of soybean and coffee crops generated using the Land Cover processor, specifically designed for remote monitoring of agricultural regions. The map, produced from Sentinel-2 data, highlights distinct crop areas: orange polygons represent soybean fields, while brown polygons indicate coffee plantations. An inset displays a typical Normalized Difference Vegetation Index (NDVI) cycle for soybean in Brazil, which is used to track crop growth phases, showing both the first and second harvest periods. This kind of remote sensing facilitates precision agriculture and crop management in geographically isolated areas.
1. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv preprint arXiv:2010.11929 (2020)
2. Oguiza, I.: TSiT: PyTorch implementation based on ViT (Vision Transformer). Available at: https://timeseriesai.github.io/tsai/models.tsitplus.html
3. Kondmann, L., Toker, A., Saha, S., Schölkopf, B., Leal-Taixé, L., & Zhu, X. X. (2022). Spatial Context Awareness for Unsupervised Change Detection in Optical Satellite Images. IEEE Transactions on Geoscience and Remote Sensing, 60, 1–15. https://doi.org/10.1109/TGRS.2021.3130842
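The vegetation index underlying the soybean growth-cycle inset mentioned above, NDVI, is a simple band ratio. A minimal numpy sketch with illustrative reflectance values (not real Sentinel-2 measurements):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Illustrative Sentinel-2 style reflectances (B8 = NIR, B4 = red)
print(round(float(ndvi(0.45, 0.08)), 3))  # dense canopy: high NDVI
```

Tracking this value through the season is what makes the two soybean harvest peaks in the inset visible.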
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Traceability in the Supply Chain: EO Data for Transparency

Authors: Yu Dong, Zahra Dabiri
Affiliations: EXDIGIT, University of Salzburg
As the climate crisis intensifies, Earth Observation (EO) technologies play a vital role in advancing sustainability, supporting global regulatory compliance, and achieving the United Nations Sustainable Development Goals (SDGs), particularly SDG 12 (Responsible Consumption and Production) and SDG 15 (Life on Land). The European Union Deforestation Regulation (EUDR, EU 2023/1115) exemplifies how EO can address these challenges, targeting deforestation-free supply chains for key commodities such as coffee, cocoa, palm oil, and timber. Effective from January 2026, the EUDR mandates robust due diligence systems to verify that products are legally sourced and deforestation-free after December 2020. Central to achieving this is the integration of EO data, particularly from the Copernicus program of the European Space Agency (ESA), for example Sentinel-2 optical and Sentinel-1 synthetic aperture radar (SAR) data and derived products, to provide comprehensive monitoring capabilities. This work explores how the synergy of optical and SAR EO data enhances supply chain transparency and traceability and supports the EUDR. Sentinel-1's SAR capabilities, unaffected by cloud cover, provide continuous monitoring, especially when optical data cannot be used due to atmospheric conditions, complementing the spectral richness of Sentinel-2 imagery to enhance spatial and temporal precision. However, the applicability of SAR data in complex environments is influenced by sensor characteristics, such as wavelength, and by target characteristics, such as type and geometry. By integrating time series analysis of SAR and optical data, we demonstrate the strengths of these datasets in enabling the detection of land-use changes and deforestation trends, even in challenging conditions such as tropical cloud cover or areas with spectral confusion, like shaded coffee plantations.
Machine learning and geospatial analysis further improve the accuracy of deforestation alerts and land cover classification, addressing the complexities of distinguishing between forest and commodity plantations. We demonstrate a practical case study and illustrate how EO technologies empower stakeholders—including regulators, industry operators, and auditors—to validate commodity origins, detect non-compliance or fraud, and ensure alignment with the EUDR requirements. The practical case study focuses on coffee plantation monitoring using time series SAR and optical EO data, covering the period of 2019 to 2024 and utilizing machine learning techniques, such as random forest. The results will demonstrate the strengths and challenges of SAR and optical EO data utilization, such as data accessibility, accuracy variability, and integration challenges. By identifying these constraints and exploring potential solutions, this work aims to uncover opportunities for enhancing EO’s effectiveness in monitoring deforestation and promoting sustainable supply chains. This highlights EO’s transformative potential to advance regulatory compliance, foster collaborative climate action, and support the achievement of global sustainability goals.
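A random-forest classifier of the kind used in the case study above takes per-pixel time-series features and separates commodity classes from forest. A minimal scikit-learn sketch on synthetic SAR/optical-like features (the feature design and data here are invented for illustration, not the study's actual inputs):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic per-pixel feature vectors: 24 time steps x 2 channels
# (think Sentinel-1 backscatter and Sentinel-2 NDVI), flattened.
n = 600
forest = rng.normal(0.6, 0.05, (n, 48))   # stable, high "NDVI-like" signal
coffee = rng.normal(0.4, 0.05, (n, 48))
coffee[:, ::2] += 0.1 * np.sin(np.linspace(0, 2 * np.pi, 24))  # seasonal cycle
X = np.vstack([forest, coffee])
y = np.array([0] * n + [1] * n)           # 0 = forest, 1 = coffee

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
print(f"accuracy: {clf.score(Xte, yte):.2f}")
```

Using the full time series as features, rather than a single date, is what allows the classifier to exploit the seasonal signal that distinguishes plantations from natural forest.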
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: How to support smallholders in proving EUDR compliance? A feasibility study

Authors: Florian Schlenz, Johannes Sommer, Jenny Franco, Michael Holzapfel, Rita Lunghamer, Stefan Scherer
Affiliations: Geocledian GmbH
As a reaction to global deforestation and climate change, the EU has adopted the potentially disruptive Deforestation-Free Regulation (EU 2023/1115). From January 2026, seven key commodities and their derivatives - cattle, cocoa, coffee, oil palm, rubber, soy and wood - that enter the EU market must be deforestation-free and from legal sources. Every importer needs to prove this by providing sourcing information down to the plot level. An essential and at the same time probably the most difficult requirement is to record the production areas and locations of small farmers in particular. EO data can be used to provide the transparency needed for the deforestation check for each plot of land and thus support supply chain traceability. Typically, these checks are integrated in supply chain traceability platforms. Violation of the EUDR results in the product being excluded from the market, which represents an enormous economic risk for companies. The obligations to provide evidence, and their short-term implementation, pose enormous challenges for companies, their suppliers and producers, including smallholders. Different challenges need to be solved at every point in the supply chain. However, there is a particular risk for small farmers in developing countries, who produce large quantities of these raw materials; in the case of coffee, for example, they are responsible for 70% of global production. If smallholders are unable to provide the required evidence, there is a very real risk that they will be excluded from the European market. The consequences would be lower revenues and further impoverishment of small farmers. It remains to be seen whether the EUDR’s traceability mandates truly support sustainable practices or inadvertently exclude smallholders, impacting their economic stability and market access. So, how can smallholders be supported in achieving compliance with the EUDR requirements?
In the frame of the “EUDR-Check” project (funded by BMWK Germany, grant number 16GM103702) we have developed and tested an app-based approach to support cocoa and coffee smallholders in demonstrating compliance with the EUDR. The app allows smallholders to record production areas and confirm their conformity with the EUDR criteria in the form of a certificate. Smallholders can do this with a free app and pass on the certificates digitally with a traceability solution using blockchain technology. The solution is free for smallholders, while the costs are borne by certificate users and traceability users. Buyers of cocoa/coffee can thus comply with EU regulations and maintain market access to the EU. At the same time, access to the EU market is guaranteed for local producers in compliance with EUDR standards, which promotes sustainability in production without disadvantaging smallholders. The app is linked to an EO-centered EUDR compliance check API built on top of a powerful and scalable IT system that can support large numbers of users. The solution addresses multiple issues:
1. Smallholders are enabled to capture the geolocation of their plot of land in a very simplified manner.
2. EUDR deforestation compliance is automatically checked by this EO-driven solution; in cases of negative results, additional information can be collected on site to mitigate the outcome.
3. Buyers are provided with the mandatory geolocation, including the deforestation compliance, at no additional effort.
4. The integrity of the geolocation, the compliance check, and the traded good is secured by the traceability solution and the blockchain backend.
5. Smallholders do not bear additional costs for EUDR compliance.
With this integrated solution, cocoa and coffee buyers can fulfill the due diligence obligations of the EUDR economically and efficiently.
At the same time, access to the EU market is guaranteed for local cocoa and coffee producers, as the producers themselves meet the technical requirements of the EUDR by localizing the production site and embedding it in the entire product supply chain. In this way, the solution contributes to greater sustainability in cocoa and coffee production without adversely affecting small-scale producers. In the frame of the feasibility study we are evaluating:
- the technical solutions of an EO-based EUDR deforestation compliance check;
- a test implementation of the data collection app for smallholder farmers;
- the acceptance of the app;
- the data flow through the supply chain;
- the business model;
- challenges along the way.
We will report on the first findings of the project and introduce our API-based, EO-driven EUDR compliance check methodology used in this project.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Enhancing Satellite-Based Forest Monitoring for Accurate and Cost-Efficient Compliance With the EU Deforestation Regulation Through Standardized Benchmarking, Ground-Truthing, and Integration of Advanced Technologies.

Authors: Anton Eitzinger, Koimé Kouacou
Affiliations: Veo Partners
The adoption of the EU Deforestation Regulation (EUDR), which requires certain imported products to be deforestation-free, has created a strong demand for reliable monitoring solutions. In response, the satellite-based forest monitoring market has experienced rapid growth, driven by the need for accurate and efficient methods to ensure compliance with the regulation. This trend has been particularly significant for businesses in industries such as agriculture and forestry, where advanced monitoring solutions are essential to meet the EUDR's stringent requirements. Satellite-based forest monitoring provides distinct technical advantages tailored to deforestation-free requirements. For instance, it enables the collection of high-resolution data across vast forested areas, allowing businesses to monitor entire provenances without relying on limited ground-based assessments. It also reduces the need for frequent physical site inspections, significantly cutting costs and logistical complexity. Moreover, satellite systems offer near real-time tracking of forest conditions, enabling companies to quickly identify and address deforestation risks or breaches. Compared to manual verification methods, satellite monitoring is not only more scalable but also provides a cost-efficient way to ensure transparency and compliance with regulatory demands. While satellite-based forest monitoring is a powerful tool for ensuring compliance with the EUDR, it has several notable shortcomings that can impact its effectiveness and lead to the misclassification of deforestation or forest degradation in certain areas. A significant limitation is the inability to differentiate tree species or forest types, such as distinguishing between primary forests and planted forests, unless there are obvious physical differences detectable in the imagery.
Furthermore, while satellite data provides a broad overview, it often lacks the granularity needed for reliable assessments, making ground-truthing through field observations a necessary step to validate the data. Technical constraints also arise from canopy penetration limitations, as optical sensors can only capture light reflected from the top of the forest canopy, leaving understory conditions largely invisible. This limitation can obscure important details about forest health and biodiversity. A false positive—incorrectly identifying compliant land as deforested—could result in producers, particularly vulnerable smallholder farmers, being unjustly excluded from the European premium market. On the other hand, a false negative—failing to detect actual deforestation—could expose operators or traders to regulatory breaches, potentially resulting in fines of up to 4% of their annual turnover, a substantial penalty for non-compliance. To address these issues, we propose a benchmarking framework to evaluate and improve the reliability of satellite-based forest monitoring systems in addressing these limitations. The framework will establish standardized metrics for assessing system performance, including precision, recall, and cost-efficiency, to provide commercial users with a transparent basis for selecting suitable monitoring solutions. A key component of this approach involves integrating ground-truthing efforts with the participation of smallholder farmers, leveraging their local knowledge to validate satellite data and improve detection accuracy. By fostering collaboration between stakeholders and embedding smallholder contributions into the verification process, the framework ensures inclusivity and fairness while enhancing data reliability. 
By aligning these advancements with the specific requirements of the EUDR, this approach supports businesses in navigating the regulatory landscape, minimizes the risk of misclassification, and mitigates associated economic impacts.
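The standardized metrics the benchmarking framework proposes (precision, recall, and by extension F1) have compact definitions. A minimal sketch with toy labels, purely to make the false-positive/false-negative trade-off concrete:

```python
def benchmark(pred, truth):
    """Precision, recall and F1 for a binary deforestation map, given
    aligned 0/1 label lists (illustrative metric definitions only)."""
    tp = sum(p and t for p, t in zip(pred, truth))
    fp = sum(p and not t for p, t in zip(pred, truth))
    fn = sum(t and not p for p, t in zip(pred, truth))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# One false positive (a compliant plot wrongly flagged) and one false
# negative (missed deforestation): the two error costs the framework weighs.
p, r, f1 = benchmark([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
```

In this setting, low precision translates into unjust market exclusion of producers, while low recall translates into regulatory exposure for operators, which is why the framework reports both rather than a single score.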
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Fine Scale Cocoa Mapping With Deep Learning Methods

Authors: Kasimir Orlowski, Filip Sabo, Dr. Astrid Verhegghen, Dr. Michele Meroni, Dr. Felix Rembold
Affiliations: FINCONS S.P.A., European Commission, Joint Research Centre, ARHS Developments Italia S.R.L., Seidor Consulting
Mapping and characterizing cocoa planted areas with Earth Observation data, and accurately disentangling them from other land cover, is paramount not only for effectively monitoring and reporting on sustainability goals related to cocoa production but also for the EU Deforestation Regulation. However, accurately representing the complexity of the cocoa planted area is a challenging task. Cocoa is grown mostly on smallholder plantations with various agricultural practices, ranging from mono-cultural plantations to agroforestry systems in which cocoa is shaded by other trees of varying density and spatial distribution. Here we combine a curated dataset of cocoa plot locations and very high resolution (VHR; 0.5 m) multispectral satellite imagery covering ∼33% of the area of Ivory Coast in a deep learning framework to map cocoa. The selected deep learning model is based on a U-Net architecture with an EfficientNet-B5 encoder. To train the model, batches of 512x512-pixel tiles were used and two sample sizes were tested: i) 221,158 tiles and ii) 2,069,855 tiles (the full dataset). Both samples were split into 70% training and 30% validation. An independent and randomly selected VHR image (66,244 ha) served as a test set. Despite the heterogeneity of cocoa plantations, our model was able to generalize well and to differentiate accurately between cocoa and non-cocoa areas at this unprecedented spatial resolution. Results show that the improvement related to the use of the larger sample was limited (F1: +2.3%) and not proportionate to the increase in training time (22 h to 153 h). The best performance metrics on the test set with the first (smaller) sample size gave an F1 score of 0.92, with precision and recall of 0.93 and 0.91, respectively. Building on the results of this study, current work focuses on characterizing the shading level of cocoa plantations by applying the Meta canopy height prediction model (Tolan et al. 2024) to the same set of VHR imagery in order to separate larger non-cocoa trees from cocoa trees.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Continental-Scale Tree Crop Mapping in South America

Authors: Yuchang Jiang, Anton Raichuk, Stefan Istrate, Dan Morris, Katelyn Tarrio, Nicholas Clinton, Dr. Vivien Sainte Fare Garnot, Prof Konrad Schindler, Professor Jan Dirk Wegner, Maxim Neumann
Affiliations: Google DeepMind, University of Zurich, Google Research, Google Geo, ETH Zurich
Tree crop expansion in South America, a global production hotspot, contributes significantly to economic development but also drives deforestation and habitat loss within crucial ecosystems like the Amazonian rainforest. Accurate and high-resolution tree crop maps are crucial for sustainable land management, supply chain transparency, and the effective enforcement of regulations like the European Union Deforestation Regulation (EUDR) [1]. This study presents a novel deep learning approach for continent-wide, high-resolution (10-meter) tree crop mapping in South America. Leveraging a transformer-based architecture, our model effectively integrates multi-modal, multi-temporal Sentinel-1 and Sentinel-2 satellite data. To train this model, we have constructed a large-scale dataset of 100,000 samples evenly distributed across the continent, encompassing diverse forest, tree crops (including coffee and oil palm), and non-woodland classes. We use this extensive and diverse dataset to train our segmentation model and generate a continental-scale, 10-meter resolution map of tree crops for 2020. Our resulting tree crop map reaches high accuracy on two independent validation datasets for coffee in Brazil and oil palm in Peru, outperforming existing baseline methods. Comparative analysis reveals that our map consistently distinguishes tree crop areas within the generalized forest class in Brazil, Peru, and Colombia. The research once more highlights the power of deep learning for accurate, large-scale vegetation monitoring. Our high-resolution map provides valuable information for diverse stakeholders, supporting decision-making in service of conservation efforts, sustainable development planning, and compliance with regulations aimed at reducing deforestation through agricultural expansion. 
By enabling precise identification of areas converted from natural forest to tree crop plantations, our work directly contributes to the implementation of the EUDR and promotes responsible land management practices in South America.
Keywords: Tree Crop Mapping, Remote Sensing, South America, Deforestation, Sustainability, EUDR, Deep Learning, Sentinel-1, Sentinel-2, Transformer
References: [1]: European Union, Regulation of the European Parliament and of the Council on the making available on the Union market and the export from the Union of certain commodities and products associated with deforestation and forest degradation and repealing Regulation (EU) No 995/2010. https://data.consilium.europa.eu/doc/document/PE-82-2022-INIT/en
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Global Mapping of EUDR Commodities for Better Forest Baselines and Identifying Deforestation Drivers

Authors: Michel Wolters, Nikoletta Moraiti, Dr Luca Foresta, Niklas Pfeffer, Dr Niels Anders, Niels Wielaard, Rens Masselink
Affiliations: Satelligence
Mapping soft commodities such as oil palm, coffee, soy, and cocoa is critical for implementing the European Union Deforestation Regulation (EUDR), which aims to prevent deforestation linked to the production of goods imported into the EU. Accurate mapping ensures transparency in supply chains, enabling the identification of drivers of deforestation for commodity production. Furthermore, it is important to distinguish old-growth commodity plantations from natural forest in maps that serve as a forest baseline, so that the baseline accurately tracks deforestation rather than other land cover changes. At Satelligence, we create commodity maps for given target years using a combination of in-house modelling and openly available third-party layers. This way, we keep our forest baseline up to date while attributing deforestation events to specific commodities. We will present our methodology and results for a number of relevant commodities, such as oil palm and soy. As input data to our models, we use Sentinel-1, Sentinel-2, and Landsat imagery (along with derived metrics and indices) processed with an engine that uses FORCE for optical data preprocessing and ISCE for Sentinel-1 radar preprocessing to generate analysis-ready composites. Our approach employs a tile-based system for collecting training and testing samples, building classification models, evaluating results, and seamlessly merging tiles into a unified global map. Creating the training data involves anonymized plot data provided by clients and partners, which feeds semi-supervised learning methods serving as a preliminary qualitative assessment, enabling the automated filtering of irrelevant land cover pixels and isolating those that correspond to the target land cover. By reducing the need for fully manual digitization and labelling, this approach ensures efficiency while maintaining accuracy.
The filtered and labeled samples, combined with the feature data, are then used to construct a sample database. This database serves as the foundation for the machine learning models, facilitating precise and scalable land cover mapping. Furthermore, we implement a decision-tree-based model that integrates commodity classifications across multiple years, minimizing the need for extensive postprocessing while enhancing accuracy. Finally, the accuracy of the maps is independently assessed and discussed, and we additionally show comparisons against openly available datasets.
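The multi-year integration step can be illustrated with a deliberately simplified, hypothetical rule (not Satelligence's actual model): a per-pixel, per-year label is trusted only if it is consistent with an adjacent year, so that one-year classification flickers are flagged rather than propagated.

```python
def consolidate(year_to_class):
    """Toy temporal-consistency filter: keep a per-year label only if it also
    appears in an adjacent year; flag isolated one-year labels for review."""
    years = sorted(year_to_class)
    out = {}
    for i, year in enumerate(years):
        label = year_to_class[year]
        neighbours = [year_to_class[years[j]]
                      for j in (i - 1, i + 1) if 0 <= j < len(years)]
        out[year] = label if label in neighbours else "uncertain"
    return out

# A single 'forest' year inside an oil palm sequence is flagged as uncertain
print(consolidate({2018: "oil_palm", 2019: "oil_palm",
                   2020: "forest", 2021: "oil_palm", 2022: "oil_palm"}))
```

A production model would of course weigh class probabilities and layer quality rather than raw labels; the point is only how cross-year integration reduces postprocessing.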
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Approaching the EUDR by a combination of crowd sourcing and remote sensing

Authors: Manuela Hirschmugl, Nik Cepirlo, Koimé Kouacou, Caroline Kunesch
Affiliations: Joanneum Research, University of Graz, Beetle4Tech, Boku University
According to the United Nations Food and Agriculture Organization, the world has lost 420 million hectares of forest through deforestation over the past 30 years, significantly affecting the forests’ multiple and highly essential functions. Agricultural expansion is estimated to cause almost 90% of global deforestation. Seven forest risk commodities (FRCs) account for almost 84% of EU-driven deforestation: palm oil, soy, timber, cocoa, coffee, beef and natural rubber. EU consumption of these FRCs is responsible for about 10% of global deforestation. The European Commission has acknowledged these facts and the related responsibility, and has therefore proposed a regulation to put an end to EU-driven deforestation. The current proposal for a regulation on deforestation-free products and commodities (EUDR) comprises a legal framework based on mandatory due diligence requirements for companies placing forest and ecosystem-risk commodities and derived products on the EU market. One of the main tasks in preparing for a future EUDR implementation is to use satellite positioning systems and the Copernicus Earth Observation (EO) programme for this purpose. A multitude of projects and studies have shown the possibilities and success stories of deforestation mapping by remote sensing (Hamunyela, 2017; Hansen et al., 2016; Kennedy et al., 2010), yet difficulties remain in areas with agro-forestry systems (Mananze et al., 2020) and for detecting forest degradation. These difficulties can be attributed to the similarity of the spectral response of the classes to be separated and to the class-inherent spectral heterogeneity. Several approaches use the full potential of the time series to overcome the issues of spectral similarity at a single point in time (Verbesselt et al., 2010; Zhu et al., 2012), while at the same time providing data in a timelier, ‘near-real-time’ manner (Puhm et al., 2020; Zhu et al., 2016).
Nevertheless, uncertainties remain relatively high; thus, in-situ information is needed in many cases to improve the classifications and/or to verify the derived data on the ground. Crowdsourcing and citizen science offer innovative opportunities to gather huge amounts of data. However, not all crowdsourced data is also in-situ data. A prominent example of information provided by the crowd through image interpretation is the WHISP (What is in that plot?) initiative. In-situ crowdsourcing can be supported by dedicated apps guiding participants and helping to generate useful data in a fast and efficient manner. This is especially important as more and more deep learning approaches, which are known to be extremely data-hungry, are being employed. Our work focuses on combining in-situ crowdsourced data collection with Sentinel-2 remote sensing classification. An additional, equally important component is the analysis of what drives people in the Global South to contribute to such crowdsourcing initiatives aiming to tackle deforestation. We tested our approaches in Côte d'Ivoire, West Africa, with different users along the supply chain: from smallholder farmers, through processors and exporters, to organizations in the broader ecosystem such as the Conseil Café Cacao and certification bodies. Regarding the remote sensing aspects, first results show that time-series-based analysis led to higher accuracies for change mapping compared to bi-temporal change detection. Accuracy was assessed with stratified sampling according to Olofsson et al. (2014) by a person not involved in the mapping. The overall accuracy reached 68% vs. 72% for completely blind evaluation and 69% vs. 82% for plausibility evaluation, for the bi-temporal and time series classifications respectively.
The plausibility evaluation was a simple boundary evaluation: if a verification point was closer than 20 m to the boundary of a change, it was still considered correctly detected. The second evaluation considered the usability of crowdsourcing apps. Two apps were tested against a comprehensive set of parameters including geo-positioning, map integration, (multiple) photo upload, automation options, and more. Separating these parameters into necessary and nice-to-have, we found that ODK Collect was preferable to Epicollect5 due to three main advantages: the possibility to show example photos of different disturbance types for easier identification; the depiction of the user’s own position and, optionally, a target area for improved navigation; and the possibility to provide a polygon in addition to the point information in the feedback, which helps tremendously for remote sensing applications. ODK Collect also has some disadvantages: the backend is more difficult to set up, server hosting entails costs (unless you host your own server), and offline data collection is more difficult to implement. Thirdly, we investigated the accuracy of the positioning in different land cover types (dense and open forest, meadows, settlements), comparing it with professional GNSS antenna measurements. According to our (limited) sample, the type and age of the mobile device seem to matter more for accuracy than the app used. Overall, the deviations found were below 5 m (Δx = 2.85 m, Δy = 3.36 m, Δxy = 4.40 m), which seems sufficient for most crowdsourcing applications, provided the crowd is trained to move at least 5 m, ideally more than 10 m, from any boundary before recording a point. Finally, regarding the motivation of crowdsourcing participants, interviews with a variety of stakeholders in Côte d'Ivoire already hint at the following main aspects.
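The positioning figures above can be sketched numerically. The deviations Δx and Δy are taken from the abstract; combining them with the Pythagorean rule gives roughly the reported Δxy (the small difference from 4.40 m presumably reflects how the per-point deviations were averaged). The buffer function below is our illustrative reading of the "move at least 5 m from any boundary" recommendation, not part of the authors' protocol.

```python
import math

# Mean deviations reported from the (limited) smartphone-vs-GNSS comparison
dx, dy = 2.85, 3.36          # metres
dxy = math.hypot(dx, dy)     # combined horizontal deviation, ~4.4 m

def far_enough_from_boundary(distance_m, positioning_error_m=dxy,
                             minimum_m=5.0):
    """Record a point only when the participant is farther from any plot or
    change boundary than both the positioning error and the 5 m buffer."""
    return distance_m >= max(minimum_m, positioning_error_m)

print(round(dxy, 2), far_enough_from_boundary(12.0), far_enough_from_boundary(3.0))  # -> 4.41 True False
```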
Most importantly, the size of rewards matters both for participants' motivation to contribute and for the quality of the crowdsourcing results. Previous work by NGOs has shown that project-based remuneration, or pay on a weekly basis, led to better results than remuneration per record. Regarding motivation, internal aspects such as interest in mapping activities or contributing to halting climate change matter. Another aspect impacting the likelihood of contributing high-quality crowdsourcing data is education and training, since literacy and certain technological knowledge are prerequisites. Beyond motivational aspects, directly integrating smallholder farmers turned out to be challenging due to technology-access issues such as low smartphone density or poor network connectivity in remote areas. Further research, in the form of an experimental study, will investigate the relevance of intrinsic motivation and the effect of external rewards on crowdsourcing participants. This will reveal how to effectively design such campaigns, enabling them to tap into the collective intelligence of various crowds and gather in-situ data, which in turn contributes to mapping deforestation efficiently and accurately. Combining these findings from crowdsourcing with remote sensing insights, an innovative approach to deforestation mapping is developed and proposed to aid both in preparing for the implementation of the EUDR and, more importantly, in limiting deforestation and its adverse effects on the global climate and biodiversity. References: Hamunyela, E., 2017. Space-time monitoring of tropical forest changes using observations from multiple satellites (PhD Thesis). Wageningen University & Research, Laboratory of Geo-Information Science and Remote Sensing. https://doi.org/10.18174/420048 Hansen, M.C., Krylov, A., Tyukavina, A., Potapov, P., Turubanova, S., Zutta, B., Ifo, S., Margono, B., Stolle, F. & Moore, R., 2016.
Humid tropical forest disturbance alerts using Landsat data. Environmental Research Letters 11, 034008. Kennedy, R.E., Yang, Z., Cohen, W.B., 2010. Detecting trends in forest disturbance and recovery using yearly Landsat time series: 1. LandTrendr - Temporal segmentation algorithms. Remote Sensing of Environment 114, 2897–2910. https://doi.org/10.1016/j.rse.2010.07.008 Mananze, S., Pôças, I., Cunha, M., 2020. Mapping and Assessing the Dynamics of Shifting Agricultural Landscapes Using Google Earth Engine Cloud Computing, a Case Study in Mozambique. Remote Sensing 12, 1279. https://doi.org/10.3390/rs12081279 Olofsson, P., Foody, G.M., Herold, M., Stehman, S.V., Woodcock, C.E., Wulder, M.A., 2014. Good practices for estimating area and assessing accuracy of land change. Remote Sensing of Environment 148, 42–57. https://doi.org/10.1016/j.rse.2014.02.015 Puhm, M., Deutscher, J., Hirschmugl, M., Wimmer, A., Schmitt, U., Schardt, M., 2020. A Near Real-Time Method for Forest Change Detection Based on a Structural Time Series Model and the Kalman Filter. Remote Sensing 12, 3135. https://doi.org/10.3390/rs12193135 Verbesselt, J., Hyndman, R., Zeileis, A., Culvenor, D., 2010. Phenological change detection while accounting for abrupt and gradual trends in satellite image time series. Remote Sensing of Environment 114, 2970–2980. https://doi.org/10.1016/j.rse.2010.08.003 Zhu, X., Helmer, E.H., Gao, F., Liu, D., Chen, J., Lefsky, M.A., 2016. A flexible spatiotemporal method for fusing satellite images with different resolutions. Remote Sensing of Environment 172, 165–177. Zhu, Z., Woodcock, C.E., Olofsson, P., 2012. Continuous monitoring of forest disturbance using all available Landsat imagery. Remote Sensing of Environment 122, 75–91. https://doi.org/10.1016/j.rse.2011.10.030
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Employing high-resolution data to enhance the accuracy of land use and cover classification

Authors: Dr Flávia De Souza Mendes, Dr Vivian Ribeiro, MSc Tara O'Shea
Affiliations: Planet Labs GmbH, Meridia Land, Planet Labs
Remote sensing has revolutionized the monitoring of commodity supply chains, providing essential insights into how production activities impact the environment. Unlike traditional monitoring methods that rely on self-reported data, which often lacks transparency and accuracy, remote sensing offers an objective, data-driven means of verifying sourcing practices, enabling the detection of unsustainable activities like deforestation caused by agricultural expansion. By accurately mapping areas within specific supply chains, satellite imagery empowers stakeholders, including consumers, investors, and regulatory bodies, to hold companies accountable for environmental commitments, fostering greater responsibility and transparency. Our work aims to demonstrate how high-resolution (HR) imagery can enhance the accuracy of public mapping tools. While current public maps provide useful data, HR satellite imagery can refine this information, offering a more precise look at land-use changes and environmental impacts within supply chains. With advanced spatial resolution and sophisticated data analysis, HR imagery provides a granular view, identifying individual farms or plantations and assessing their environmental performance. This enhanced granularity is key for detecting deforestation patterns in small farm plots, especially important for smallholder production, minimizing false positives. Integrating this detailed imagery with existing public data can improve the accuracy of maps used to monitor sourcing patterns, enabling more confident decision-making by governments, smallholders, and companies. Moreover, the broad scope of satellite technology allows efficient monitoring over large, remote, or inaccessible areas, offering a comprehensive view of environmental risks such as deforestation hotspots. Publicly accessible remote sensing data extends its impact by empowering diverse stakeholders. 
Governments can enforce environmental regulations more effectively, while civil society organizations and researchers can independently monitor deforestation, thereby strengthening forest governance. Initiatives such as Brazil’s PRODES, Global Forest Watch, and MapBiomas already employ advanced technologies to produce high-resolution deforestation data across the Amazon, contributing to transparency. In Brazil, the Rural Environmental Registry (CAR) links farm boundary data with remote sensing, enabling property-level deforestation monitoring and providing companies with timely alerts on deforestation within their supply chains. Preliminary results have already demonstrated the improved accuracy achieved by incorporating high-resolution data into land cover classification in Patrocínio, Minas Gerais, Brazil, a major coffee-producing municipality. Using PlanetScope imagery and height data, we identified that areas classified as forest by public maps are, in fact, established coffee plantations. In conclusion, leveraging a diverse set of data sources including high-resolution satellite imagery, public maps, and near-real-time alerts is the most effective way to monitor forests, non-forest areas, and commodities. Combining these data sources increases mapping accuracy and confidence, benefiting all stakeholders involved in sustainable land management, from government agencies to smallholders and private companies. This approach promotes a more accurate and accountable framework for tracking land use and forest conservation efforts.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: An Approach for an EUDR Forest Baseline Based on a Combination of Open Data, Commodity Maps and Forest Change Detection

Authors: Michel Wolters, Dr Niels Anders, Dr Luca Foresta, Vincent Schut, Niels Wielaard, Rens Masselink
Affiliations: Satelligence
Implementing the European Union Deforestation Regulation (EUDR) has posed significant challenges, primarily due to the complexities of monitoring and verifying supply chains for deforestation-linked commodities across diverse global regions. To determine whether commodities are sourced from deforested areas, it is important to develop an accurate and consistent land cover map to serve as the baseline for a deforestation monitoring service. While developing a globally consistent, multi-year, high-resolution land cover map from scratch is a challenging, time-consuming and expensive undertaking, we present a methodology for creating a yearly historic forest baseline that can be updated annually and used for purposes such as the EUDR (cutoff date 2020-12-31), NDPE (No Deforestation, no expansion on Peat, no Exploitation; cutoff date 2015-12-31), CFI (Cocoa & Forests Initiative; 2017-12-31) and other frameworks and regulations. The methodology leverages dozens of open data sources such as the JRC Tropical Moist Forests dataset, Greenpeace Intact Forest Landscapes, the University of Maryland primary forest map, national land cover maps, and the Descals et al. oil palm map. These open data layers are combined with commodity maps, mainly those relevant for the EUDR, such as oil palm, cocoa, coffee, soy, etc. For each layer, thorough qualitative and quantitative QA is performed, land cover classes are harmonised to align their definitions with other datasets, and, where applicable, consistency is checked in overlapping areas and through time. Since not all input data are available for all years of interest, or are only available for individual years, we perform backward and forward propagation of data layers through time. This is done using land cover change detection data, as well as a forest and vegetation change detection algorithm, in conjunction with the aforementioned data layers.
Then, based on quality and priority, a decision-tree-type model is applied to determine the final land cover class of each pixel per year, distinguishing natural land cover classes from commodity land cover classes and taking into account overlapping datasets and their yearly availability. This forest baseline has been independently assessed for many countries, with average accuracy scores of 85-95%. We will show a comparison with other forest baseline maps commonly used for, e.g., the EUDR.
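At its simplest, a priority-based combination of overlapping layers reduces to a per-pixel lookup in a ranked list. The sketch below is an illustration of that idea only; the layer names and their ordering are hypothetical, not Satelligence's actual priority scheme.

```python
def fuse_pixel(layer_values, priority):
    """Return the class from the highest-priority layer that has data for
    this pixel; layers missing at this pixel are simply skipped."""
    for layer in priority:
        value = layer_values.get(layer)
        if value is not None:
            return value
    return "unclassified"

# Hypothetical priority order: commodity maps override generic forest layers,
# so an old-growth plantation is not mistaken for natural forest
PRIORITY = ["oil_palm_map", "cocoa_map", "jrc_tmf", "national_landcover"]

print(fuse_pixel({"jrc_tmf": "moist_forest", "cocoa_map": "cocoa"}, PRIORITY))  # -> cocoa
```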
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A.01.08 - POSTER - Planetary Boundary Layer from Space

The planetary boundary layer (PBL) plays an essential role in weather and climate, which are critical to human activities. While much information about the temperature and water vapor structure of the atmosphere above the PBL is available from space observations, EO satellites have been less successful in accurately observing PBL temperature and water vapor profiles and in constraining PBL modelling and data assimilation. Improved PBL models and parameterizations would lead to significantly better weather and climate prediction, with large societal benefits.

In the latest US National Academies’ Earth Science Decadal Survey, the PBL was recommended as an incubation targeted observable. In 2021, the NASA PBL Incubation Study Team published a report highlighting the need for a global PBL observing system with a PBL space mission at its core. To solve several of the critical weather and climate PBL science challenges, there is an urgent need for high-resolution and more accurate global observations of PBL water vapor and temperature profiles, and of PBL height. These observations are not yet available from space but are within our grasp in the next decade, achievable by investing in optimal combinations of different approaches and technologies. This session welcomes presentations focused on the PBL from the observational, modeling and data assimilation perspectives. In particular, it welcomes presentations on future EO PBL remote sensing missions and concepts, diverse observational approaches (e.g., active sensing, constellations of passive sensors, hyperspectral measurements, high-altitude pseudo-satellites) and potential combinations of techniques to optimally depict the 3D structure of PBL temperature and water vapor.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Planetary Boundary Layer Heights From GNSS Radio Occultations

Authors: Stig Syndergaard
Affiliations: Danish Meteorological Institute
Global Navigation Satellite System (GNSS) radio occultation (RO) measurements provide high-resolution vertical information about the atmosphere from near-surface altitudes up through the troposphere and stratosphere. Using the Radio Occultation Processing Package (ROPP), a product of the EUMETSAT Radio Occultation Meteorology Satellite Application Facility (ROM SAF), it is possible to derive a number of different estimates of the planetary boundary layer height (PBLH) from these measurements. Earlier studies have shown reasonable agreement between the PBLH estimated from GNSS radio occultation bending angle or refractivity profiles using the gradient method (identifying the largest negative gradient in the profiles) and the PBLH estimated from other measurements or models. Among the alternatives in ROPP, one can also estimate the PBLH from the so-called dry temperature. The dry temperature, which is derived from the refractivity, diverges from the physical temperature in the lower troposphere when the water vapour pressure contributes significantly to the refractivity. Profiles of dry temperature in the lower troposphere are typically characterised by a sharp inversion, below which water vapour is abundant. At the same time, dry temperature also includes a possible inversion if such exists in the physical temperature. With a slight modification of the current ROPP algorithm, the PBLH can thus be estimated from the dry temperature without calculating the vertical gradients first. This leads to a more robust estimate. In this study, global estimates of the PBLH from derived bending angle, refractivity, and dry temperature profiles are compared to each other and to model estimates, including the PBLH based on the bulk Richardson number, available from the ECMWF reanalysis version 5 (ERA5).
It is shown that there is very good agreement between the average estimates of the PBLH from derived dry temperature and those based on ERA5 forward-modelled dry temperature, both showing a generally deep PBL in the tropics and a shallow PBL over Greenland and Antarctica.
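The gradient method described above reduces to locating the most negative vertical gradient of the profile. A minimal sketch with synthetic numbers (this is an illustration of the idea, not ROPP code):

```python
def pblh_gradient(alt_km, refractivity):
    """Gradient method: return the altitude (layer midpoint) where the
    finite-difference vertical gradient is most negative."""
    best = None
    for i in range(1, len(alt_km)):
        grad = (refractivity[i] - refractivity[i - 1]) / (alt_km[i] - alt_km[i - 1])
        mid = 0.5 * (alt_km[i] + alt_km[i - 1])
        if best is None or grad < best[0]:
            best = (grad, mid)
    return best[1]

# Synthetic refractivity profile (N-units) with a sharp drop across ~1.75 km,
# mimicking the moist-to-dry transition at the PBL top
alt = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
n_units = [300, 290, 280, 270, 230, 225, 220]
print(pblh_gradient(alt, n_units))  # -> 1.75
```

The dry-temperature alternative in the abstract avoids this differentiation step entirely, which is why it is more robust to noise in the profile.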
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Low Tropical Marine Clouds and Their Interactions With Boundary Layer Dynamics Observed From ALADIN/Aeolus and SCAT/HY-2

Authors: Zacharie Titus, Marine Bonazzola, Hélène Chepfer, Artem Feofilov, Marie-Laure Roussel, Alexis Mouche
Affiliations: Sorbonne Université / LMD, CNRS / LMD, University Brest / CNRS
Previous studies have shown interactions between low clouds over the oceans and the atmospheric circulation in which they are embedded. However, few observations corroborate them at the global scale. The intensity of the wind shear can trigger turbulence and entrainment, lowering cloud tops and reducing the horizontal extent of clouds by introducing dry air from the lower free troposphere into the atmospheric boundary layer (Schulz and Mellado, 2018). It can also tilt updrafts (Helfer et al., 2020) or even organize meso-scale convective systems (Abramian et al., 2022). The wind at 10 m above the ocean surface drives evaporation and introduces humidity into the atmospheric boundary layer, resulting in its deepening (Mieslinger et al., 2019; Nuijens and Stevens, 2012). Using co-located observations of clouds and projected wind profiles observed by ALADIN/Aeolus, as well as 10 m above-surface winds from SCAT/HY-2, we estimate how the circulation in the atmospheric boundary layer, at typically 10-100 km scale, modifies the horizontal and vertical extent of low tropical clouds over the oceans. We show that the most intense cloud top wind shears are associated with lower cloud tops and horizontally smaller clouds. Meanwhile, situations with intense evaporation rates feed more moisture into the atmospheric boundary layer, leading to higher cloud tops, particularly in stratocumulus-dominated regions. References: Abramian, S., Muller, C. and Risi, C.: Shear-Convection Interactions and Orientation of Tropical Squall Lines, Geophysical Research Letters, 49, https://doi.org/10.1029/2021GL095184, 2022. Helfer, K. C., Nuijens, L., de Roode, S. R., and Siebesma, A. P.: How wind shear affects trade-wind cumulus convection, Journal of Advances in Modeling Earth Systems, 12, e2020MS002183, https://doi.org/10.1029/2020MS002183, 2020. Mieslinger, T., Horváth, Á., Buehler, S. A., and Sakradzija, M.: The dependence of shallow cumulus macrophysical properties on large-scale meteorology as observed in ASTER imagery, Journal of Geophysical Research: Atmospheres, 124, 11477–11505, https://doi.org/10.1029/2019JD030768, 2019. Nuijens, L. and Stevens, B.: The Influence of Wind Speed on Shallow Marine Cumulus Convection, Journal of the Atmospheric Sciences, 69, 168–184, https://doi.org/10.1175/JAS-D-11-02.1, 2012. Schulz, B., and Mellado, J. P.: Wind Shear Effects on Radiatively and Evaporatively Driven Stratocumulus Tops, Journal of the Atmospheric Sciences, 75, 3245–3263, https://doi.org/10.1175/JAS-D-18-0027.1, 2018.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Temperature and humidity profile retrievals from synergistic satellite (MTG-IRS) and ground-based (Microwave Radiometer, SYNOP) observations

Authors: Maria Toporov, Prof. Dr. Ulrich Löhnert
Affiliations: University of Cologne
Spatially and temporally resolved fields of temperature and humidity within the planetary boundary layer (PBL) are crucial variables for short-term forecasting with convection-resolving numerical weather prediction (NWP) models. Despite their potential positive impact on NWP analysis and forecasts, both variables are still not adequately (vertically, horizontally, and temporally) measured by current observing systems. The hyperspectral infrared sounder (IRS) will operate from geostationary orbit onboard Meteosat Third Generation (MTG) and provide an unprecedented temporally and spatially resolved view into the atmosphere. However, even hyperspectral infrared satellite observations leave gaps in the observation of the PBL structure, mainly due to the limited vertical resolution of the satellite as well as the strong influence of surface properties or clouds (Teixeira et al., 2021). Moreover, atmospheric profiles retrieved from hyperspectral observations show increasing uncertainty in the lowest few kilometers of the atmosphere (Wagner et al., 2024). To fill the existing observational gap in the PBL, ground-based remote sensors for measuring temperature, humidity, and wind profiles have been developed that are nowadays suitable for network operation. In particular, passive microwave radiometers (MWRs) have been accurately characterized concerning their 24/7 reliability, accuracy, and information content. A network of ground-based MWRs has the potential to provide real-time, all-sky profile observations. On the European level, the first instrument networks are in the process of being established, e.g. within the European Research Infrastructure consortium ACTRIS.
With our study, carried out within the Hans-Ertel Center for Weather Research of DWD (HErZ), we attempt to answer the question of to what extent the synergy of ground-based MWR and standard 2 m temperature/humidity measurements (SYNOP) with hyperspectral infrared satellite observations (IRS) can improve temperature and humidity profiling over the ICON-D2 domain. We develop retrievals of temperature and humidity profiles using reanalysis as the truth and applying a neural network (NN) approach that allows optimal blending of IRS radiances with surface-based remote sensing observations and standard 2 m meteorology over the ICON-D2 domain. We simulate satellite observations using the RTTOV model and use MWRpy for the ground-based MWRs. As a first step, the retrievals are developed for two stations: the Jülich Observatory for Cloud Evolution (JOYCE) and the DWD Observatory Lindenberg (RAO). Other suitable ACTRIS sites will also be considered. After the launch of MTG-S, we plan to apply the developed retrievals to real MWR, SYNOP, and IRS observations and assess the impact of assimilating the obtained atmospheric profiles on short-term forecasts of crucial variables such as low-level winds, cloudiness, atmospheric stability, and severe weather. In this contribution, we present the first results of the study, including the simulation of satellite and ground-based observations from reanalysis, the neural network architecture, and the performance of the MWR, IRS, and synergistic MWR+IRS and SYNOP+IRS retrievals applied to simulated observations.
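The motivation for such synergy can be illustrated by the textbook inverse-variance argument: combining two independent estimates of the same quantity always yields a lower variance than either alone. This is not the authors' NN retrieval, just a sketch of why blending IRS and ground-based information helps, with made-up example numbers.

```python
def blend(x_a, var_a, x_b, var_b):
    """Inverse-variance weighting of two independent estimates of the same
    quantity (e.g. temperature at one level from IRS and from a ground-based
    MWR). The blended variance is always below the smaller input variance."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    x = (w_a * x_a + w_b * x_b) / (w_a + w_b)
    return x, 1.0 / (w_a + w_b)

# Hypothetical: satellite estimate with variance 4, ground estimate with variance 1
x, var = blend(10.0, 4.0, 12.0, 1.0)
print(x, var)  # -> 11.6 0.8
```

An NN retrieval learns a far more flexible, nonlinear version of this weighting from the training data, but the uncertainty-reduction principle is the same.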
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: PBL Height Retrieval and Thermodynamic Characterization and Its Variability from NAST-I During the WH2yMSIE Field Campaign

Authors: Daniel Zhou, Hyun-Sung Jang, Allen Larar, Xu Liu, Anna Noe, Antonia Gambacorta, Rachael Kroodsma
Affiliations: NASA Langley Research Center, Analytical Mechanics Associates, NASA Goddard Space Flight Center
The National Airborne Sounder Testbed-Interferometer (NAST-I) suborbital system (<2.6 km IFOV; 0.25 cm⁻¹ within 645–2700 cm⁻¹) serves as a spaceborne instrument simulator and pathfinder for future satellite capabilities and airborne science experiments. NAST-I measurements are made to advance understanding of science critical for weather, climate, chemistry, and radiation applications. Here we present some capabilities of NAST-I measurements and the corresponding geophysical retrievals, and their potential benefits toward enhancing characterization and understanding of the Planetary Boundary Layer (PBL). Initial results of PBL height estimation and thermodynamic characterization, and their time evolution, from NAST-I measurements during the WH2yMSIE field campaign are presented.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A.03.04 - POSTER - Model-data interfaces and the carbon cycle

The increasing provision of synergistic observations from diverse and complementary EO missions relevant to the carbon cycle underlines a critical challenge: generating consistency in multi-variate EO datasets whilst accounting for the differences in spatial scale, time of acquisition and coverage of the different missions. It also entails the requirement to improve models of the carbon cycle to ensure they can fully exploit the observation capabilities provided by both EO data and enhanced global ground networks. This implicitly means increasing the spatial resolution of the models themselves to exploit the spatial richness of the data sources, as well as improving the representation of processes, including introducing missing processes, especially those describing vegetation structure and vegetation dynamics on both long and short timescales, while ensuring consistency across spatial scales (national, regional, global).

Understanding and characterisation of processes in the terrestrial carbon cycle, especially with reference to the estimation of key fluxes, requires improved interfaces between models, in situ observations and EO. It also requires research to ensure an appropriate match between what is observed on the ground, what is measured from space, their variability in space and time, and how the processes explaining this dynamism are represented in models. This, in turn, allows the assessment of the impacts of scale, in particular how processes operating at fine scales affect global carbon pools and fluxes. It implies close collaboration between the Earth observation community, land surface and carbon modellers, and experts in disciplines such as ecosystems, hydrology and water cycle research.

This session is dedicated to progress in model-data interfaces and the appropriate coupling of EO observations of different types, processes and variables with in-situ observations and models to ensure the observations collectively and the models are consistent and compatible.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Advancing long-term ecosystem assessments by unifying multi-sensor Earth Observation Data with self-supervised Deep Learning

Authors: Zayd Mahmoud Hamdi, Dr. Sophia Walther, Dr. Martin Jung, Gregory Duveiller, Dr. Qi Yang, Vitus Benson, Ulrich Weber, Sebastian Hoffmann, Dr. Christian Reimers, Fabian Gans
Affiliations: Max Planck Institute for Biogeochemistry
Consistent and continuous long-term monitoring of Earth's processes such as land surface dynamics and meteorological changes is crucial to understand their variability beyond seasonal scales as well as their implications for global models of ecosystems, the land surface and the climate. Spaceborne observations of surface reflectance, land surface temperature, soil moisture, and other variables are crucial for analyzing ecosystem behaviour and fluxes, as well as for related modeling activities globally. However, the finite lifespans of satellite missions pose significant challenges to obtaining temporally consistent observations over extensive periods. Furthermore, the evaluation of spaceborne observations in conjunction with concurrent in-situ observations, and their use in modeling activities, is limited by the decommissioning of platforms. We address these challenges with a deep learning approach built on a transformer encoder architecture. As a showcase, this architecture is trained jointly on datasets from the Moderate Resolution Imaging Spectroradiometer (MODIS) and its successor, the Visible Infrared Imaging Radiometer Suite (VIIRS). The goal is to learn a latent representation that captures the important features across both sensors. This offers the ability to analyze and predict across the entire temporal span (the MODIS-only period, the overlap period, and the post-MODIS VIIRS period). Preliminary testing on surface reflectance demonstrates that this approach captures the temporal and spatial dynamics of globally sampled pixels with high consistency. Moreover, the method is conceptually flexible, enabling adaptation to other variables such as land surface temperature and soil moisture. It is also not specific to this sensor pair, but can be adapted to other combinations, such as MODIS/VIIRS to Sentinel-3, without requiring extensive training.
By enhancing the continuity and compatibility of Earth observation datasets, this approach aligns with the critical need to fuse data from satellites and ground-based networks for flux modeling. Potentially the latent representations can become predictors for ecosystem models themselves. The framework allows to match observed data with spatial and temporal variability, potentially improving the long-term representation of dynamic processes in ecosystem and carbon cycle models. Ultimately, this work lowers the barrier to use multi-sensor Earth observation datasets for long-term environmental monitoring and predictive modeling across diverse applications.
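As a much simpler illustration of the cross-sensor continuity problem this abstract addresses, the hedged numpy sketch below fits a linear VIIRS-to-MODIS mapping on a synthetic overlap period and uses it to extend a MODIS-like reflectance record into the VIIRS-only era. The band count, mixing matrix and noise levels are all invented for illustration; the actual work uses a self-supervised transformer encoder, not this linear stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (all numbers invented): 4 "MODIS-like" and 4 "VIIRS-like"
# reflectance bands; VIIRS sees the same scenes through a slightly different
# spectral response, modelled here as a fixed linear mixing plus noise.
true_mix = np.array([[0.90, 0.10, 0.00, 0.00],
                     [0.05, 0.90, 0.05, 0.00],
                     [0.00, 0.10, 0.85, 0.05],
                     [0.00, 0.00, 0.10, 0.90]])

modis_overlap = rng.uniform(0.0, 0.6, size=(500, 4))
viirs_overlap = modis_overlap @ true_mix.T + rng.normal(0.0, 0.005, (500, 4))

# Fit a linear harmonisation VIIRS -> MODIS on the overlap period (ordinary
# least squares with an intercept column), a crude stand-in for a learned
# shared latent representation.
X = np.hstack([viirs_overlap, np.ones((len(viirs_overlap), 1))])
coef, *_ = np.linalg.lstsq(X, modis_overlap, rcond=None)

# Extend the MODIS-like record through the post-MODIS, VIIRS-only period.
modis_post = rng.uniform(0.0, 0.6, size=(100, 4))   # held-out "truth" for checking
viirs_post = modis_post @ true_mix.T + rng.normal(0.0, 0.005, (100, 4))
modis_like = np.hstack([viirs_post, np.ones((100, 1))]) @ coef

rmse = np.sqrt(np.mean((modis_like - modis_post) ** 2))
```

On this synthetic example the harmonised record recovers the MODIS-like reflectances to well within the imposed noise level, which is the property a shared latent representation is meant to provide across the sensor transition.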

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A New Operational Global Terrestrial Ecosystem Gross Primary Productivity (GPP) Product: The Quantum Yield (QY) GPP Product.

Authors: Booker Ogutu, Mr Finn James, Mr Sven Berendsen, Dr Stephen Plummer, Prof Jadunandan Dash
Affiliations: School of Geography and Environmental Science, University of Southampton, European Space Agency
Gross primary productivity (GPP) represents the amount of carbon dioxide (CO₂) fixed by plants through photosynthesis per unit area and time and is a key component of the global carbon cycle. Accurate quantification of GPP is critical for understanding the global carbon cycle and how terrestrial ecosystems might respond to global environmental change. Here we present a new global GPP product derived using the Quantum Yield (QY) model (the QY-GPP product). The QY model calculates GPP as the product of a photosynthetic-pathway-specific (i.e., C₃ or C₄) quantum yield term (α, mol/mol) and the time-averaged fraction of photosynthetically active radiation absorbed by photosynthetic pigments in the canopy (FAPARchl) derived from Sentinel-3 OLCI data. The evaluation of the QY-GPP product across various biomes using data from two flux tower networks (the Integrated Carbon Observation System (ICOS) and AmeriFlux networks, n = 2350) showed that the QY-GPP product was close to in-situ GPP measurements across biomes (R² = 0.72; RMSE = 3.16 gC/m²/day; MAE = 2.5 gC/m²/day; Bias = 1.39 gC/m²/day). Additionally, when compared with two operational satellite-based GPP products (the Copernicus Global Land Service Gross Dry Matter Productivity (CGLS-GDMP) and MOD17 GPP), the QY-GPP product explained more of the variability of the in-situ measurements at flux tower sites (QY-GPP: R² = 0.77; CGLS-GDMP: R² = 0.74; MOD17: R² = 0.60). The satisfactory performance of the QY-GPP product shows its potential for application in carbon cycle research (e.g., monitoring the dynamics of the global carbon cycle) and in a broad range of applications (e.g., carbon accounting) at regional to global scales. The QY-GPP product relies on the operational Sentinel-3 OLCI land product and ECMWF reanalysis products, which makes it feasible to produce operationally at the global scale.
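A minimal sketch of a quantum-yield-type GPP calculation and of the evaluation metrics quoted above (R², RMSE, MAE, bias). The abstract does not spell out the full QY formulation, so the product form with incident PAR and the illustrative α and input values below are assumptions, not the product's actual algorithm.

```python
import numpy as np

def qy_gpp(alpha, fapar_chl, par):
    """Quantum-yield-type GPP sketch: GPP = alpha * FAPARchl * PAR.

    alpha     : quantum yield (mol CO2 / mol photons), pathway-dependent (assumed)
    fapar_chl : fraction of PAR absorbed by photosynthetic pigments (0-1)
    par       : incident PAR (mol photons / m2 / day)
    Returns GPP in gC/m2/day (12.011 g C per mol CO2 fixed).
    """
    return 12.011 * alpha * fapar_chl * par

def evaluate(pred, obs):
    """Validation metrics of the kind reported against flux-tower GPP."""
    resid = pred - obs
    return {
        "R2": 1.0 - np.sum(resid ** 2) / np.sum((obs - obs.mean()) ** 2),
        "RMSE": np.sqrt(np.mean(resid ** 2)),
        "MAE": np.mean(np.abs(resid)),
        "Bias": np.mean(resid),
    }

# Illustrative call (values invented): C3-like alpha, OLCI-derived FAPARchl.
gpp = qy_gpp(alpha=0.05, fapar_chl=0.5, par=40.0)
```

The `evaluate` helper mirrors the R²/RMSE/MAE/Bias statistics used in the abstract's comparison against ICOS and AmeriFlux towers.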

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Complementing global-to-local scale terrestrial carbon-water models with Earth Observation

Authors: Sujan Koirala, Dr. Martin Jung, Hoontaek Lee, Tina Trautmann, Lazaro Alonso Silva, Bernhard Ahrens, Felix Cremer, Fabian Gans, Markus Reichstein, Nuno Carvalhais
Affiliations: Department of Biogeochemical Integration, Max Planck Institute for Biogeochemistry, Institute of Physical Geography, Goethe University
Biogeochemical processes influencing climate feedback across scales tightly link terrestrial carbon and water cycles. Yet, biosphere-climate feedback remains one of the largest uncertainties in global terrestrial models. This points to evidence that our understanding of the coupled water-carbon processes is potentially limited by incomplete observational data and under-constrained models, highlighting the need for methods integrating observational data with terrestrial ecosystem models. This study introduces the SINDBAD model-data integration (MDI) framework, which seamlessly integrates diverse observational data to constrain terrestrial models of varying complexities and scales. We demonstrate the relevance of a modular MDI workflow to test hypotheses and parameterizations across scales, and in particular, how Earth Observation (EO) data can help complement and constrain the models. At the global scale, using EO-based vegetation indices in simpler models enhances simulations of monthly runoff and terrestrial water storage in arid regions. Such a parsimonious model is also able to represent global CO2 exchange and its relationship with the water cycle at the global scale. At the regional scale, employing vegetation fraction data from EO geostationary satellites in coupled water-carbon models significantly improves the simulation of gross primary productivity interannual variability. At the ecosystem scale, linking fluxes and states by prognostically coupling water-carbon controls on productivity and carbon allocation benefits from remote sensing EO of vegetation states, adding constraints on the carbon cycle beyond eddy covariance measurements. We can thus conclude that, with appropriate data and EO constraints, an across-scale approach underpins hypothesis testing, enhancing our understanding and quantification of carbon-water interactions.
However, the model-observation discrepancies, at a given scale, are quantitatively comparable to differences among observations and observation-based products. To address this, we extend the SINDBAD framework to learn representations and assumptions on process parameterizations. To do so, we combine local-scale observations of fluxes and stocks, along with local information on ecosystem functional properties, to learn the spatial variability of physical model parameters describing carbon-water dynamics. Leveraging such hybrid modeling approaches may pave the way to developing physically sound process parameterizations, ultimately improving the representation of coupled carbon-water dynamics from local to global scales.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Remote Quantification of Soil Organic Carbon: Role of Topography in the Intra-field Distribution

Authors: Ben Cutting, Professor Clement Atzberger, Professor Asa Gholizadeh, Professor David A. Robinson, Doctor Jorge Mendoza-Ulloa, Professor Belen Marti-Cardona
Affiliations: University of Surrey
Quantitative measurements of soil organic carbon (SOC) are important for monitoring soil health and for the study of land-atmosphere carbon fluxes. Traditional methods for SOC determination involve the acquisition and laboratory analysis of soil samples across a given study area. Despite their accuracy, these methods are expensive, time consuming, and require access to the area in question. One way to circumvent some of these limitations is through remote sensing, which offers the possibility of quantifying SOC over large and potentially inaccessible areas in a periodic and cost-effective way. In recent years, numerous studies have sought to relate Earth observation spectral data to SOC content. Many of these studies consider large areas with relatively low sampling densities, with the aim of covering a varied range of soil types. While this is important, SOC has a highly varied distribution even at the crop-field scale. Indeed, SOC levels can vary by over 50% within a small area of cropland (Li et al., 2024). Understanding this intra-field variability has the potential to unveil important drivers of SOC changes within the soil. A driver of particular importance at both large and intra-field scales is topography, which is closely related to the movement and accumulation of water and material across the landscape and, consequently, contributes to the SOC distribution. The resulting moisture level can induce either aerobic or anaerobic conditions which, in turn, influence the carbon flux and so the proportion of SOC stored within the soil (Linn and Doran, 1984). Therefore, the study of topographical covariates as drivers of SOC change is of significant importance, particularly for capturing fine-scale variations and highlighting the influence of micro-topographical features at high resolutions. This study undertook a high-density SOC sampling campaign at three crop fields in Southeast England, synchronous with a clear-sky Sentinel-2 observation of the area.
In addition, a hyperspectral UAV flight and a lidar survey were conducted in conjunction with the sampling campaign. These data facilitated the creation of a range of models developed to predict SOC from topographical features and single-date and multi-date spectral data. The importance of the different predictors was analysed at a high-resolution, intra-field scale. Sentinel-2 spectral data of the study fields were acquired for the exact day of the sampling campaign, and over an interval of 18 months before and after this date. Random Forest (RF) and Support Vector Regression (SVR) models were trained and tested on the spectral and topographical data to predict the observed SOC values. Five different sets of model predictors were assessed, using single-date and multi-date spectral data and topographical features independently and in combination, for the SOC sampling points. Both RF and SVR models performed best when trained on multitemporal Sentinel-2 data together with topographic features, achieving validation root-mean-square errors (RMSEs) of 0.293% and 0.229%, respectively (Cutting et al., 2024). These RMSEs are competitive when compared with those reported in the literature for similar models. Of the input set, topographical features were found to be the most important, specifically the topographic wetness index (TWI), a parameter closely linked to the accumulation of soil water, which exhibited the highest permutation importance for virtually all models. However, contrary to the positive relationship observed by Minhoni et al. (2021) in drier climates at a similar scale, TWI was found to be negatively related to SOC levels in the study fields. This disagreement may suggest a different role of soil wetness in SOC storage across climatic regimes at an intra-field scale.
References:
Cutting, B. J., Atzberger, C., Gholizadeh, A., Robinson, D. A., Mendoza-Ulloa, J. & Marti-Cardona, B. 2024. Remote Quantification of Soil Organic Carbon: Role of Topography in the Intra-Field Distribution. Remote Sensing, 16, 1510.
Li, W., Yang, Z., Jiang, J. & Sun, G. 2024. Spatial Variation and Stock Estimation of Soil Organic Carbon in Cropland in the Black Soil Region of Northeast China. Agronomy, 14, 2744.
Linn, D. M. & Doran, J. W. 1984. Aerobic and Anaerobic Microbial Populations in No-till and Plowed Soils. Soil Science Society of America Journal, 48, 794-799.
Minhoni, R. T. D. A., Scudiero, E., Zaccaria, D. & Saad, J. C. C. 2021. Multitemporal satellite imagery analysis for soil organic carbon assessment in an agricultural farm in southeastern Brazil. Science of The Total Environment, 784, 147216.
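The permutation-importance analysis described in this abstract can be sketched as follows on synthetic data. A plain least-squares model stands in for the study's RF/SVR models, and all predictor values, coefficients and the negative TWI-SOC relation are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400

# Invented intra-field predictors: TWI drives SOC (negatively, as in the study
# fields), elevation contributes weakly, and a spectral band is unrelated.
twi = rng.normal(8.0, 2.0, n)          # topographic wetness index
elev = rng.normal(50.0, 5.0, n)        # elevation (m)
red = rng.uniform(0.05, 0.25, n)       # Sentinel-2-like red reflectance
soc = 3.0 - 0.25 * twi + 0.02 * (elev - 50.0) + rng.normal(0.0, 0.05, n)

# Plain least squares stands in for the RF / SVR models of the study.
X = np.column_stack([twi, elev, red, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, soc, rcond=None)

def rmse(y, yhat):
    return np.sqrt(np.mean((y - yhat) ** 2))

base = rmse(soc, X @ beta)

# Permutation importance: RMSE increase when one predictor column is shuffled
# while the fitted model is kept fixed.
importance = []
for j in range(3):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(rmse(soc, Xp @ beta) - base)
```

By construction, shuffling TWI degrades the fit most (`importance[0]` dominates) and the fitted TWI coefficient is negative, mirroring the negative TWI-SOC relation reported above.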

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Capturing Short-Term Dynamics in ASCAT Vegetation Parameters

Authors: Paco Frantzen, Susan Steele-Dunne, Tristan Quaife, Mariette Vreugdenhil, Sebastian Hahn, Wolfgang Wagner
Affiliations: Delft University of Technology, University of Reading, Vienna University of Technology
The relationship between microwave backscatter and incidence angle, as observed by the Advanced Scatterometer (ASCAT) onboard the Metop satellites, provides valuable insights into vegetation water content and structure. The so-called Dynamic Vegetation Parameters (DVP), representing the first (slope) and second (curvature) derivatives of this relationship, have been used in various studies to monitor changes in vegetation water content (Pfeil et al., 2020; Petchiappan et al., 2022) and structure (Steele-Dunne et al., 2019). Another potential use of DVP lies in constraining vegetation states in land surface models (Shan et al., 2024). While DVP are largely affected by changes in vegetation water content and structure, it was found that on short time scales they may also be affected by soil moisture and intercepted precipitation (Greimeister-Pfeil et al., 2022). Currently, DVP time series are derived using a kernel smoother that weights observations by their temporal proximity to each day, following the Epanechnikov kernel. While this approach is effective for capturing the seasonal variability of DVP, it struggles to accurately represent the timing of short-term changes in the input data, because these are smoothed out. Furthermore, the smoothing of short-term changes also compromises the quality of the estimated DVP time series, because a short-term change can affect the estimates for multiple weeks and introduce adverse artifacts, depending on the kernel halfwidth. It is crucial to preserve the timing of short-term variations in the estimation process so that we can disentangle the effects of various contributions to the ASCAT slope. This will allow for more accurate comparisons with independent data on soil and vegetation states. In addition, it will allow us to isolate and remove high-frequency variability due to, e.g., intercepted precipitation or soil moisture in any analysis relating ASCAT slope to biomass or vegetation water content. In this study, an alternative method based on the temporally constrained least squares approach of Quaife and Lewis (2010) is evaluated for estimating ASCAT DVP without the use of a smoothing kernel. The results show that this method better preserves the timing of short-term changes, while matching the Epanechnikov kernel in terms of aggregated validation metrics such as the unbiased root mean squared error. Ongoing research focuses on the influence of this alternative approach on the estimated ASCAT slope, and on its potential to improve our ability to relate ASCAT slope to independent observations of soil and vegetation states.
References:
Greimeister-Pfeil, I., Wagner, W., Quast, R., Hahn, S., Steele-Dunne, S., and Vreugdenhil, M. (2022). Analysis of short-term soil moisture effects on the ASCAT backscatter-incidence angle dependence. Science of Remote Sensing, 5:100053.
Petchiappan, A., Steele-Dunne, S. C., Vreugdenhil, M., Hahn, S., Wagner, W., and Oliveira, R. (2022). The influence of vegetation water dynamics on the ASCAT backscatter–incidence angle relationship in the Amazon. Hydrology and Earth System Sciences, 26(11):2997–3019.
Pfeil, I., Wagner, W., Forkel, M., Dorigo, W., and Vreugdenhil, M. (2020). Does ASCAT observe the spring reactivation in temperate deciduous broadleaf forests? Remote Sensing of Environment, 250:112042.
Quaife, T. and Lewis, P. (2010). Temporal Constraints on Linear BRDF Model Parameters. IEEE Transactions on Geoscience and Remote Sensing, 48.
Shan, X., Steele-Dunne, S., Hahn, S., Wagner, W., Bonan, B., Albergel, C., Calvet, J.-C., and Ku, O. (2024). Assimilating ASCAT normalized backscatter and slope into the land surface model ISBA-A-gs using a deep neural network as the observation operator: Case studies at ISMN stations in Western Europe. Remote Sensing of Environment, 308:114167.
Steele-Dunne, S. C., Hahn, S., Wagner, W., and Vreugdenhil, M. (2019). Investigating vegetation water dynamics and drought using Metop ASCAT over the North American Grasslands. Remote Sensing of Environment, 224:219–235.
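The contrast between the two estimation approaches can be sketched as follows: an Epanechnikov kernel smoother versus a least-squares fit with a first-difference (temporal smoothness) penalty, in the spirit of — though not identical to — the temporally constrained method evaluated in the study. The step signal and all parameter values are invented.

```python
import numpy as np

def epanechnikov_smooth(t, y, halfwidth):
    """Kernel-weighted running mean with K(u) = 0.75*(1 - u^2) for |u| <= 1."""
    out = np.empty_like(y, dtype=float)
    for i, ti in enumerate(t):
        u = (t - ti) / halfwidth
        w = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)
        out[i] = np.sum(w * y) / np.sum(w)
    return out

def constrained_lsq(y, lam):
    """argmin ||y - x||^2 + lam * ||D x||^2, D the first-difference operator."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)        # (n-1, n): (D x)_i = x_{i+1} - x_i
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

# Noisy step signal: a short-term change whose timing we want to preserve.
t = np.arange(100, dtype=float)
rng = np.random.default_rng(2)
truth = np.where(t < 50, -1.0, -0.5)      # an ASCAT-slope-like jump (invented)
y = truth + rng.normal(0.0, 0.02, t.size)

smoothed = epanechnikov_smooth(t, y, halfwidth=15.0)
constrained = constrained_lsq(y, lam=5.0)
```

The kernel smears the step across its halfwidth, while the regularized least-squares estimate keeps the transition much sharper near the jump — the timing-preservation property the abstract emphasises.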

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Constraining vegetation turnover rates in Terrestrial Biosphere Model using L-band backscatter

Authors: Xu Shan
Affiliations: Max Planck Institute for Biogeochemistry
An improved representation of carbon and water cycle dynamics in terrestrial ecosystems underpins a large uncertainty reduction in modeling Earth system dynamics. The climate sensitivity of ecosystem processes controls land-atmosphere interactions and the overall atmospheric carbon uptake and release dynamics across scales. Local and Earth observations of vegetation dynamics are key for evaluating our understanding and support the quantification of process representation in model development. Previous research has shown the importance of reducing equifinality using multi-variate observational constraints, focusing on water and carbon fluxes and stocks. Long-wavelength radar backscatter provides unique insights into plant water and carbon dynamics compared to optical EO products and, as such, embeds the potential for constraining various parameters controlling local climate-vegetation responses. In this study, we present an approach for assimilating Earth observation backscatter data in a terrestrial ecosystem model to improve estimates of vegetation turnover rates. Among others, we focus on the information content of L-band ALOS PALSAR data in constraining vegetation dynamics at selected FLUXNET sites, where carbon and water fluxes and stocks are observed. Using a radar observation operator, a standard radiative transfer model, we design a model-data integration experiment to investigate the benefits of multiple backscatter observations versus above-ground biomass alone in constraining model parameters. The experimental setup focuses on the trade-off between the information content of backscatter, with uncertainties from the observation operator, and sparse above-ground biomass observations in constraining parameters that control leaf and wood pool dynamics in vegetation. Current results indicate that the assimilation improves the estimation of above-ground biomass and the constraints on turnover rates for both foliage and woody pools.
Data sparsity and availability, together with prior model uncertainty, exert control on model performance and parameter constraints. Ultimately, this study highlights the potential of L-band backscatter to enhance vegetation carbon cycle modeling, emphasizes the added value of the upcoming ESA BIOMASS mission, and underscores the importance of integrating vegetation water dynamics into carbon models.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Improving the monitoring of vegetation and drought by land surface models through the assimilation of satellite data

Authors: Jean-Christophe Calvet, Bertrand Bonan, Yann Baehr, Timothée Corchia, Oscar Rojas-Munoz, Pierre Vanderbecken, Jasmin Vural
Affiliations: Météo-France, CNRS, CNES
Severe droughts are becoming more intense and widespread. Vegetation fires are being observed in regions where such events have never occurred before. Clay shrinkage is causing increasing damage to houses. Current climate trends have reached a critical level where the tools of climate science and everyday disaster monitoring need to converge. While this study does not directly investigate tipping points, the tools and products being developed have the potential to better investigate and monitor them. Land data assimilation aims to monitor the evolution of soil and vegetation variables. These variables are driven by climatic conditions and anthropogenic factors such as agricultural practices. Land surface monitoring includes a number of variables of the soil-vegetation system such as land cover, snow depth, surface albedo, soil water content and leaf area index (LAI). These variables can be monitored by integrating satellite observations into models through data assimilation. Monitoring land variables is particularly important in a changing climate, as unprecedented environmental conditions and trends emerge. Unlike atmospheric variables, land variables are not chaotic per se, but rapid and complex processes affecting the land carbon budget, such as forest management (thinning, deforestation, ...), forest fires and agricultural practices, are not easily predictable with good temporal precision. They cannot be accurately monitored without integrating observations as they become available. Because data assimilation is able to balance information from contrasting sources and account for their uncertainties, it can produce an analysis of variables that is the best possible estimate. Data assimilation can involve several techniques, such as 'model parameter tuning', variational assimilation or sequential Kalman filtering methods. The latter are used in meteorology and in some land modelling frameworks to improve initial conditions (e.g.
root zone soil moisture) at a given time. New research is being undertaken to assess the impact of improving vegetation initial conditions, as vegetation has a memory of past environmental conditions, as does soil moisture. Vegetation variables such as LAI control the amount of evapotranspiration and their initial conditions have a predictive capability. Examples are given of how data assimilation can be implemented on a global scale by regularly updating model state variables through a sequential assimilation approach. The focus is on LAI assimilation and the use of machine learning techniques to build observation operators that allow direct assimilation of new vegetation sensitive observations such as microwave backscatter and brightness temperature or solar induced fluorescence. We show that the analysis of LAI together with root zone soil moisture is necessary to monitor the effects of irrigation, drought and heat waves on vegetation, and that LAI can be predicted after proper initialisation. We also show that machine learning can be used to derive new variables (e.g. surface albedo, vegetation moisture) from those already calculated by the land surface model. This paves the way for new developments such as more interactive assimilation of land variables into numerical weather prediction and seasonal forecasting models, as well as atmospheric chemistry models. Examples of CO2MVS applications will be presented. These results can be extrapolated to the monitoring of vegetation fire danger at different spatial resolutions. The latter will be developed in the framework of the Green Deal governance action GreenEO (2025-2029).
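The sequential assimilation idea described above — forecast with the model, then update the state when an observation arrives — can be illustrated with a scalar Kalman filter for LAI. The toy relaxation model, error variances and 10-day observation interval below are invented; operational systems update full state vectors with ensemble or simplified extended Kalman filters.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented toy: modelled LAI relaxes toward a seasonal climatology, while the
# real canopy is persistently denser; a noisy satellite LAI observation is
# assimilated every 10 days.
days = np.arange(200)
clim = 2.0 + 1.5 * np.sin(2 * np.pi * days / 365.0)   # climatological LAI
truth = clim + 0.8                                    # "true" LAI (invented offset)

lai, P = clim[0], 1.0          # state estimate and its error variance
Q, R = 0.01, 0.09              # model / observation error variances (invented)
analysis = []
for d in days:
    # Forecast step: relax toward climatology; error variance grows by Q.
    lai = lai + 0.05 * (clim[d] - lai)
    P = (1 - 0.05) ** 2 * P + Q
    # Analysis step: Kalman update whenever an observation is available.
    if d % 10 == 0:
        obs = truth[d] + rng.normal(0.0, np.sqrt(R))
        K = P / (P + R)                 # Kalman gain
        lai = lai + K * (obs - lai)
        P = (1 - K) * P
    analysis.append(lai)
analysis = np.array(analysis)
```

Without assimilation the model would sit on the climatology (a constant 0.8 LAI error here); the sequential updates pull the analysis substantially closer to the truth between observations, which is the memory effect the abstract exploits for prediction.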

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Quantifying the spatio-temporal heterogeneity around eddy-covariance towers to improve upscaling with remote sensing

Authors: Daniel E. Pabon-Moreno, Arianna Lucarini, Dr. Jacob A. Nelson, Giacomo Nicolini, Luca Di Fiore, Dario Papale, Gregory Duveiller
Affiliations: Max Planck Institute For Biogeochemistry, Department of Agricultural Science, University of Sassari, Department of Climate Change and Sustainable Development, School for Advanced Studies IUSS, Fondazione Centro Euro-Mediterraneo sui Cambiamenti Climatici, University of Tuscia
The eddy covariance (EC) technique is commonly used to measure the exchange of energy and matter between the biosphere and the atmosphere. EC measurement towers are located around the globe, covering different ecosystem types, climates, and disturbance/management regimes. In recent decades, EC data have been used in combination with satellite imagery and climate products to train machine learning models and predict ecosystem fluxes at the global scale. One source of uncertainty that is often underestimated in upscaling exercises is the degree of matching between the observational footprints of the EC tower measurements and those of the satellite sensor. Depending on the spatial resolution of the satellite sensor and the characteristics of the EC tower, the mismatch between what is measured by the tower and what is observed by the sensor can potentially bias the upscaled estimation of carbon fluxes at regional and global scales. In the present study, we use images from the Sentinel-2 satellites at 20 m resolution to assess the spatio-temporal heterogeneity of the landscape around the EC towers of the FLUXNET network. To quantify their mismatch, we use the Jensen-Shannon (JS) distance, which measures the amount of information shared between two probability distributions. We compute the JS distance for different concentric areas around the EC tower, representing approximations of either the climatological footprint of the EC measurements or those of satellite measurements with increasingly coarser spatial resolutions. We found that, when considering a 70% threshold of shared information between the EC tower and the satellite resolution, only half of the FLUXNET sites are suitable for upscaling exercises. We also found that most FLUXNET sites show 5-10% temporal variability in the similarity between tower and satellite footprints, considering an EC footprint of 250 m radius and a satellite resolution of 500 m radius around the tower.
Finally, we discuss the potential bias introduced by this mismatch when machine learning techniques are used to upscale Gross Primary Production (GPP), and how to correct it.
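A hedged sketch of the Jensen-Shannon distance computation: histograms of some surface property (here invented class counts) are compared between a tower-footprint disc and a coarser pixel, and shared information is taken as one minus the JS distance. That conversion, the base-2 logarithm (which bounds the distance in [0, 1]) and all counts are assumptions for illustration; the abstract does not spell out these details.

```python
import numpy as np

def js_distance(p, q, eps=1e-12):
    """Jensen-Shannon distance: sqrt of the JS divergence, log base 2, in [0, 1]."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log2(a[mask] / (b[mask] + eps)))

    return np.sqrt(max(0.5 * kl(p, m) + 0.5 * kl(q, m), 0.0))

# Invented example: binned class counts around the tower vs two "pixels".
footprint = np.array([40, 30, 20, 10])     # counts inside a 250 m footprint
pixel_fine = np.array([38, 31, 21, 10])    # fine pixel: similar composition
pixel_coarse = np.array([10, 15, 30, 45])  # coarse pixel: different landscape mix

shared_fine = 1.0 - js_distance(footprint, pixel_fine)
shared_coarse = 1.0 - js_distance(footprint, pixel_coarse)
```

With a 70% shared-information threshold, as used in the abstract, the fine pixel would be retained for upscaling while the coarse pixel would be rejected.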

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Optimizing Data for a Spatially Explicit Forest Carbon Model for the EU: A Case Study of Finland

Authors: Gonzalo Oton Azofra, Viorel Blujdea, Roberto Pilli, Mirco Migliavacca, Giacomo Grassi
Affiliations: European Commission, Joint Research Centre (JRC), Independent researcher providing service to the Joint Research Centre, European Commission
The European Union is committed to becoming the first continent to achieve carbon neutrality by 2050, with net-zero greenhouse gas emissions as outlined in the European Green Deal and the European Climate Law. This commitment is in line with global efforts to combat climate change under the Paris Agreement. Achieving neutrality will require robust technological and ecosystemic carbon sinks in the coming years. The IPCC has outlined methodologies for monitoring and reporting forest carbon stocks and changes, essential for offsetting anthropogenic emissions. To evaluate the pathways to the 2050 target, reliable models are needed. The models must capture forestry processes both in the short term and for intermediate targets, such as 2030 and the forthcoming 2040 target. To accurately monitor carbon dynamics and report on the forest carbon balance, the Carbon Budget Model developed for the Canadian Forest Sector (CBM-CFS3) has been adapted for European forests, resulting in the EU-CBM-HAT. This model provides country- and regional-scale insights into forestry indicators and carbon dynamics under given forest management scenarios. Building upon the CBM, the Canadian team has developed a workflow for applying the CBM in a spatially explicit manner through the Generic Carbon Budget Model (GCBM). This scalable model offers enhanced granularity at the pixel level and produces a time series of spatial forest carbon indicators. Currently, a prototype of the GCBM model is being tested in Europe, with Finland serving as a case study, representing an initial step toward its pan-European application.
The development encompasses three major phases: a) selection of the optimal data source for the initialization of standing volume/biomass/age and species distribution, with data obtained via remote sensing; b) the implementation of stand growth, based on data generally derived from ground observations of National Forest Inventories; and c) the incorporation of silvicultural practices and natural disturbances, such as wildfires and windstorms, utilizing remote sensing data. The model also incorporates climate data and administrative/ecological factors. The output comprises time series maps at a 100-meter resolution, offering a robust framework for understanding and managing forest carbon dynamics and thereby enhancing the decision-making capabilities of stakeholders and forest managers.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Towards a multidecadal record of above ground biomass from active and passive microwave observations

Authors: Samuel Favrichon, Maurizio Santoro, Oliver Cartus, Catherine Prigent, Carlos Jimenez
Affiliations: Gamma Remote Sensing AG, LERMA, Observatoire de Paris, Estellus
Global vegetation plays a critical role in the global carbon budget, storing the largest proportion of terrestrial carbon. Human activities and changes to the global climate affect the state of global forests through regional decreases in extent, but also through anthropogenic or natural vegetation growth. Significant uncertainties remain in the relative contributions of regional carbon sinks. Improving estimates of carbon fluxes is essential for better climate modelling and for informing policymakers more effectively. While changes in above-ground biomass (AGB) stocks can be reliably obtained from inventory data, they can be complemented by consistent remote-sensing-based estimates of vegetation dynamics on a global scale over multiple decades. Satellite data records now span more than 40 years, with microwave remote sensing providing unique capabilities for continuous observation of terrestrial surfaces. Microwave remote sensing below 36 GHz, both passive and active, is sensitive to vegetation, with canopy penetration and atmospheric transparency increasing with decreasing frequency. In addition, passive microwave datasets can provide daily global coverage. However, multiple instruments are required to achieve multi-decadal record lengths, and these instrument changes and observation specificities must be addressed to create homogeneous long-term time series. This study examines the consistency of instrument data records, and strategies to correct discontinuities across sensors. Methods to combine multiple sources of measurements to achieve the best estimation of AGB on a global scale are also presented. For active sensors, intercalibration was performed between instrument types and frequencies (Tao et al. 2022). Here, we show how differences in the overpass times of the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I) and Special Sensor Microwave Imager Sounder (SSMIS) (Fennig et al. 2020) observations at 18 and 36 GHz can be corrected to generate a 30+ year data record of passive microwave observations. Using the CCI Biomass dataset of above-ground biomass (Santoro et al. 2023) as a reference, we combine active and passive observations to estimate AGB at a coarse resolution (~12.5 km) using a method tested during the ESA BiomAP project (Integrating Active and Passive microwave data towards a novel global record of above ground biomass maps) (Prigent et al. 2022). The resulting biomass estimates have an R² > 0.85 compared to the reference dataset. The model can then be applied to all available sensor combinations and further analysed to evaluate limitations and uncertainties in biomass retrieval. The resulting time series offer insights into large-scale vegetation growth and biomass decline since 1992, providing a basis for comparison with other estimates of global AGB variations, such as those derived from models or plot data.
References:
[1] Karsten Fennig, Marc Schröder, Axel Andersson, and Rainer Hollmann. A fundamental climate data record of SMMR, SSM/I, and SSMIS brightness temperatures. Earth System Science Data, 12(1):647–681, 2020.
[2] M. Santoro and O. Cartus. ESA Biomass Climate Change Initiative (Biomass CCI): global datasets of forest above-ground biomass for the years 2010, 2017, 2018, 2019 and 2020. NERC EDS Centre for Environmental Data Analysis, 2023.
[3] Shengli Tao, Zurui Ao, Jean-Pierre Wigneron, Sassan Saatchi, Philippe Ciais, Jérôme Chave, Thuy Le Toan, Pierre-Louis Frison, Xiaomei Hu, Chi Chen, et al. C-band scatterometer (CScat): the first global long-term satellite radar backscatter data set with a C-band signal dynamic. Earth System Science Data Discussions, 2022:1–30, 2022.
[4] Catherine Prigent and Carlos Jimenez. An evaluation of the synergy of satellite passive microwave observations between 1.4 and 36 GHz, for vegetation characterization over the Tropics. Remote Sensing of Environment, 257 (2021): 112346.
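The combination of active and passive observations against a reference AGB map can be sketched as a simple multiple regression; the following is an illustrative sketch on synthetic data (the signal models, noise levels, and linear form are assumptions for illustration, not the BiomAP retrieval itself):

```python
import numpy as np

# Illustrative sketch: fit a linear model mapping co-located passive
# (e.g. 18 GHz brightness-temperature-derived) and active (scatterometer
# backscatter) signals to a reference AGB map, then the fitted model could
# be applied to other sensor combinations. All data here are synthetic.
rng = np.random.default_rng(42)
n = 1000                                            # grid cells at ~12.5 km
agb_ref = rng.uniform(0, 300, n)                    # reference AGB (Mg/ha)
passive = 0.01 * agb_ref + rng.normal(0, 0.2, n)    # synthetic passive signal
active = 0.05 * agb_ref + rng.normal(0, 1.0, n)     # synthetic backscatter

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), passive, active])
coef, *_ = np.linalg.lstsq(X, agb_ref, rcond=None)
agb_est = X @ coef

# Coefficient of determination against the reference dataset
ss_res = np.sum((agb_ref - agb_est) ** 2)
ss_tot = np.sum((agb_ref - agb_ref.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"R^2 = {r2:.3f}")
```

With realistic data, the same fit-then-apply pattern would be repeated per sensor combination to build the multi-decadal record.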
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Assessing the impacts of recent European droughts on terrestrial vegetation gross primary productivity (GPP) using the Quantum Yield (QY) GPP Product.

Authors: Finn James, Dr Booker Ogutu, Sven Berendsen, Claire Miller, Daria Andrievskaia, Mahmoud El Hajj, Dr Stephen Plummer, Professor Jadunandan Dash
Affiliations: University Of Southampton, NOVELTIS, European Space Agency
As a consequence of climate change, the frequency and severity of droughts in Europe are expected to increase. Understanding the impacts of drought on vegetation, and thereby on its ability to offset carbon dioxide (CO2) emissions, is thus crucial for mitigating further warming. A key indicator of vegetation productivity, and of its efficiency in sequestering carbon, is gross primary productivity (GPP), which describes the carbon fixed into (and subsequently stored within) vegetation biomass. Here, we utilised GPP data derived from the quantum yield (QY) model to analyse the effect of recent (2018-2020) European droughts on vegetation at a 500 m spatial resolution. Additionally, we investigated the impact of drought seasonality (specifically, whether drought occurred in spring, summer or both seasons) on GPP, as well as whether impacts varied among different land cover classes (Rainfed Croplands; Deciduous Broadleaf Forests; Evergreen Needleleaf Forests; Mixed Forests; Grasslands). Our results show that spring droughts led to the largest overall reduction in GPP, at -22.5%, versus only -3.3% under summer drought and -17.7% under consecutive spring and summer droughts. This pattern was most pronounced in Northern Europe, whereas in Southern Europe summer drought reduced GPP more than spring drought. These trends were observed across all land cover types: Rainfed Croplands and Grasslands were especially affected by spring drought, showing reductions in GPP of around 27%. All land cover classes showed a decrease in GPP of at least 12% under spring droughts, whereas the largest reductions under summer and combined spring-summer droughts were only -3% (for Deciduous Broadleaf Forest and Evergreen Needleleaf Forest) and -8% (Evergreen Needleleaf Forest) respectively. Moreover, whilst prior research has shown that warm springs may increase GPP, our results suggest that a combination of warm springs and spring drought may yield the largest negative impacts on GPP. Such insights could be beneficial both for mitigating the impact of drought during the growing season and for anticipating likely disturbances to carbon sequestration.
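The percentage GPP reductions reported in this abstract correspond to a simple per-pixel anomaly against a multi-year baseline; a minimal sketch with synthetic data (the QY product itself is not used here, and the array shapes are illustrative):

```python
import numpy as np

# Percent change in seasonal GPP for a drought year relative to the
# multi-year baseline mean, computed per pixel on synthetic data.
rng = np.random.default_rng(0)
gpp_baseline = rng.uniform(2.0, 10.0, size=(5, 100))  # 5 normal years x 100 pixels
baseline_mean = gpp_baseline.mean(axis=0)

# Construct a drought year that is 22.5% below baseline, mirroring the
# spring-drought figure in the abstract.
gpp_drought = baseline_mean * 0.775

pct_change = 100.0 * (gpp_drought - baseline_mean) / baseline_mean
print(f"mean GPP change: {pct_change.mean():.1f}%")  # -22.5%
```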
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Verification of Terrestrial Carbon Sinks with the Terrestrial Carbon Community Assimilation System (TCCAS)

Authors: Thomas Kaminski, Wolfgang Knorr, Michael Voßbeck, Mathew Williams, Timothy Green, Dr Luke Smallman, Marko Scholze, Tristan Quaife, Tea Thum, Sönke Zaehle, Peter Rayner, Susan Steele-Dunne, Mariette Vreugdenhil, Mika Aurela, Alexandre Bouvet, Emanuel Bueechi, Wouter Dorigo, Tarek El-Madany, Mariko Honkanen, Yann Kerr, Anna Kontu, Dr. Juha Lemmetyinen, Hannakaisa Lindqvist, Dr Arnaud Mialon, Tuuli Miinalainen, Amanda Ojasalo, Shaun Quegan, Pablo Reyez Muñoz, Dr Nemesio Rodriguez-Fernandez, Mike Schwank, Jochem Verrelst, Songyan Zhu, Matthias Drusch, Dirk Schüttemeyer
Affiliations: The Inversion Lab, University of Edinburgh, University of Lund, University of Reading, Finnish Meteorological Institute, Max-Planck-Institute for Biogeochemistry, TU Delft, TU Wien, CESBIO, University of Sheffield, University of Valencia, Swiss Federal Institute for Forest, Snow and Landscape Research, University of Southampton, European Space Agency
The Paris Agreement allows the use of terrestrial carbon sinks as a climate mitigation mechanism. In this context, accurate quantification of such sinks is highly relevant. Ideally, this quantification combines process understanding incorporated in a terrestrial biosphere model with a range of observations that constrain the model simulation. To tackle this task we employ the Terrestrial Carbon Community Assimilation System (TCCAS, https://tccas.inversion-lab.com/), a development funded by the European Space Agency within its Carbon Science Cluster. TCCAS is constructed around the newly developed D&B terrestrial biosphere community model (https://doi.org/10.5194/egusphere-2024-1534). D&B builds on the strengths of its two component models, DALEC and BETHY, in that it combines the dynamic simulation of the carbon pools and canopy phenology of DALEC with the dynamic simulation of water pools, and the canopy model of photosynthesis and energy balance, of BETHY. Both component models have a long track record of successful data assimilation applications. TCCAS includes a suite of dedicated observation operators that allow the simulation of solar-induced fluorescence (SIF), fraction of absorbed photosynthetically active radiation (FAPAR), vegetation optical depth from passive microwave sensors, and surface-layer soil moisture. The model is embedded into a variational assimilation system that adjusts a control vector to match the observational data streams. For this purpose, TCCAS is provided with efficient tangent-linear and adjoint code. The control vector consists of a combination of initial pool sizes and process parameters in the core model and in the observation operators. One of the main target quantities in the context of the Paris Agreement is the simulated long-term carbon uptake. The accuracy of this quantity depends on several factors, including the combination of observational data streams assimilated by TCCAS and the model error, i.e. the model's capability to accurately simulate these data streams. We derive a specification of that model error. Based on this specification, we quantify the capability of several combinations of remote sensing and in situ data streams (SIF, soil moisture, vegetation optical depth, FAPAR, and biomass) to constrain the simulated terrestrial carbon uptake. In this context we analyse the role of the observation operators and discuss observational strategies for sink quantification, including upcoming space missions.
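The variational scheme described above minimises a cost function; the abstract does not give its exact form, but a standard variational-assimilation formulation consistent with the described setup (a prior term plus one misfit term per assimilated data stream) is:

```latex
J(\mathbf{x}) = \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_b)^{\mathsf T}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
+ \tfrac{1}{2}\sum_{i}\bigl(H_i(\mathbf{x})-\mathbf{y}_i\bigr)^{\mathsf T}\mathbf{R}_i^{-1}\bigl(H_i(\mathbf{x})-\mathbf{y}_i\bigr)
```

where \(\mathbf{x}\) is the control vector (initial pool sizes and process parameters), \(\mathbf{x}_b\) its prior, \(\mathbf{B}\) and \(\mathbf{R}_i\) the prior and observation error covariances, \(H_i\) the observation operator for data stream \(i\) (e.g. SIF, FAPAR, vegetation optical depth, soil moisture), and \(\mathbf{y}_i\) the observations; the tangent-linear and adjoint code supplies the gradient \(\nabla J\) needed by the minimiser.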
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: F.04.03 - POSTER - Desertification, land degradation and soil management

Desertification and land degradation pose a major threat to food security, ecosystem services and biodiversity conservation. Soil is not a renewable resource when viewed on a time scale of a couple of decades, and it is threatened worldwide by climate change, natural hazards and human activities. The consequences are increased soil loss due to wind and water erosion, landslides, and reduced soil quality due to organic matter loss, contamination and soil sealing. The EU Soil Monitoring Law on the protection and preservation of soils aims to address key soil threats through sustainable soil use and the preservation of soil quality and functions. Space-based earth observation data, together with in-situ measurements and modelling, can be used in an operational manner by national and international organizations with the mandate to map, monitor and report on soils. With the advent of operational EO systems with a free and open data policy, as well as cloud-based access and processing capabilities, the need for systematic, large-area mapping of topsoil characteristics with high spatial resolution that goes beyond recording degradation processes can be addressed.

We encourage submissions related to the following topics and beyond:
- Advanced earth observation-based products to monitor desertification and land degradation at a large scale
- Specific earth observation-based methods for soil-related topics such as soil parameter mapping and soil erosion mapping, as well as other soil health indicators in different pedo-climatic regions and biomes.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Land Degradation Mapping and Change Assessment for SDG 15.3.1 in the Nigeria Guinea Savannah

Authors: Ademola Adenle, Nikhil Raghuvansh, Felicia O. Akinyemi, Olena Dubovyk
Affiliations: Department of Geography, University of Bergen, Norway, Department of Geography, Federal University of Technology, Department of Earth System Sciences, University of Hamburg Germany, Geomatics, Department of Environmental and Life Sciences, Karlstad University, Universitetsgatan 2, 651 88 Karlstad, Sweden
Land degradation is one of the leading global problems undermining progress towards achieving the Sustainable Development Goals (SDGs) in Sub-Saharan Africa. However, the absence of updated, comprehensive national assessment, monitoring, and reporting of land degradation is a recurring problem among developing countries, owing to scientific, technical, and political challenges. Over the recent decade, there has been growing recognition of the severe impact of land degradation on millions of lives, livelihoods and landscapes in Nigeria, particularly in the Nigeria Guinea Savannah (NGS). This region is not only a critical biodiversity ecoregion but also the country’s most ethnically diverse area and a significant agricultural zone. Based on available national data, this study assesses the condition of three land degradation indicators, namely land cover (LC), land productivity (LP), and soil organic carbon stocks (SOC). The analysis compared the analytical effectiveness and practice of the default method (DM) and an adopted method (AM) for both the baseline period (BP, 2000-2013) and the monitoring period (MP, 2013-2022), to support the global land goals. The DM employs medium-resolution data and semi-automated open-source Trends.Earth techniques, while the AM uses improved methods involving Landsat data, following recent best-practice guidance (a new standard methodology). Our preliminary results for the LC indicator show that 0.70% of the NGS was degraded during the BP and 0.45% during the MP under the DM, while 9.34% (BP) and 10.63% (MP) were degraded under the AM, with grassland experiencing the most change. According to the DM, the area degraded under the land productivity indicator grew from 27.44% (BP) to 38.42% (MP), reflecting declining productivity, and a similar pattern is anticipated with the AM. For the SOC indicator, 1.42% (BP) and 38.99% (MP) of the NGS are degraded based on the DM. It is important to note that this analysis is preliminary and ongoing. Therefore, the outcomes presented do not yet depict the results of the AM in detail. However, in support of the national land degradation (neutrality) inventory process, preliminary DM findings show that during the BP, 24.22% of the NGS improved, 46.48% remained stable, and 28.30% was degraded. In the MP, these values shifted to 10.54% improved, 49.47% stable, and 38.99% degraded. As demonstrated in several studies, the DM analysis is expected to serve as a reference for the AM results after the final analysis. While this comparative study highlights inadequacies in the DM, it also underscores the improved capacity of the AM to delineate, characterize, and contextualize land degradation effectively. These findings aim to enhance spatial planning, inform environmental policies, and enable sustainable land management initiatives in support of SDG 15.3.1 within the NGS.
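The overall degraded/stable/improved shares reported above follow from combining the three sub-indicators with the one-out-all-out rule used in Trends.Earth-style SDG 15.3.1 workflows: a pixel is degraded if any sub-indicator is degraded. A minimal sketch (the function name and the -1/0/+1 integer coding are illustrative choices):

```python
import numpy as np

def combine_sdg1531(lc, lp, soc):
    """One-out-all-out combination of SDG 15.3.1 sub-indicators.

    Inputs are integer arrays coded -1 (degraded), 0 (stable), +1 (improved).
    A pixel is degraded if ANY sub-indicator is degraded; improved if at
    least one improved and none degraded; stable otherwise.
    """
    subs = np.stack([lc, lp, soc])
    degraded = (subs == -1).any(axis=0)
    improved = (subs == 1).any(axis=0) & ~degraded
    out = np.zeros_like(lc)
    out[degraded] = -1
    out[improved] = 1
    return out

# Four example pixels: land cover, land productivity, SOC
lc = np.array([-1, 0, 1, 0])
lp = np.array([0, 0, 0, 1])
soc = np.array([1, 0, -1, 0])
print(combine_sdg1531(lc, lp, soc))  # [-1  0 -1  1]
```

Note how the first pixel is degraded overall even though its SOC sub-indicator improved, which is exactly the conservative behaviour the one-out-all-out rule is designed to give.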
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Spatio-Temporal Monitoring of Vegetation Structure and Surface Moisture in Kruger National Park and the Overberg District in South Africa From Sentinel-1 and -2 Time-Series Since 2015

Authors: Christiane Schmullius, Marco Wolsza, Tercia Strydom, Jussi Baade
Affiliations: University Jena, South African National Parks
Land degradation can be defined as a persistent reduction or loss of biological and economic productivity resulting from climatic variations and human activities. Quantifying relevant surface changes with Earth observation sensors requires a rigorous definition of the observables and an understanding of their seasonal and inter-annual temporal dynamics, as well as of their respective spatial characteristics. This talk illustrates operational mapping possibilities with the European Sentinel satellite fleet, which guarantees high-resolution spatial, spectral and temporal monitoring since 2015 and until 2040. Synergistic retrieval of innovative land surface indices is demonstrated with a focus on Kruger National Park and the Overberg district, both in South Africa. A joint EO and in situ strategy for management needs is outlined. The need for analysis-ready data (ARD) in the optical and especially the radar domain has been recognized, and formerly complex information is increasingly easy to use and apply (e.g. through companies such as Sinergise and platforms such as Google Earth Engine). Various processing tools are accessible without cost for large datasets (e.g. pyroSAR, SNAP). In this work, data cubes have been established and Jupyter Notebooks generated, which contain a portfolio of Python scripts for the production of various vegetation indices, bare soil maps, vegetation height, woody cover and surface moisture. The data cubes make it possible to exploit the synergy of radar and optical remote sensing data over what is now seven years. The dense time series reveal intra- and inter-annual variations of unexpected land surface phenomena. To evaluate the ability of Sentinel-1 time series to detect surface changes, irregularities in the radar backscatter and coherence time series were analysed. The radar products exhibit more contrast between areas of high and low woody cover in flat terrain, but are still more affected by topography despite radiometric corrections. Sentinel-2 better detects differences in mountainous areas. Machine learning applications help to analyse the large amount of data, but training and accuracy assessments also require in-situ data and feedback from local experts. Therefore, we use our own soil moisture measurements in each region and interaction with regional scientists and stakeholders for interpretation and validation. This assembly of in-situ and Earth observation products represents a treasure trove for evidence-based climate-change studies, and a new spatio-temporal EO monitoring strategy for land degradation detection in Southern Africa is presented.
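Flagging irregularities in a backscatter time series, as described above, can be sketched with a simple deviation test (the threshold rule, variable names, and data are illustrative, not the authors' actual workflow):

```python
import numpy as np

def backscatter_anomalies(sigma0_db, k=2.0):
    """Flag observations deviating more than k standard deviations
    from the series mean. sigma0_db: backscatter in dB."""
    mu, sd = sigma0_db.mean(), sigma0_db.std()
    return np.abs(sigma0_db - mu) > k * sd

# Synthetic Sentinel-1-like sigma0 series (dB) with one sudden change,
# e.g. a surface-moisture or clearing event
series = np.array([-12.1, -11.8, -12.3, -12.0, -7.5, -12.2, -11.9])
print(backscatter_anomalies(series))
```

In practice a seasonal climatology per pixel (rather than the whole-series mean) would be the more robust baseline, but the flagging logic is the same.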
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Mitigating the Global Crisis of Chromium Pollution

Authors: Chandra Prakash Jha
Affiliations: Fashion For Biodiversity S.r.l.
Problem statement: As a global community, we face a mounting environmental and public health crisis. Desertification and land degradation are intensifying environmental challenges exacerbated by industrial pollution. Chromium, particularly its hexavalent form [Cr(VI)], is a highly toxic industrial byproduct predominantly released by leather tanning, metal plating, and mining industries. With the global demand for leather goods and industrial metals surging, chromium pollution has reached alarming levels, threatening human health, biodiversity, and sustainable development. Statistical data underscores the scale of this crisis. According to the United Nations Industrial Development Organization (UNIDO), approximately 90% of the world’s leather is processed using chromium salts, generating over 6.5 million tons of chromium-laden waste annually. In India alone, where Kanpur’s leather industry is a hub, an estimated 2,000-3,000 tons of chromium are discharged annually into the Ganges River, contaminating water sources for millions. The European Union highlights Italy’s Veneto and Spain’s Igualada as chromium pollution hotspots, with soil and water Cr(VI) levels often exceeding permissible limits by thousands of times, breaching the WHO guideline for drinking water (0.05 mg/L). Contaminated soils lose fertility and biodiversity, accelerating desertification in semi-arid regions. Chromium leaches into groundwater, making it unsafe for drinking and agriculture. WHO estimates a 50% cancer risk increase for populations consuming Cr(VI)-contaminated water. Aquatic ecosystems suffer too, with chromium bioaccumulating in fish and disrupting food chains. Policy frameworks, such as the European Union’s REACH Regulation and India’s Central Pollution Control Board standards, have established guidelines to mitigate chromium discharge. However, enforcement remains inconsistent, and remediation efforts are often cost-prohibitive. 
In this article, we examine the scale and implications of chromium pollution in the EU and India, highlight relevant policies, and present the ChromEX program by Fashion For Biodiversity as an innovative solution. ChromEX employs a triangulated remediation strategy integrating hyperspectral spatial data analysis, drone surveillance, IoT technologies, and microbiome-based bioremediation, demonstrated successfully in its Kanpur pilot project.
EU and global policies addressing chromium pollution and supporting soil management, land restoration, and combating desertification:
1. European Union regulations:
• The REACH Regulation classifies hexavalent chromium as a "substance of very high concern"
• The European Green Deal aims for zero pollution
• The Soil Strategy for 2030 focuses on restoring polluted lands
• The Water Framework Directive sets stringent contamination limits
2. International agreements:
• UN Sustainable Development Goals (SDGs)
• UN Convention to Combat Desertification (UNCCD)
• Basel Convention
• Stockholm Convention
Despite these comprehensive policy frameworks, enforcement remains inconsistent, and remediation efforts are often prohibitively expensive.
The ChromEX Program: An Innovative Solution. The ChromEX programme by Fashion For Biodiversity leverages cutting-edge technology to combat chromium contamination in soil and water.
Step 1: Identification of Hotspots. Satellites map contaminated areas based on spectral data.
Step 2: Targeted Remediation. Results guide the placement of biochar-bacteria reactive zones, microbial treatments, and phytoremediation plants.
Step 3: Post-Remediation Monitoring. Satellite data tracks improvements in soil and water quality, vegetation recovery, and reductions in chromium concentrations.
Utilizing hyperspectral spatial data, hyperspectral drones, and IoT sensors, ChromEX identifies contamination hotspots with precision, analysing soil and water properties for chromium toxicity.
Contamination detection and impact monitoring via hyperspectral satellites. In the ChromEX program, hyperspectral satellite imagery is the first step in the remediation process. By providing detailed spectral data across hundreds of narrow wavelength bands, hyperspectral sensors can identify, map, and track pollution hotspots with exceptional precision, supporting the detection and remediation of toxic hexavalent chromium [Cr(VI)]. They also provide large-scale monitoring capabilities for chromium contamination in soil and water. Let us examine these capabilities in detail.
1. Detection of Chromium Contamination Hotspots. Satellites such as the multispectral Sentinel-2 (part of the EU’s Copernicus program) and the hyperspectral EnMAP (Environmental Mapping and Analysis Program, Germany) detect subtle spectral changes in the environment caused by chromium contamination.
1.1. Spectral Signatures of Chromium: chromium, particularly in its hexavalent form, alters the reflectance of soil, water, and vegetation. These changes occur in specific wavelength ranges, especially in the visible (400–700 nm), near-infrared (700–1400 nm), and shortwave infrared (1400–2500 nm) regions.
1.2. Soil Contamination: contaminated soils exhibit unique spectral patterns due to chromium's interaction with minerals and organic matter, enabling the identification of polluted areas.
1.3. Water Pollution: chromium affects the turbidity and chemical composition of water, which is detectable through absorption features in hyperspectral data.
2. Monitoring Vegetation Stress. Chromium contamination disrupts plant physiology by affecting chlorophyll levels, water absorption, and nutrient uptake. Hyperspectral satellites can detect:
2.1. Chlorophyll Degradation: reduced chlorophyll content shifts spectral reflectance in the visible and near-infrared regions.
2.2. Water Stress: changes in water content within vegetation due to chromium toxicity are visible in the shortwave infrared bands.
2.3. Early Detection: these changes are detectable before visible symptoms appear, allowing for timely intervention in contaminated agricultural areas.
3. Spatial and Temporal Coverage. Hyperspectral satellites provide comprehensive data on contamination over vast areas, enabling macro-level analysis of chromium pollution.
3.1. Wide-Area Coverage: satellites can monitor entire industrial regions, such as Kanpur in India or Veneto in Italy, where chromium contamination is widespread.
3.2. Temporal Monitoring: regular flyovers ensure continuous observation, allowing for the tracking of contamination trends and the effectiveness of remediation efforts.
4. Guiding and Optimizing Remediation. Hyperspectral data informs decision-making in the remediation process by:
4.1. Mapping Contamination Pathways: identifying how chromium spreads through soil and water, helping prioritize critical areas for bioremediation.
4.2. Evaluating Remediation Success: post-remediation, hyperspectral satellites track changes in soil and water properties, verifying reductions in chromium levels and improvements in vegetation health.
Advantages of hyperspectral satellites in chromium monitoring:
- Non-invasive and scalable: satellites monitor chromium contamination without disturbing ecosystems, covering vast areas cost-effectively.
- High precision: their ability to detect subtle spectral changes allows for the identification of chromium even in low concentrations.
- Long-term observation: satellites support temporal analysis, making it possible to assess both contamination progression and remediation effectiveness over time.
- Global accessibility: data from satellites like Sentinel-2 is publicly available, fostering collaboration among researchers and policymakers.
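The vegetation-stress detection described above can be illustrated with a toy band-ratio computation; the band choices, threshold, and index are hypothetical illustrations, not a validated Cr(VI) retrieval:

```python
import numpy as np

def stress_mask(refl_red, refl_nir, ndvi_threshold=0.3):
    """Flag candidate vegetation-stress pixels from narrow-band reflectance.

    Uses a chlorophyll-sensitive NDVI computed from red (~670 nm) and
    near-infrared (~800 nm) bands; pixels below the (illustrative)
    threshold over vegetated land are candidates for further inspection.
    """
    ndvi = (refl_nir - refl_red) / (refl_nir + refl_red)
    return ndvi < ndvi_threshold

# Three example pixels: healthy, stressed, moderately healthy (synthetic)
refl_red = np.array([0.05, 0.20, 0.08])
refl_nir = np.array([0.50, 0.28, 0.45])
print(stress_mask(refl_red, refl_nir))
```

A real workflow would combine many narrow bands, atmospheric correction, and ground calibration; this sketch only shows the flag-by-spectral-index idea.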
IoT sensors play a vital role in the ChromEX triangulation strategy by monitoring contamination in real time, providing early warnings with localized precision. They are also integrated with the ChromEX network to monitor bioremediation effectiveness in real time. Similarly, hyperspectral drones are integral to the ChromEX program, providing high-resolution, localized data for detecting and monitoring chromium contamination in soil, water, and vegetation. These drones bridge the gap between satellite observations and ground-level measurements, offering unmatched precision and flexibility in targeting contamination hotspots. Chromium mitigation process: at the core of this process is E-MicroBiome bioremediation, where engineered microbial consortia such as Pseudomonas, Bacillus, and Cellulosimicrobium funkei drive the bio-reduction of Cr(VI). These microbes convert Cr(VI) into Cr(III) through mechanisms such as biosorption (binding chromium ions to microbial cell walls), enzymatic reduction, and efflux systems that detoxify chromium efficiently. Tailored to site-specific conditions such as pH and temperature, these microbes maintain high efficacy even in challenging environments. In addition, the biochar-bacteria reactive zone enhances chromium mitigation by combining biochar and microbial activity. Biochar provides a high-surface-area substrate that immobilizes Cr(VI) while supporting microbial colonization. Installed as reactive barriers along contamination pathways, this zone ensures the adsorption of Cr(VI) and facilitates its microbial reduction, effectively containing chromium migration. Finally, phytoremediation complements microbial remediation. Plants such as vetiver grass (Chrysopogon zizanioides) are used for phytoextraction, absorbing chromium from soil and water. This process is enhanced by myco-assisted remediation, in which fungi mobilize chromium for plant uptake through organic acid production, significantly improving extraction efficiency.
Together, these interconnected strategies enable ChromEX to sustainably detoxify chromium, restore ecosystem health, and prevent further contamination of soil and water.
Pilot Project in Kanpur, India. The European Union's strict environmental regulations in the 1990s compelled parts of the leather tanning industry to relocate to Kanpur, India, leveraging lenient laws, lower costs, and abundant labor despite environmental concerns. In October 2023, ChromEX launched a pilot in Rania, Kanpur. Key actions included:
- Hotspot detection: satellite and drone data identified severely contaminated areas.
- Bioremediation implementation: chromium-absorbing plants (Brassica juncea) were integrated with biochar and bacterial consortia.
- Outcome: by September 2024, chromium levels had been reduced by 85%, improving water quality and agricultural productivity.
Conclusion. Chromium pollution threatens health, ecosystems, and land productivity. Programs like ChromEX show that EO spatial data and bioremediation can mitigate its impacts. Global collaboration is vital to restore lands, combat desertification, and ensure sustainability.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Addressing land degradation and desertification: from LIFE NewLife4Drylands to HE MONALISA project

Authors: Nicola Riitano, Daniela Smiraglia, Cristina Tarantino, Giovanna Seddaiu, Paolo Mazzetti, Francesca Assennato
Affiliations: Italian Institute for Environmental Protection and Research (ISPRA), Institute of Atmospheric Pollution Research (IIA), National Research Council (CNR), University of Sassari
Addressing the complex issue of land degradation and desertification (LDD) in Mediterranean dryland areas facing increasing climatic pressure and limited adaptive capacity is a primary challenge for research and policy making. Customized, scalable solutions that support sustainable land management and ecosystem restoration must navigate various socioeconomic, environmental, and cultural constraints, in close alignment with the EU's Sustainable Development Agenda. In particular, SDG 15 emphasizes the need to "protect, restore, and promote sustainable use of terrestrial ecosystems, combat desertification, and halt and reverse land degradation and biodiversity loss”. A preparatory LIFE project called NewLife4Drylands (NL4DL) (https://www.newlife4drylands.eu/), which ended in 2024, developed a protocol based on the use of satellite remote sensing (RS) data and techniques for the identification of a framework for combating LDD in protected areas by adopting Nature-Based Solutions (NBS) and for the mid- and long-term monitoring of LDD status to evaluate restoration effectiveness. Six pilot sites in southern European Mediterranean countries were examined using field data and satellite observations to develop replicable methodologies. The SDG 15.3.1 indicator, “Proportion of land that is degraded over total land area”, was computed for each site by considering additional sub-indicators related to the specific pressures and threats on the site, produced at local scale as far as possible to provide more useful support to the decision-making process. NL4DL results highlighted the key role of integrating multiple sources and combining top-down and bottom-up approaches for effective interventions against LDD in Mediterranean drylands, together with the need for harmonization and standardization of ecological indices/indicators derived from satellite data.
Building on the legacy of NL4DL, the MONALISA Horizon Europe Innovation Action (https://monalisa4land.eu/), funded under the EU Soil Mission and running from September 2024 to August 2028, takes this work forward with a broader continental study on soil degradation and sensitivity to desertification. New indicators for LDD status monitoring, tested at continental scale, will then be applied at case-study scale, coupled with the integration of environmental and socio-economic data with local actions through stakeholder collaboration and cutting-edge digital tools. Six case studies in Italy, Spain, Greece, Tunisia and Palestine, strategically located across a gradient of aridity and socio-ecological conditions in the Mediterranean drylands, will host innovative agro-pastoral, water-management and natural-ecosystem restoration solutions to develop predictive models, scenario analyses and a decision support system for scaling out solutions to reverse and prevent LDD. MONALISA addresses the “last mile” challenge of integrating scientific knowledge, local practices, advanced digital systems, artificial intelligence, and remote sensing technologies, fostering collaboration between researchers, policymakers, and land managers to ensure the adoption and scalability of solutions. A methodological framework for assessing and monitoring LDD risk across Europe and the Mediterranean region, together with a standardized data collection method, while enhancing soil productivity across diverse land uses including agriculture and agroforestry, will allow findings from local applications to inform broader continental strategies that adapt successful practices to diverse pedo-climatic conditions. MONALISA promotes regional creativity and interdisciplinary research, echoing the 2030 Agenda's paragraph 33, which calls for partnerships and integrated approaches to sustainable land management.
Through its case study locations, MONALISA supports efforts to reverse LDD and enhance resilience by endorsing the EU's "Soil Deal for Europe" initiative. This integrated strategy emphasizes large-scale monitoring combined with localized testing of methods, key steps in effectively addressing soil degradation. Such an approach not only deepens our understanding of environmental dynamics but also facilitates the implementation of practical solutions.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: High Resolution Spectral and Statistical Information About Soils In Europe – Products, Applicability and Free Data Access

Authors: Dr. Uta Heiden, Dr. Pablo d'Angelo, Mr. Paul Karlshoefer, Dr. André Twele, Dr. Fenny van Egmond, Dr. Laura Poggio
Affiliations: DLR Oberpfaffenhofen, ISRIC - World Soil Information
Based on a recent study by the JRC and EEA, about 63% of European soils are degraded (Arias-Navarro et al., 2024). In response, the European Union’s Soil Strategy 2030 and the Horizon Europe Mission “A Soil Deal for Europe” have been developed, and the European Commission is intensifying efforts to develop reliable and robust soil monitoring strategies. There is a high demand for information about the chemical and physical properties of European soils, properties that can reflect the consequences of our intensive use of soil as a natural resource. Earth Observation (EO) is a valuable data source for large-scale information about the Earth’s surface. In the CUP4Soil project, EO-based, European-wide soil information has been developed with the objective of providing it to a large user community and preparing the ground for extending the Copernicus Land Monitoring Service with soil-related information. In this presentation, the SoilSuite, a collection of different image data products for Europe, is presented. It leverages the Sentinel-2 data archive, which is assimilated by DLR’s Soil Composite Mapping Processor (SCMaP), a dedicated processing chain for detecting and analyzing bare soil surfaces on a large (continental) scale. The methodology behind SCMaP is continuously enhanced using a technique that allows the variation of the different processing parameters and concepts to be controlled (presented in an LPS submission by Karlshöfer et al.). We present a variety of pixel-based spectral information products of bare soils. The “Bare Surface Reflectance Composite – Mean” shows distinct spectral properties of soils due to varying soil organic carbon (SOC) content, soil moisture and soil mineralogy. Additionally, the product “Bare Surface Reflectance Composite – Standard deviation” informs about the spectral dynamics of soils, a rarely used characteristic. 
An important milestone is the provision of a data product that informs users about the reliability of the spectral information. To this end, we developed the “Bare Surface Reflectance Composite – 95% Confidence” product, which contains the half-width of the 95% confidence interval (CI) of the “Bare Surface Reflectance Composite”. Finally, the “Bare Surface Frequency Product” is scaled between 0 and 1 and quantifies the number of bare soil occurrences over the total number of valid observations. It highlights areas with limited soil exposure due to, e.g., vegetation. It is a generic product that can be used to identify areas prone to soil erosion. Further, the “Bare Surface Frequency” can be used to better distinguish between areas with carbon farming and those with conventional agriculture. All products are used for the SOC map of the WorldSoils project as well as for the mapping of chemical/physical soil properties and uncertainties developed during the CUP4SOIL project (presented in an LPS submission by Poggio et al.). We further explored the suitability of the “Bare Surface Frequency Product” for assessing potential soil erosion source areas of river basins using the variety of products of HYDROSHEDS (HYDROSHEDS, access: 11/2024). Finally, it is important to provide access to the SoilSuite under free and open conditions so that users can test and explore the data on their own. The direct EO-based data (https://doi.org/10.15489/qkud8cudg596) will soon be accessible via DLR’s EOC Geoservice, and the EO-based information as well as the soil properties and their uncertainties are available via a dedicated website of ISRIC. Both platforms provide functionalities for data visualization (via OGC-WMS) as well as data download (CC BY 4.0 license). References: Arias-Navarro, C., Baritz, R. and Jones, A. editor(s), 2024. The state of soils in Europe. Publications Office of the European Union. https://data.europa.eu/doi/10.2760/7007291, JRC137600. 
BGR [Bundesanstalt für Geowissenschaften und Rohstoffe] (2005). Soil Regions Map of the European Union and Adjacent Countries 1:5,000,000 (Version 2.0). Special Publication, Ispra. EU catalogue number S.P.I.05.134. HYDROSHEDS, 2024: https://www.hydrosheds.org/hydroatlas, access 11/2024.
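The “95% Confidence” product described above can be illustrated with a minimal sketch (not the actual SCMaP implementation): assuming approximately normal errors and at least one valid observation per pixel, the half-width of the 95% CI of a per-pixel mean composite follows from the per-pixel standard deviation and the number of valid bare-soil observations.

```python
import numpy as np

def ci95_half_width(std, n_obs):
    """Half-width of the 95% confidence interval of a per-pixel mean:
    1.96 * std / sqrt(n), assuming roughly normal errors and n > 0."""
    return 1.96 * np.asarray(std, dtype=float) / np.sqrt(np.asarray(n_obs, dtype=float))

# Per-pixel standard deviation of reflectance and number of bare-soil observations
std = np.array([0.02, 0.05, 0.03])
n = np.array([16, 4, 25])
hw = ci95_half_width(std, n)  # narrower interval where more observations exist
```

Pixels with many bare-soil observations thus receive a tighter confidence band, which is exactly what makes the product useful as a reliability layer.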
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Estimating soil properties and nutrient concentrations using machine learning and hyperspectral data: a case study in Italy

Authors: Micol Rossini, Dr. Luigi Vignali, Chiara Ferrè, Dr. Cinzia Panigada, Giulia Tagliabue, Dr. Gabriele Candiani, Dr. Francesco Nutini, Dr. Monica Pepe, Michael Marshall, Mariana Belgiu, Dr. Mirco Boschetti
Affiliations: Department of Earth and Environmental Sciences (DISAT), University of Milano Bicocca, Institute for Electromagnetic Sensing of the Environment (IREA), National Research Council of Italy (CNR), Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente
This study explores the use of machine learning algorithms and hyperspectral data to estimate key soil parameters, essential for monitoring soil fertility and promoting sustainable agricultural practices, as part of the European Space Agency (ESA) EO4NUTRI project. Spectral data were collected both in the laboratory and from the PRISMA satellite, focusing on the agricultural region of Jolanda di Savoia (FE), Italy. The primary goal is to develop accurate predictive models for soil chemical and physical properties, as well as the contents of macro, meso, and micronutrients (including total nitrogen, exchangeable bases, available sulfur, iron, phosphorus, and zinc), with spectral reflectance as the primary predictor. Field campaigns were conducted between March 2023 and May 2024, collecting soil samples from three crops—corn, rice, and wheat. A total of 200 soil samples were taken from the 0-20 cm depth (corresponding to the Ap horizon) and analysed for nutrient concentrations. Laboratory spectral measurements were conducted under controlled conditions with an SR-3500 spectroradiometer (Spectral Evolution, USA), covering the 350 nm to 2500 nm range (VNIR-SWIR). These measurements were standardized using the measurement protocol proposed by Ben Dor et al. (2015) to minimize instrumental variation. In addition to laboratory data, PRISMA satellite images were used, with the image date closest to each field sampling campaign selected to ensure temporal alignment between spectral data and soil parameters. Principal Component Analysis (PCA) was applied for dimensionality reduction, testing different numbers of principal components (5, 10, and 15) to optimize model performance. 
Various machine learning regression algorithms (MLRAs) were then trained and compared to estimate soil parameters, including Gaussian Process Regression (GPR), Partial Least Squares Regression (PLSR), Least Squares Linear Regression (LSLR), Random Forest (RF), Kernel Ridge Regression (KRR), and Support Vector Regression (SVR). Model performance was evaluated using leave-one-out (L-O-O) and k-fold (k=5) cross-validation to assess generalization and prevent overfitting. Models based on laboratory spectral data showed higher accuracy than those based on PRISMA data, highlighting the challenges posed by environmental variability in satellite measurements. However, PRISMA-based models still provided valuable insights for estimating parameters such as total nitrogen, exchangeable magnesium and calcium, and available zinc. For laboratory data, total nitrogen, exchangeable calcium, and available iron and zinc were accurately estimated, with R² values between predicted and measured values of 0.94, 0.86, 0.94 and 0.84, respectively. For PRISMA data, the best results were observed for total nitrogen, exchangeable magnesium and calcium, and available zinc, with R² values of 0.79, 0.79, 0.7 and 0.83, respectively. These findings demonstrate the potential of machine learning and hyperspectral data for estimating soil parameters. Future research will focus on improving satellite-based model accuracy by refining pre-processing techniques to minimize noise and atmospheric interference in PRISMA data. Incorporating additional environmental variables, such as moisture conditions and soil texture, will further improve the predictive capabilities of the models. This study contributes to the growing field of digital soil science, offering new opportunities for sustainable soil monitoring and management. 
Satellite-derived soil parameter maps, such as those from PRISMA, can support site-specific crop management, optimize fertilizer usage, and provide valuable insights into soil health on a large scale.
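The evaluation pattern described above (PCA dimensionality reduction followed by cross-validated regression) can be sketched in a self-contained way. A simple ridge regressor stands in for the six MLRAs compared in the study, and the data are synthetic; only the pattern, not the numbers, reflects the work.

```python
import numpy as np

def pca_reduce(X, n_comp):
    """Project mean-centred spectra onto the first n_comp principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_comp].T

def kfold_r2(X, y, k=5, lam=1e-3, seed=0):
    """k-fold cross-validated R^2 of a ridge regression (with intercept)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    preds = np.empty(len(y))
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        Xt = np.c_[np.ones(len(train)), X[train]]
        w = np.linalg.solve(Xt.T @ Xt + lam * np.eye(Xt.shape[1]), Xt.T @ y[train])
        preds[fold] = np.c_[np.ones(len(fold)), X[fold]] @ w
    return 1 - np.sum((y - preds) ** 2) / np.sum((y - y.mean()) ** 2)

# Synthetic "spectra": a low-dimensional signal embedded in 50 bands
rng = np.random.default_rng(1)
scores = rng.normal(size=(200, 3))
X = scores @ rng.normal(size=(3, 50)) + 0.05 * rng.normal(size=(200, 50))
y = scores[:, 0] + 0.05 * rng.normal(size=200)
Z = pca_reduce(X, 10)
cv_r2 = kfold_r2(Z, y)  # high on this easy synthetic target
```

Testing 5, 10 and 15 components, as in the study, would amount to repeating `pca_reduce` and `kfold_r2` for each setting and keeping the best score.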
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Optimising Satellite-based Soil Spectra Extraction for Predicting Agricultural Soil Carbon Content Across Europe

Authors: Denise Hick, Guy Ziv, Pippa Chapman
Affiliations: University of Leeds
There is a growing need for reliable monitoring, reporting and verification (MRV) systems to quantify and track soil organic carbon (SOC) content change in agricultural land, both at the local scale and over large scales. Accurate SOC content estimates can be obtained via direct soil sampling and laboratory SOC measurement, but traditional soil surveys are expensive and time-consuming. Predicting SOC content from remotely sensed bare soil reflectance is gaining popularity, but accurate prediction remains challenging, especially over large scales characterised by variable climate and soil types. In part, this challenge may stem from choices in image selection, pixel-level filtering and multi-temporal aggregation that account for residual vegetation, crop residue cover and soil moisture. Using 309 arable points across Europe surveyed as part of LUCAS 2018 to identify bare soil locations and ground-truth satellite imagery, this study investigated how different vegetation index thresholds and bare soil compositing techniques affect the spectral similarity of Sentinel-2 soil reflectance to laboratory soil spectra, and selected the best combination of methods to predict SOC content at the European scale. Upon investigation of the LUCAS 2015 soil spectral library, we found that the Normalized Burn Ratio (NBR2) of European bare soil, often used to mask out pixels with crop residue cover or high soil moisture, varies with soil type. Therefore, a new soil-type-specific (dynamic) NBR2 threshold was proposed and tested in this study. Our results showed that bare soil satellite reflectance filtered using the proposed dynamic NBR2 threshold and aggregated by the 90th quantile resulted in a closer match to laboratory soil spectra. 
This methodology was therefore applied to Sentinel-2 imagery from 2017 to 2020 to obtain “optimal” bare soil reflectance, which was decomposed into principal components and, together with additional spectral, climatic, terrain and pedological covariates, used to predict SOC content over agricultural points at the European scale (R² = 0.42 ± 0.03, RMSE = 5.67 ± 0.29 g kg⁻¹, MAPE = 30.39 ± 1.23 %). This yielded results comparable to other European-scale studies, despite adopting a more restrictive approach and using open-source data only. Lastly, total annual precipitation, the slope between visible satellite bands, and latitude were found to be the most important covariates for SOC content prediction at the European scale, followed by satellite brightness indices and soil textural information.
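The filtering-and-aggregation step described above can be sketched as follows. The NBR2 formulation from Sentinel-2 SWIR bands is standard; the soil-type thresholds and array shapes are illustrative, not the study's tuned values.

```python
import numpy as np

def nbr2(b11, b12):
    """Normalized Burn Ratio 2 from Sentinel-2 SWIR bands B11 and B12."""
    b11, b12 = np.asarray(b11, float), np.asarray(b12, float)
    return (b11 - b12) / (b11 + b12)

def bare_soil_composite(refl, b11, b12, soil_type, thresholds, q=90):
    """Composite a (time, pixel) reflectance stack: keep observations whose
    NBR2 falls below the pixel's soil-type-specific cutoff, then take the
    q-th percentile of the remaining values per pixel."""
    cut = np.array([thresholds[s] for s in soil_type])  # per-pixel cutoff
    mask = nbr2(b11, b12) < cut                         # broadcasts over time
    masked = np.where(mask, np.asarray(refl, float), np.nan)
    return np.nanpercentile(masked, q, axis=0)

# Two pixels (soil types 0 and 1), three acquisition dates
refl = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
b11 = [[1.05, 1.25], [1.15, 1.15], [1.05, 1.15]]
b12 = [[0.95, 0.75], [0.85, 0.85], [0.95, 0.85]]
composite = bare_soil_composite(refl, b11, b12, [0, 1], {0: 0.1, 1: 0.2})
```

The dictionary of per-soil-type cutoffs is the "dynamic" part: each pixel is filtered against the threshold of its own soil type rather than a single European-wide value.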
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Evaluating different methods for the estimation of bare soil surface reflectance using multispectral satellite image time series and LUCAS 2015 Multispectral Reflectance Data

Authors: Vasileios Tsironis, Ms Eleni Sofikiti, Dr Konstantinos Karantzalos
Affiliations: National Technical University Of Athens
Soil degradation is an increasingly pressing issue that necessitates action, prompting the European Union and the United Nations to implement policies for its continuous monitoring. In this context, to support climate change monitoring, policymaking, and the adoption of agricultural practices that comply with these policies, there is an increasing need for up-to-date, comprehensive soil data. Remote sensing can provide valuable insights for monitoring soil over long periods and large areas at low cost. Key challenges for soil monitoring through remote sensing techniques include distinguishing soil from vegetation, especially crop residue, and minimizing the influence of soil moisture and surface roughness on the reflectance of bare soil. Several satellite spectral indices have been used to date to distinguish bare soil, but few studies are dedicated to evaluating and comparing the different approaches for adequate estimation of bare soil reflectance from satellite imagery. The goal of this work is to thoroughly evaluate accurate bare soil reflectance mapping in Greece at medium spatial resolution by benchmarking the performance of various compositing approaches, providing a thorough assessment of the contribution of different techniques, i.e., simultaneous use of multiple spectral indices, different compositing, masking or thresholding techniques, and other parameters of the time series such as cloud cover and time range. Focusing on Greece, a Mediterranean country with diverse microclimates and soil types, the study leverages Landsat 8 images spanning 2015 to 2020 and the LUCAS 2015 database to evaluate the results of creating accurate bare soil reflectance composites. Each image was masked with the Landsat 8 quality band, and all land cover classes except grassland, cropland and bare/sparse vegetation were excluded based on auxiliary land cover datasets available on GEE. 
A wide range of experiments was conducted to determine the best approach for creating a bare soil reflectance composite with the highest possible correlation, per spectral band, with the spectral reference data of 2015. This study evaluates the Normalized Difference Vegetation Index (NDVI), Normalized Burn Ratio 2 (NBR2), Bare Soil Index (BSI) and Soil Surface Moisture Index (S2WI), used individually or simultaneously and in different combinations, to test their ability to distinguish bare soil pixels from vegetation and crop residue and to minimize the effect of soil moisture. Different thresholds for the indices were tested, as well as different compositing methods, i.e., mean, median, min NDVI, min NBR2, min S2WI and max BSI. We also examined the most appropriate maximum cloud cover for the time series and whether setting a minimum number of bare soil instances for the pixels included in the composite improves the results. Additionally, simple thresholding, i.e., a single threshold value for the entire area of interest, and dynamic thresholding, i.e., a different threshold for each pixel, were tested. Finally, all approaches were tested for a 1-year and a 6-year time series. Results indicated that increasing bare soil frequency, through low-frequency pixel elimination, yielded substantially better correlation with reference data. Increasing the maximum image cloud coverage, to include more input images in the composite, did not improve the composite’s coverage. Additionally, the simultaneous combination of all spectral indices proved necessary for achieving high correlation levels, while choosing a good combination of multiple indices for the selection of bare soil pixels tends to be more important than using a longer time series. 
Dynamic thresholding achieved high levels of correlation only with the 6-year composite, possibly due to very low bare soil frequency in the 1-year composite, while BSI and S2WI proved to be the most impactful indices in a dynamic thresholding setting. Applying dynamic BSI masking improved the correlation in the infrared part of the spectrum. The best compositing methods in terms of correlation with the reference data were the mean and median, with Pearson’s correlation coefficient ranging from 0.75 to 0.85 across the different spectral bands. These composites also had minimal salt-and-pepper noise, a known issue. In terms of error, max BSI achieved the lowest RMSE, and most of the experiments achieved an ubRMSE of around 0.03 for the RGB bands and 0.05 for the infrared bands, indicating the residual effect of crop residue and soil moisture. Using NDVI, NBR2 and BSI simultaneously increases the quality of the composite rapidly, even for one year of data. Further increasing the span to 6 years improves the correlation in the VNIR part of the spectrum, but less improvement is observed in the SWIR part. This work benchmarked the performance of several compositing methodologies and provided a comprehensive evaluation of the contribution of various techniques for accurately mapping bare soil reflectance in Greece at medium spatial resolution. We demonstrated that careful selection and tuning of a wide range of parameters may greatly enhance the estimation of bare soil reflectance from multispectral satellite image time series. The findings provide a solid foundation for improving bare soil reflectance estimation techniques and give useful directions for upcoming monitoring initiatives meant to support sustainable soil management and combat soil degradation globally.
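The simultaneous use of multiple indices can be sketched as a combined mask. The index formulations below are common choices and the thresholds are purely illustrative of the masking logic, not the tuned values from this study.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def nbr2(swir1, swir2):
    """Normalized Burn Ratio 2."""
    return (swir1 - swir2) / (swir1 + swir2)

def bsi(blue, red, nir, swir1):
    """Bare Soil Index, one common formulation."""
    return ((swir1 + red) - (nir + blue)) / ((swir1 + red) + (nir + blue))

def combined_bare_mask(blue, red, nir, swir1, swir2,
                       ndvi_max=0.25, nbr2_max=0.075, bsi_min=0.0):
    """A pixel counts as bare soil only if all three indices agree:
    low vegetation (NDVI), low residue/moisture (NBR2), high bareness (BSI)."""
    return ((ndvi(nir, red) < ndvi_max)
            & (nbr2(swir1, swir2) < nbr2_max)
            & (bsi(blue, red, nir, swir1) > bsi_min))

bare = combined_bare_mask(blue=0.10, red=0.20, nir=0.25, swir1=0.30, swir2=0.28)
vegetated = combined_bare_mask(blue=0.05, red=0.05, nir=0.40, swir1=0.20, swir2=0.10)
```

Conjoining the masks is what the abstract means by the "simultaneous combination of all spectral indices": a pixel that passes NDVI but fails NBR2 (e.g. due to crop residue) is still excluded from the composite.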
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: The EDAFOS Project: A GIS Tool Solution To Combat Desertification

Authors: Christos Theocharidis, Maria Prodromou, Panagiota Venetsanou, Charalampos Panagiotou, Athos Agapiou, Diofantos Hadjimitsis
Affiliations: ERATOSTHENES Centre Of Excellence, ATLANTIS Environment and Innovation Ltd, Cyprus University of Technology
Desertification and land degradation are significant threats to environmental sustainability and human well-being worldwide, especially in the vulnerable Mediterranean region, including Cyprus. Numerous efforts have been made, such as ratifying the Convention to Combat Desertification; however, countries like Cyprus still need assistance to assess the risk of desertification and identify the natural and anthropogenic pressures that contribute to it. The EDAFOS project, funded by the European Space Agency (ESA), is a collaborative effort involving ATLANTIS Environment LTD, the Department of Environment, and the ERATOSTHENES Centre of Excellence. The area of interest for the project is Cyprus, with the Limassol district selected as the pilot study area. A variety of open satellite and geospatial data were sourced from ESA and governmental services such as the Department of Meteorology (DOM), the Department of Land and Surveys (DLS) and the Cyprus Agricultural Payments Organisation (CAPO). The project aimed to address the challenge of desertification by utilising advanced geospatial technologies to design a system that simplifies the mapping of desertification risk and of the individual parameters contributing to it. To this end, the EDAFOS toolbox has been developed within the ArcGIS Pro software environment, where users can automatically integrate various datasets to create the Environmentally Sensitive Area Index (ESAI). This index evaluates the susceptibility of areas to desertification by integrating geospatial analysis, remote sensing, in-situ data, and data processing techniques to deliver crucial information on land degradation dynamics. By combining parameters such as the Vegetation Quality Index (VQI), Soil Quality Index (SQI), Climate Quality Index (CQI) and Management Quality Index (MQI), the EDAFOS project allows stakeholders to monitor, report, and develop countermeasures for combating desertification. 
Furthermore, the project provides scenario analysis to minimise risk and mitigate associated socioeconomic and environmental impacts, assisting policymakers. Overall, it represents a significant advancement in environmental analysis, offering a powerful tool for identifying and mitigating land degradation risks at regional and global scales and supporting national competent authorities in implementing the desertification directive. ACKNOWLEDGEMENTS: The EDAFOS project is funded by the European Space Agency in the framework of ESA AO/1-10264/20/NL/SC, the Tender for the Fourth Call for Outline Proposals under the Plan for European Cooperating States (PECS) in Cyprus. The authors acknowledge the 'EXCELSIOR': ERATOSTHENES: Excellence Research Centre for Earth Surveillance and Space-Based Monitoring of the Environment H2020 Widespread Teaming project (www.excelsior2020.eu). The 'EXCELSIOR' project has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No 857510, from the Government of the Republic of Cyprus through the Directorate General for European Programmes, Coordination and Development, and from the Cyprus University of Technology.
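The combination of the four quality indices into the ESAI is conventionally computed, in the MEDALUS framework on which such toolboxes are typically based, as their geometric mean; a minimal sketch assuming that convention applies here:

```python
import numpy as np

def esai(vqi, sqi, cqi, mqi):
    """Environmentally Sensitive Area Index as the geometric mean of the
    four quality indices (MEDALUS-style formulation; higher = more sensitive)."""
    return (np.asarray(vqi, dtype=float) * sqi * cqi * mqi) ** 0.25

# Each quality index is itself a geometric mean of its input layers;
# here we just combine four illustrative per-pixel values.
sensitivity = esai(vqi=1.4, sqi=1.2, cqi=1.5, mqi=1.3)
```

The geometric mean has the useful property that a single very poor index (e.g. severely degraded soil) pulls the combined sensitivity up even when the other indices are favourable.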
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Methods and applications for soil organic carbon mapping based on Sentinel-2 bare soil composites

Authors: Dries De Bièvre, Prof. Bas van Wesemael, Pierre Defourny
Affiliations: Earth and Life Institute - UCLouvain
In recent years, soil health has gained increasing attention due to its important role in agricultural sustainability. The EU has now proposed a soil monitoring law, emphasizing the need for tools to monitor soil organic carbon (SOC) for soil health and to understand its spatial variability. Since the reflectance spectrum of a soil is influenced by its organic matter content, optical remote sensing can be a tool for mapping and monitoring SOC content. SOC maps at high resolution would allow comparison of groups of fields with different management practices in otherwise comparable contexts. Alternatively, the derived maps could serve to establish regional baselines against which soil analyses of individual fields can be compared. Sentinel-2 observations provide reflectance measurements in 10 spectral bands from the visible to the short-wave infrared part of the spectrum. We processed Sentinel-2 images to obtain bare soil reflectance values for pixels with at least one bare soil observation. A soil database of farmers’ routine analyses was then used to train prediction models for SOC content based on bare soil composites derived from Sentinel-2 images over 3 years. Reflectance values in the composite were averaged at field level to match the support of the soil samples. Despite the low variability of SOC contents in the Walloon region, this approach allows parcel-level predictions with an RMSE of 2.7 g C/kg. The performance is, however, variable. In the Loam Belt, characterized by a large surface of croplands but small variability in SOC content, the RMSE is 2.6 g C/kg, while in more heterogeneous areas with fewer croplands the RMSE is up to 5.5 g C/kg. With quantile regression approaches, the uncertainty of the estimates is accurately quantified. The predictive power of the normalized difference of all pairwise combinations of Sentinel-2 bands was evaluated, which allowed the selection of 4 spectral features associated with SOC content. 
Results indicated that spectral data alone cannot capture small-scale SOC variation. Incorporating three environmental covariates as predictors significantly improved model performance. The model can map SOC content with an accuracy comparable to existing soil maps, but at a higher spatial resolution. Interpretation of the model using Shapley values and surrogate models provided insights into the associations between predictors and SOC. Using this model, SOC content predictions are obtained at the level of individual fields. Since the variability between fields in the Walloon region is small, the obtained map has limited use for parcel-level comparisons. The obtained accuracy is also not high enough for monitoring SOC changes in croplands over time, since SOC content may change by only 1 g C/kg over a timeframe of 10-20 years when cover crops are included. Sentinel-2-derived SOC maps alone therefore do not suffice for precise SOC monitoring at parcel level. While the model’s accuracy is insufficient for monitoring small SOC changes at the parcel level, it demonstrates potential for regional assessments. We propose a methodology to estimate regional SOC content averages and variability, incorporating geostatistical simulations to quantify uncertainties. This approach supports the estimation of regional SOC baselines.
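To illustrate how uncertainty can be attached to per-field predictions, here is a minimal residual-quantile (conformal-style) sketch; it stands in for the quantile regression actually used in the study, and all names and numbers are illustrative.

```python
import numpy as np

def prediction_interval(X, y, X_new, alpha=0.1):
    """Attach a (1 - alpha) uncertainty band to least-squares predictions:
    fit on the first half of the data, then use the (1 - alpha) quantile of
    absolute residuals on the second half as the interval half-width."""
    n = len(y) // 2
    Xb = np.c_[np.ones(len(y)), X]
    w, *_ = np.linalg.lstsq(Xb[:n], y[:n], rcond=None)
    qhat = np.quantile(np.abs(y[n:] - Xb[n:] @ w), 1 - alpha)
    pred = np.c_[np.ones(len(X_new)), X_new] @ w
    return pred - qhat, pred + qhat

# Synthetic SOC-like example: one spectral feature, linear signal plus noise
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 500)
y = 3.0 * x + rng.normal(scale=0.1, size=500)
x_new = rng.uniform(0, 1, 200)
y_new = 3.0 * x_new + rng.normal(scale=0.1, size=200)
lo, hi = prediction_interval(x, y, x_new)
coverage = np.mean((y_new >= lo) & (y_new <= hi))  # close to 0.9 by design
```

True quantile regression additionally lets the interval width vary with the predictors, which matters when uncertainty differs between regions such as the Loam Belt and more heterogeneous areas.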
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: C-Band SAR Amplitude Time Series in Dryland Landscapes Reveal Grain Size Change Distribution after Flash Floods and Debris Flows

Authors: Albert Cabré, Dr Dominique Remy, Dr Odin Marc, Prof Dr Aaron Bufe, Dr Sebastien Carretier
Affiliations: Department of Earth and Environmental Sciences Ludwig Maximilian University of Munich, Géosciences Environnement Toulouse (GET), UMR 5563, CNRS/IRD/CNES/UPS, Observatoire Midi-Pyrénées
Understanding sediment fluxes and geomorphic changes in arid environments is essential for improving landscape evolution models. This research utilizes Synthetic Aperture Radar (SAR) imagery acquired since October 2014 over the Atacama Desert. Sentinel-1 C-band SAR images from the European Union’s Copernicus programme have enabled us to investigate grain size variability and sediment transport processes across ephemeral channels and alluvial fans in the Atacama Desert, a region uniquely suited for such studies due to its extreme aridity and the lack of erosional interference between runoff events. Our analysis focuses on 21 alluvial fans and more than 50 km of valley floors along a latitudinal gradient (21–24°S), where sedimentation from debris flows and hyperconcentrated flows (defined by varying water-to-sediment ratios) remains undisturbed between rainstorm events, which in some of the studied regions can be separated by more than 30 years. By integrating SAR amplitude data with on-site grain size measurements derived from field photographic analysis, we obtain strong correlations (R² = 0.72 for the 50th percentile and R² = 0.93 for the 84th percentile). These findings highlight the capacity of SAR amplitude imagery to reconstruct historical grain size distributions, leveraging the Sentinel-1 archive to establish a temporal dataset spanning nearly a decade of surface change related to grain size variations. This study also classifies non-permanent (transient) SAR amplitude variations associated with moisture and shallow groundwater during hydrological events observed in ephemeral channels and valley floors. By exploring sediment dynamics along altitudinal gradients, we gained critical insights into regional sediment pathways and their influence on geomorphic processes. The Atacama Desert’s unique hydrological history, including events in March 2015, May 2017, January 2020, and March 2022, provided an ideal setting to test and refine our methodologies. 
SAR amplitude imaging, combined with digital elevation models (DEMs), facilitates the extraction of erosion and deposition patterns along ephemeral drainages that lack direct monitoring. This approach enhances predictions of hydrological impacts from extreme runoff events and supports sediment transport modeling in under-monitored arid landscapes. Our findings contribute to global efforts to understand sediment fluxes in desert regions, addressing critical gaps in observational data, and demonstrate the versatility of C-band SAR and its potential for integration with other radar wavelengths in such landscapes, which cover 40% of the continental land surface. Our research lays a promising foundation for advancing dryland erosion studies in other deserts of the world.
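The reported correlations between SAR amplitude and grain-size percentiles amount to fitting a least-squares line and computing R²; a minimal sketch with made-up values (the study's actual samples and amplitudes are not reproduced here):

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination of an ordinary least-squares fit y ~ a + b*x."""
    A = np.c_[np.ones(len(x)), np.asarray(x, float)]
    coef, *_ = np.linalg.lstsq(A, np.asarray(y, float), rcond=None)
    resid = y - A @ coef
    return 1.0 - resid.var() / np.asarray(y, float).var()

# Illustrative: mean backscatter amplitude per site vs field-measured D84 (mm)
amplitude = np.array([0.8, 1.1, 1.5, 1.9, 2.4, 2.9])
d84_mm = np.array([60.0, 95.0, 140.0, 185.0, 240.0, 300.0])
score = r_squared(amplitude, d84_mm)  # near 1 for this nearly linear example
```

Once such a calibration holds, the fitted line can be inverted to map per-pixel amplitude from the Sentinel-1 archive into a grain-size proxy back through time.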
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A.05.07 - POSTER -Sea level change from global to coastal scales and causes

Sea level changes at global and regional scales have been routinely measured by high-precision satellite altimetry for more than three decades, leading to a broad variety of climate-related applications. Recently, reprocessed altimetry data in the world's coastal zones have also provided novel information on decadal sea level variations close to the coast, complementing the existing tide gauge network. Since the early 2010s, the ESA Climate Change Initiative programme has played a major role in improving altimetry-based sea level data sets at all spatial scales, while also supporting sea level-related cross-ECV (Essential Climate Variable) projects dedicated to assessing the closure of the sea level budget at global and regional scales. Despite major progress, several knowledge gaps remain, including for example:
• Why is the global sea level budget not closed since around 2017?
• Why is the regional sea level budget not closed in some oceanic regions?
• How can altimetry-based coastal sea level products be further improved?
• How can we enhance the spatial coverage of these products, which are currently limited to satellite tracks?
• To what extent do small-scale sea level processes impact sea level change in coastal areas?
• Can we provide realistic uncertainties on sea level products at all spatial scales?
• What is the exact timing of the emergence of anthropogenic forcing in observed sea level trends at regional and local scale?
In this session, we encourage submissions dedicated to improving multi-mission altimetry products and associated uncertainties, as well as assessing sea level budget closure at all spatio-temporal scales. Submissions providing new insights on processes acting on sea level at different spatial and temporal scales are also welcome. In addition to using altimetry data, other space-based and in-situ data, as well as modelling studies, are highly encouraged to submit to this session.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Assessment of deep-ocean warming based on sea-level and energy budget

Authors: Hyeonsoo Cha, Jae-Hong Moon, Taekyun Kim, Y. Tony Song
Affiliations: Jeju National University, NASA Jet Propulsion Laboratory
Advances in satellite altimetry and in-situ observations allow the quantification of the thermal expansion and ocean mass changes contributing to sea-level rise. Observation-based estimates have shown that the global mean sea-level (GMSL) budget is closed within the uncertainties of each component. However, the GMSL budget has not been closed since 2016. Recent studies have suggested instrumental problems, such as salinity drift in Argo floats and measurement and wet troposphere correction errors in the Jason-3 satellite, which may contribute to this discrepancy. Our analysis shows that although correcting these problems reduces the discrepancy, a non-closure of the sea-level budget remains. This non-closure may be driven by deep-ocean warming. However, estimating thermosteric changes in the deep ocean is challenging due to a lack of observations. Therefore, we quantify the deep-ocean (below 2000 m) contribution using a residual approach based on thermosteric sea level and the Earth energy imbalance. The budget analysis shows that ocean warming below 2000 m has been accelerating since 2016 compared to previous decades. We will discuss these results in more detail at the conference.
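The residual approach described above amounts to simple budget arithmetic: the deep-ocean steric term is whatever part of total sea level is not explained by the measured components. A minimal sketch with made-up values (not the study's data):

```python
import numpy as np

def deep_ocean_residual(gmsl, steric_0_2000, ocean_mass):
    """Infer the deep-ocean (>2000 m) steric contribution as the sea-level
    budget residual: total sea level minus upper-ocean steric change
    (e.g. Argo, 0-2000 m) minus ocean-mass change (e.g. GRACE/GRACE-FO)."""
    return np.asarray(gmsl, dtype=float) - steric_0_2000 - ocean_mass

# Illustrative (made-up) annual anomalies in mm of sea-level equivalent
gmsl = np.array([10.0, 14.0, 19.0])          # altimetry
steric_upper = np.array([4.0, 5.5, 7.0])     # upper-ocean thermosteric
mass = np.array([5.0, 7.0, 9.5])             # barystatic
residual = deep_ocean_residual(gmsl, steric_upper, mass)
```

A growing residual over time would be the budget-analysis signature of accelerating warming below 2000 m, subject to the combined uncertainties of all three measured terms.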
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Uncertainty quantification of sea level altimetry data in the coastal ocean

Authors: Fernando Niño, Léna Tolu, Fabien Léger, Florence Birol, Mathilde Cancet, Pierre Prandi
Affiliations: CNRS - LEGOS, Université de Toulouse, Collecte Localisation Satellites
Recent developments in coastal altimetry processing have made available new datasets of coastal sea level at a distance of only a few kilometers from the shoreline. Here we want to make the most of the new geophysical information this represents by quantifying the associated uncertainties. Uncertainty quantification in altimetry is a very difficult matter; it must take into account systematic and random errors from different kinds of sources: theory, measurements and models. Prandi et al. [Local sea level trends, accelerations and uncertainties over 1993–2019. Scientific Data 8, 1 (2021)] provided a framework for sea level uncertainty for multi-mission gridded data (1°x1°) at the global level. We present the extension of this work to high-resolution along-track data (with a spatial resolution of ca. 300 m) near the coast, over the period January 2002 – December 2019, for the Jason-1/2/3 missions. We also present a new error budget analysis of the satellite altimetry system that takes into account the individual contributions of each altimetric correction (dry and wet tropospheric corrections, ionospheric correction, ocean tides, sea state bias, etc.), and characterize the errors in each one of them as either biases, drifts or noise. We account for the time correlation in errors and estimate at each location the temporal variance-covariance matrix of the uncertainty in local sea level using all these contributions. The resulting variance-covariance matrices are used to estimate the uncertainty metrics associated with local sea level changes (e.g. uncertainty in local sea level or in local sea level trends) using an extended least squares estimator. We thus estimated confidence intervals on sea level trends for 1149 portions of tracks distributed globally near the coast. To characterize the uncertainty in each altimetry correction we apply two different methods: (1) when we have several estimates of a given correction (e.g. from several tide models), we approximate the error of this particular correction as the standard deviation of the differences between these estimates, and (2) we can estimate the standard deviation of the difference with neighboring points. Because we use high-resolution altimetry data, not all points are always available and time series can present gaps. For adequate processing, we fill these missing values with several strategies, and analyze their impact on the calculated uncertainties. Finally, we also present an error budget of the coastal sea level anomaly as a whole and the uncertainty contribution of each correction at the global scale, showing particular examples. Understanding the contribution of each source of error will ultimately help to reduce uncertainties.
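The bias/drift/noise decomposition described in this abstract maps naturally onto a temporal variance-covariance matrix: white noise contributes a diagonal, a constant bias a rank-one block, and a drift a term growing with the product of the two epochs. The sketch below is a toy construction with invented magnitudes, not the authors' error budget.

```python
import numpy as np

# Sketch of a temporal error variance-covariance matrix built from three
# hypothetical error classes (bias, drift, noise); magnitudes illustrative.
n = 120                                  # monthly samples over 10 years
t = np.arange(n) / 12.0                  # time in years

sig_noise = 5.0    # uncorrelated measurement noise (mm)
sig_bias = 2.0     # time-invariant bias (mm), fully correlated in time
sig_drift = 0.3    # drift uncertainty (mm/yr), correlated as t_i * t_j

cov = (sig_noise**2 * np.eye(n)              # white noise: diagonal
       + sig_bias**2 * np.ones((n, n))       # bias: rank-one, constant
       + sig_drift**2 * np.outer(t, t))      # drift: grows with elapsed time

# A valid covariance matrix must be symmetric positive semi-definite.
assert np.allclose(cov, cov.T)
```

A matrix assembled this way can then feed a generalized least squares trend fit, which is the role the extended least squares estimator plays in the abstract.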
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Sea level variations at the world coastlines over the past two decades from reprocessed satellite altimetry

Authors: Lancelot Leclercq, Anny Cazenave, Fabien Léger, Florence Birol, Fernando Nino, Jean-François Legeais, Dr Sarah Connors
Affiliations: Université de Toulouse, LEGOS (CNES/CNRS/IRD/UT3), CLS, ESA
In the context of the ESA Climate Change Initiative Sea Level project, we performed a complete reprocessing of high-resolution (20 Hz, i.e., 350 m) along-track altimetry data of the Jason-1, Jason-2 and Jason-3 missions over January 2002 to June 2021 in the world's coastal zones. This reprocessing provides along-track sea level time series and associated trends from the coast to 50 km offshore over the study period. We call 'virtual coastal stations' the along-track points closest to the coast. This creates a new network of 1160 virtual sites well distributed along the world's coastlines. We performed Empirical Orthogonal Function (EOF) analyses of the sea level time series at the virtual stations, globally and regionally, in order to: (1) identify the main drivers of the coastal sea level variability at interannual time scales, and (2) assess the along-coast coherence of the sea level response to the dominant drivers. The results highlight those coastlines where the EOF first mode reveals a dominant long-term coastal sea level rise. They also help in identifying other regions where the coastal sea level is dominated by interannual variations, highly correlated with natural climate modes. This analysis allows us to clearly separate portions of the world's coastlines displaying different sea level behaviors. In regions where no tide gauge data are available (a large portion of the southern hemisphere), our results provide new information on present-day sea level changes at the coast, hopefully useful for coastal adaptation.
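An EOF decomposition of station time series, as used in this abstract, amounts to an SVD of the de-meaned time-by-station anomaly matrix. A minimal sketch on synthetic (random) data, purely to show the mechanics, not the authors' processing:

```python
import numpy as np

# Minimal EOF (empirical orthogonal function) decomposition via SVD.
# The input field here is synthetic noise standing in for sea-level series.
rng = np.random.default_rng(0)
n_time, n_station = 240, 50              # e.g. monthly samples x virtual stations
field = rng.standard_normal((n_time, n_station))

anom = field - field.mean(axis=0)        # remove temporal mean at each station
u, s, vt = np.linalg.svd(anom, full_matrices=False)

eofs = vt                                # spatial patterns (modes x stations)
pcs = u * s                              # principal-component time series
explained = s**2 / np.sum(s**2)          # fraction of variance per mode

# Reconstruction from all modes recovers the anomaly field exactly.
assert np.allclose(pcs @ eofs, anom)
```

The leading row of `eofs` and column of `pcs` give the first mode; its `explained` fraction indicates how dominant a single pattern (e.g. long-term rise) is along a stretch of coast.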
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Improvements in Estimating Mean Sea Level Trends and Acceleration from Global to Regional Scales

Authors: Anna Mangilli, Pierre Prandi, Victor Quet, Sylvie Labroue, Gerald Dibarboure, Sarah Connors
Affiliations: CLS, CNES, ESA ECSAT
The accurate measurement of the mean sea level (MSL) and the precise estimation of the MSL trend and acceleration, at global and regional scales, are key goals of high-precision satellite altimetry. These estimates are crucial to tackle important scientific questions including the closure of the sea level budget and the assessment of the Earth Energy Imbalance (EEI) in the context of climate change. Great efforts have been made over the last decade to better characterise and understand the uncertainties associated with MSL measurements from radar altimetry, leading to the design of an error variance-covariance matrix describing the temporal correlations of the MSL uncertainty at global (Ablain et al. 2009, Ablain et al. 2019, Guerou et al. 2023) and local (Prandi et al. 2021) scales. Precisely quantifying the observational sea level uncertainties is important because uncertainties inform on the reliability of sea level observations and prevent misinterpretation of artifacts arising from the limitations of the observing system. Following these efforts, the 28-year MSL trend and acceleration uncertainties are now down to ±0.3 mm/yr ([5%-95%] CL) and ±0.05 mm/yr² ([5%-95%] CL) at global scales (Guerou et al. 2023) and, on average, to ±0.83 mm/yr ([5%-95%] CL) and ±0.062 mm/yr² ([5%-95%] CL), respectively, at local scales (Prandi et al. 2021). Yet, further improvements, at both global and regional scales, are still required to address three main scientific questions: 1) the closure of the sea level budget, 2) the detection and attribution of the signal in sea level that is forced by greenhouse gas (GHG) emissions, and 3) the estimation of the current EEI (Meyssignac et al. 2023). Meeting such requirements needs further improvements in the accuracy and precision of satellite altimetry data, in the error description and in the data analysis.
In this talk we will focus on the statistical analysis, showing that a significant improvement in the estimates of the MSL trend and acceleration uncertainties at global scales, of the order of ~15% and ~20%, respectively, can be gained from an optimal General Least Squares estimator, or a Bayesian analysis, which optimally include the covariance matrix in the likelihood function. In particular, we will present the updated optimal analysis (with the GLS and the Bayesian approach) of the MSL time series at global scales from the recently released L2P DT 24 products, discussing the impact on the estimation of the MSL trend and acceleration. The talk will then focus on how these methods can be applied to the MSL analysis at regional scales, showing an improvement in the estimation of the MSL trend and acceleration at local scales of the order of 5%, according to the current description of the error covariance at regional scales. Finally, we will discuss the perspectives, highlighting the benefits of the Bayesian approach for future MSL analysis and the improvements in the MSL error description at local scales when adding the spatial covariance to the error budget. The results presented are obtained within the ESA cci_SL project.
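A GLS fit of bias, trend and acceleration against a known error covariance, in the spirit of the estimator discussed in this abstract, can be sketched as follows. The data, covariance and parameter values are synthetic toys, not the L2P DT 24 products or the authors' error budget.

```python
import numpy as np

# Generalized least squares (GLS) fit of trend and acceleration to a mean
# sea level series, given an error variance-covariance matrix (toy setup).
rng = np.random.default_rng(1)
n = 300
t = np.arange(n) / 12.0                       # time in years, monthly sampling
t = t - t.mean()                              # center to decorrelate estimates

X = np.column_stack([np.ones(n), t, 0.5 * t**2])   # bias, trend, acceleration
cov = 4.0 * np.eye(n) + 1.0 * np.ones((n, n))      # toy noise + common bias (mm^2)

true = np.array([0.0, 3.0, 0.1])              # mm, mm/yr, mm/yr^2
y = X @ true + rng.multivariate_normal(np.zeros(n), cov)

cinv = np.linalg.inv(cov)
normal = X.T @ cinv @ X
beta = np.linalg.solve(normal, X.T @ cinv @ y)
beta_cov = np.linalg.inv(normal)              # formal parameter covariance

trend, trend_sigma = beta[1], np.sqrt(beta_cov[1, 1])
print(f"trend = {trend:.2f} +/- {1.645 * trend_sigma:.2f} mm/yr (90% CL)")
```

Because the covariance enters the normal equations, correlated errors (the `ones` block here) inflate the formal uncertainties relative to an ordinary least squares fit that assumes white noise.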
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Open-Ocean Contribution to Sea-Level Variations over the Norwegian Continental Shelf

Authors: Fabio Mangini, Dr. Antonio Bonaduce, Dr. Léon Chafik, Dr. Roshin Pappukutty Raj
Affiliations: Nansen Environmental And Remote Sensing Center, Department of Meteorology, Stockholm University
At the Living Planet Symposium, we would like to present the results of an ongoing project which investigates the impact of density fluctuations in the North-East Atlantic Ocean on sea-level variations over the Norwegian continental shelf. The project has two main objectives: increasing our understanding of ocean dynamics, and offering insights into the reliability of climate models. The presentation will mostly focus on the first objective, as the analysis of climate models is still under development. The project aims to identify the patterns of density variations in the open ocean that correlate most significantly with sea-level variations over the Norwegian continental shelf. A preliminary result suggests a role for the upper ocean. Specifically, density variations over the upper 200 m of the North-East Atlantic show a statistically significant correlation with the sea-level variability over the Norwegian shelf on both intra-annual and inter-annual timescales (after the contribution of local winds has been removed). This result aligns with the existing literature, which links density variations over the eastern margin of the North Atlantic Ocean to northern European sea-level variations. However, compared to the existing literature, we provide additional information on the depth range over which density variations most affect the Norwegian sea-level variability. Furthermore, our finding is based on more recent observational datasets. Indeed, together with the Norwegian tide gauges and hydrographic stations, we use the ALES-reprocessed coastal satellite altimetry dataset to estimate Norwegian sea-level variations, and the GRACE and GRACE-FO satellite gravimetry missions to estimate the mass component of Norwegian sea-level variations. The project also uses two different products of ocean temperature and salinity to determine whether different spatial resolutions can impact the results. Specifically, the analysis is performed using EN4, which has a spatial resolution of 1°x1°, and ARMOR3D, which has a spatial resolution of 0.25°x0.25°. Both datasets return comparable results.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: How is the global and regional sea level budget closed from the latest observations?

Authors: Marie Bouih, Anne Barnoud, Robin Fraudeau, Ramiro Ferrari, Michaël Ablain, Julia Pfeffer, Dr Anny Cazenave, Benoît Meyssignac, Alejandro Blazquez, Sébastien Fourest, Hugo Lecomte, Lancelot Leclercq, Martin Horwath, Thorben Döhne, Jonathan Bamber, Anrijs Abele, Dr. Antonio Bonaduce, Raj Roshin, Stéphanie Leroux, Nicolas Kolodziejczyk, William Llovel, Giorgio Spada, Andrea Storto, Chunxue Yang
Affiliations: Magellium, LEGOS, Université de Toulouse, CNES, CNRS, UPS, IRD, TUD Dresden University of Technology, University of Bristol, NERSC, DATLAS, UBO-LOPS, CNRS/LOPS, UNIBO, CNR-ISMAR
The closure of the Sea Level Budget (SLB) is a key challenge for modern physical oceanography. First, it is essential that we ensure the proper identification and quantification of each significant contributor to sea level change through this closure. Second, it provides an efficient means to closely monitor and cross-validate the performance of intricate global observation systems, such as the satellite altimetry constellation, satellite gravimetry missions (GRACE/GRACE-FO), and the Argo in-situ network. Third, this closure proves to be a beneficial approach for assessing how well the observed climate variables, such as sea level, barystatic sea level, temperature and salinity, land ice melt, and changes in land water storage, comply with conservation laws, in particular those related to mass and energy. In this presentation, we will discuss the state of knowledge of the global mean and regional sea level budgets with up-to-date observations, encompassing 1) an up-to-date assessment of the budget components and residuals, along with their corresponding uncertainties, spanning from 1993 to 2023 in global mean and throughout the GRACE and Argo era for spatial variations; 2) the identification of the periods and areas where the budget is not closed, i.e. where the residuals are significant; 3) advancements in the analysis and understanding of the spatial patterns of the budget residuals. A focus will be made on the North Atlantic Ocean where the residuals are significantly high. We investigate the potential errors causing non-closure in each of the components (e.g., in situ data sampling for the thermosteric component, geocenter correction in the gravimetric data processing) as well as potential inconsistencies in their processing that may impact large-scale patterns (e.g., centre of reference and atmosphere corrections).
Errors linked to the system observability (due to different sampling and resolution of the various observations) will be quantified with synthetic data extracted from ocean simulations. This work is performed within the framework of the Sea Level Budget Closure Climate Change Initiative (SLBC_cci+) programme of the European Space Agency (https://climate.esa.int/en/projects/sea-level-budget-closure/). This project was initiated by the International Space Science Institute Workshop on Integrative Study of Sea Level Budget (https://www.issibern.ch/workshops/sealevelbudget/).
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Explaining the Global Sea Level Budget Since 1992 From Altimetry, GRACE and Independent Dataset and Models

Authors: Carsten Ludwigsen, Ole Andersen
Affiliations: DTU Space
Global sea level change is a clear indicator of climate change, and any misalignment in the observational system, such as sea level budget misclosure, raises concerns about either incomplete understanding or observational errors (e.g., salinity drift in Argo floats or wet path delay in Jason-3 radiometers). Addressing these discrepancies is essential for accurately assessing sea level rise and its impacts. Building on Ludwigsen et al. (2024), which validated GRACE data using independent land surface mass change estimates, this study extends the analysis to the full 32-year satellite altimetry record. We investigate the transient (seasonal to decadal) and long-term drivers and acceleration of global sea level change. Our results show strong agreement between ocean mass reconstructions and GRACE/GRACE Follow-On (GRACE/FO) data until 2020. However, post-2020 discrepancies emerge, with reconstructions indicating a more pronounced increase in ocean mass than observed by GRACE/FO. This divergence is attributed to the underestimation of precipitation over Western Africa in the ERA5 reanalysis data, which impacts hydrological models and terrestrial water storage estimates. These findings affirm the globally observed ocean mass changes derived from GRACE/FO. Discrepancies between GRACE data and steric-corrected altimetry before 2017 are amplified by salinity drift in the Argo floats and wet path delay errors in Jason-3. However, some residual misclosures remain unexplained and warrant further investigation. This study demonstrates the value of integrating GRACE land mass data, steric-corrected altimetry, and independent reconstructions to identify gaps in current monitoring systems. Addressing these gaps is critical for improving the accuracy of sea level budgets, enhancing our understanding of regional variability, and resolving the drivers of accelerating sea level trends observed over the past three decades.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: 20-Year-Long Sea Level Changes Along The World’s Coastlines From Satellite Altimetry: The New ESA CCI Dataset Of Coastal Virtual Stations

Authors: Jean-François Legeais, Dr Anny Cazenave, Lancelot Leclercq, Fabien Léger, Dr Florence Birol, Fernando Niño, Dr Marcello Passaro, PhD Sarah Connors
Affiliations: CLS, CNRS-LEGOS-CTOH, Technical University of Munich, ESA/ECSAT
In the context of the ESA Climate Change Initiative (CCI) Coastal Sea Level project, a complete reprocessing (including retracking of the radar waveforms) of high-resolution (20 Hz, i.e. 350 m) along-track altimetry data of the Jason-1, Jason-2 and Jason-3 missions since January 2002 was performed along the world's coastal zones. The latest release (v2.4) of this SL_cci coastal altimeter sea level dataset covers the period January 2002 to June 2021 and is now available to users (https://doi.org/10.17882/74354). A new, improved processing of the waveform retracking and computation of the coastal sea level anomalies was developed, and a new editing procedure for the coastal sea level trend computation was implemented. We now obtain a dataset of more than 1100 coastal virtual stations (i.e., the location of the first valid point from the coast along the satellite track) at an average distance from the coast of about 3 km, including more than 200 stations at less than 2 km from the coast. These coastal sea level anomalies and trends of the altimetry-based virtual stations have been validated against tide gauge data where possible. The project also focuses on the estimation of improved sea level uncertainties at regional and local scales. This dataset provides valuable information where there are no other sea level measurements and makes it possible to fill gaps in the historical time series of some nearby tide gauges. It can be used to analyse the coastal sea level variability (we show an example in the Mississippi river delta) and to determine the dominant forcing factors of this variability both at local scale and along the world's coastlines. Future versions of these coastal virtual stations are planned, with extended temporal coverage (with Sentinel-6 MF), improved altimeter processing and characterization of the associated uncertainties. We are also preparing the production of a dataset of sea level time series relative to ground motion, to better answer the needs of coastal adaptation policies.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A Multiplatform Approach to Explore Sentinel-6 LRM and SAR Measurements at Different Temporal and Spatial Scales

Authors: Mathilde Cancet, Florence Birol, Claude Estournel, Fabien Léger, Rosemary Morrow
Affiliations: CNRS-LEGOS, Université de Toulouse
Monitoring and predicting sea level changes are critical issues for coastal populations and ecosystems, and the 30-year record of satellite radar altimetry observations is an outstanding tool contributing to the understanding of ocean processes and changes. From 1992 (Topex/Poseidon) to 2020 (Jason-3), the reference satellite nadir altimetry missions used to estimate sea level changes were operated with conventional altimeters in Low Resolution Mode (LRM), i.e. measuring sea surface height estimates averaged within a circular footprint of about 10 to 15 km in diameter. This altimetry technique is known to encounter issues in coastal environments, due to land contamination in the radar signal. The SAR (Synthetic Aperture Radar) or delay-Doppler altimetry technique, first operated on CryoSat-2 since 2010 and then on Sentinel-3 since 2016, provides sea surface height measurements with much higher resolution along the track (about 300 m), and still about 15 km across the track. With this SAR technique, the along-track noise is almost halved compared to the LRM technique, and this approach enables new insights into meso-scale dynamics, coastal circulation and ocean changes. Sentinel-6, the current reference mission that took over from Jason-3 on the reference orbit in 2020, provides a unique opportunity to directly compare both modes, as it is operated in interleaved LRM and SAR modes. Building a long time series from the reference missions poses the question of continuity between LRM and SAR modes, as the SAR technique may provide different information than LRM, depending on the temporal and spatial scales. Having sea surface height estimates measured in both modes almost simultaneously is also an opportunity to better understand the content of LRM observations in past missions and possibly better separate the noise and the signals in the 30-year archive.
In this study, we explore Sentinel-6 LRM and SAR sea surface height measurements in the North-Western Mediterranean Sea, at different temporal (from one pass to seasonal) and spatial scales. We take advantage of the wealth of multiplatform data available in the region, such as in situ observations, model simulations and the new SWOT altimetry mission, which provides 2D sea surface height images, to analyze specific events and better understand the physical content and the limitations of the SAR and LRM nadir altimetry observations, in order to build a consistent long-term record in coastal regions.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Impact of Using FES2022b Tidal Model for Climate Scales

Authors: Loren Carrere, Perrine Abjean, Florent Lyard, Gerald Dibarboure
Affiliations: CLS, LEGOS/CNRS, CNES
The accuracy of altimeter measurements has improved greatly over the last 30 years thanks to improvements in the instrumental, environmental and geophysical parameters, leading to unprecedented data accuracy. In particular, a new global tidal model, FES2022b, has been produced (https://www.aviso.altimetry.fr/en/data/products/auxiliary-products/global-tide-fes.html), taking advantage of longer and more accurate altimeter time series, new missions, an improved global bathymetry and a refined mesh. These accurate data and models allow investigating applications such as climate variability with more accuracy. The present analysis focuses on the impact of the recent global tidal models on climate scales. Historically, some mean sea level (MSL) residual error has been visible at a 58.77-day period, which corresponds to the frequency of the S2 semi-diurnal tide aliased by the Topex/Jason altimeters and to instrument errors detected on the altimeter (beta-prime variability). Recent insights into tidal models have also detected the existence of long-term tendencies in the amplitude of some of the main tidal waves (Ray 2024). The impact of the tidal model on the global and regional trends of the mean sea level and on the long-term variability, such as annual and semi-annual signals, has been analyzed here. Comparisons are made using three different models: FES2022b, FES2014 and GOT4.10. The study focuses on the reference missions which are generally used to estimate the global mean sea level trends and accelerations (cf. https://www.aviso.altimetry.fr/en/data/products/ocean-indicators-products/mean-sea-level.html): Topex, Jason-1, Jason-2, Jason-3 and Sentinel-6MF. Results indicate a weak impact of the model choice on the global MSL trends and a stronger one on the regional MSL trends. Moreover, the FES2022b and FES2014 tidal models consistently reduce the residual long-term variability of the ocean at annual and semi-annual periods. As already stated in other studies (Zawadzki et al. 2016), the tidal model has an impact on the residual signal at the 58.77-day period. FES2014 and FES2022b yield a low and consistent 58.77-day error between the T/P, Jason-1, Jason-2, Jason-3 and Sentinel-6MF global MSL. FES2022b reduces the 58.77-day errors in the Topex/Poseidon global MSL but raises them slightly in the other missions studied. Locally, the 58.77-day errors are equivalent between FES2022b and FES2014.
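The 58.77-day period mentioned in this abstract follows directly from tidal aliasing: a constituent with a period shorter than the satellite's ~9.9156-day repeat cycle is undersampled and folds to a much longer apparent period. A minimal sketch of that computation:

```python
# Alias period of a tidal constituent under a satellite's repeat sampling.
# For the S2 tide (12.00 h period) sampled at the Topex/Jason ~9.9156-day
# repeat, this reproduces the ~58.77-day error period discussed above.
def alias_period_days(tide_period_hours: float, repeat_days: float) -> float:
    f = 24.0 / tide_period_hours          # tidal frequency, cycles/day
    n = round(f * repeat_days)            # nearest whole number of cycles per repeat
    f_alias = abs(f - n / repeat_days)    # aliased (folded) frequency, cycles/day
    return 1.0 / f_alias

print(f"S2 alias period: {alias_period_days(12.0, 9.915645):.2f} days")
```

Any residual S2 error in the tidal correction therefore shows up in the MSL record at this slow beat period rather than at 12 hours, which is why it can contaminate climate-scale signals.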
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Understanding uncertainties in the satellite altimeter measurement of coastal sea level: insights from a round robin analysis.

Authors: Florence Birol, François Bignalet-Cazalet, Mathilde Cancet, Jean-Alexis Daguze, Wassim Fkaier, Ergane Fouchet, Fabien Léger, Claire Maraldi, Fernando Niño, Marie-Isabelle Pujol, Ngan Tran
Affiliations: CNRS - LEGOS, Université de Toulouse, Collecte Locallisation Satellites, Noveltis, Centre National d'Etudes Spatiales (CNES), Noveltis
The satellite radar altimetry record of sea level has now surpassed 30 years in length. These observations have greatly improved our knowledge of the open ocean and are now an essential component of many operational marine systems and climate studies. But the use of altimetry close to the coast remains a challenge from both a technical and a scientific point of view. Here, we take advantage of the recent availability of many new algorithms developed for altimetry sea level computation to quantify and analyze the uncertainties associated with the choice of algorithms when approaching the coast. To achieve this objective, we performed a round robin analysis of radar altimetry data, testing a total of 21 solutions for retracking waveforms, correcting sea surface heights and finally deriving sea level variations. The uncertainty associated with each of the components used to calculate the altimeter sea surface heights is estimated by measuring the dispersion of the sea level values obtained using the various algorithms considered in the round robin for that component. We intercompare these uncertainty estimates and analyze how they evolve from the open ocean to the coast. At regional scale, complementary analyses are performed through comparisons with independent tide gauge observations. The results show that tidal corrections and the mean sea surface can be significant contributors to sea level data uncertainties in many coastal regions. However, improving the quality and robustness of the retracking algorithm used to derive both the range and the sea state bias correction is today the main factor in bringing accurate altimetry sea level data closer to the shore than ever before. Full details of this work can be found in the article "Understanding uncertainties in coastal sea level altimetry data: insights from a round robin analysis" (Birol et al., Ocean Science, 2024 - https://doi.org/10.5194/egusphere-2024-2449, 2024).
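The dispersion metric described in this abstract, the spread of sea level values across candidate algorithms for one processing component, can be sketched as follows. The data are synthetic, and the counts (5 algorithms, 1000 points) are invented for illustration, not the round robin's actual 21 solutions.

```python
import numpy as np

# Round-robin style uncertainty estimate: for one processing component,
# the per-point dispersion of sea level across candidate algorithms is
# taken as that component's uncertainty (synthetic data, values in metres).
rng = np.random.default_rng(2)
n_algos, n_points = 5, 1000               # hypothetical: 5 retrackers, 1000 points

common_signal = rng.standard_normal((1, n_points))       # shared ocean signal
algo_scatter = 0.05 * rng.standard_normal((n_algos, n_points))
sla = common_signal + algo_scatter        # each row: one algorithm's sea level

# Per-point dispersion across algorithms, then its median as a summary.
dispersion = sla.std(axis=0, ddof=1)
print(f"median inter-algorithm dispersion: {np.median(dispersion):.3f} m")
```

The shared signal cancels in the across-algorithm standard deviation, so the dispersion isolates the algorithm-choice uncertainty; tracking its median as a function of distance to the coast mirrors the open-ocean-to-coast analysis in the abstract.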
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A.08.03 - POSTER - Ocean Salinity

Ocean salinity is a key variable within the Earth’s water cycle and a key driver of ocean dynamics. Sea surface salinity (SSS) has been identified as an Essential Climate Variable by the Global Climate Observing System (GCOS) and an Essential Ocean Variable by the Global Ocean Observing System (GOOS). Through the advent of new observing technologies for salinity and the efforts to synthesize salinity measurements with other observations and numerical models, salinity science and applications have advanced significantly over recent years.
This session will foster scientific exchanges and collaborations in the broad community involved in ocean salinity science and applications, widely encompassing satellite salinity (e.g., SMOS and SMAP) data assessment and evolution, multi-mission merged product generation (e.g., CCI-salinity), exploitation of in-situ assets for calibration and validation and related platforms (e.g., the Salinity PI-MEP) and, ultimately, broad salinity-driven oceanographic/climatic applications and process studies.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Observing Mesoscale Ocean Surface Salinity From Space: A New Instrument Concept

Authors: Shannon Brown, Alan Tanner, Sidharth Misra, Peter Gaube, Severine Fournier, Alex Akins
Affiliations: Jet Propulsion Laboratory, University of Washington
The advent of spaceborne L-band (1.2-1.4 GHz) passive and active microwave systems has unequivocally demonstrated the ability to measure salinity from low Earth orbit. We have learned a great deal about how to measure salinity from space and know well the limitations of the current generation of systems. A community paper, Vinogradova et al. (2019), outlines a roadmap for the observational resolution/accuracy and technology development needed for the coming decade and makes the case for both observational continuity and enhancement. In this presentation, we focus on activities that aim to enhance salinity remote sensing from space. Two areas of improvement in mission design are needed. One is observing at several bands between 1.4 GHz and 10 GHz to simultaneously resolve sea surface salinity, temperature and ocean roughness (wind). This addresses the retrieval part of the problem, minimizing the error from needing ancillary wind and temperature information. The other improvement is increasing the spatial resolution and the radiometric precision to resolve sub-mesoscale ocean features and extend the measurement into the polar oceans and near coasts. The science community recommends missions to observe sea surface salinity with an accuracy of < 0.2 psu at 10 km spatial resolution on time scales < 3 days. However, no current technology meets the requirements for enhancement. In this presentation, we will highlight a new mission concept for a future spaceborne ocean observatory. This sensor is designed to measure salinity at 0.2 psu resolution without any temporal averaging, as is currently done with SMAP and SMOS. It provides 0.2 psu precision at < 10 km spatial resolution for a single snapshot. We will discuss the expected capability in terms of measurement accuracy, precision and spatial resolution for simultaneous salinity, temperature and wind retrieval and highlight the technology that will get us there.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Advancing the Understanding of Salinity Dynamics in the Baltic Sea Through Integrated Satellite, In Situ, and Numerical Modeling Approaches

Authors: Rafael Catany, Dr Andreas Lehmann, Dr Lara Schmittmann, Dr Hela Mehrtens, Professor Miroslaw Darecki, Dr Anna Bulczak, Dr Jaromir Jakacki, Dr Maciej Muzyka, Dr Daniel Rak, Dr Dawid Dybowski, Professor Lidia Dzierzbicka-Glowacka, A Nowicki, Dr Joanna Ston-Egiert, Dr M Ostrowska, PD Dr-Ing. habil Luciana Fenoglio, Dr Jiaming Chen, Dr Artu Ellman, Dr Nicole Delpeche-Ellmann, Quentin Jutard, Marine Bretagnon, Phillippe Bryère, Dr Laurent Bertino, Dr Raphaël Sauzède, Natascha Mohammadi, Giovanni Corato, Dr Roberto Sabia
Affiliations: Albavalor, GEOMAR, IOPAN, Bonn University, TalTech, ACRI-ST, NERSC, LOV, AdwaisEO, ESA ESRIN
The Baltic Sea is a semi-enclosed shelf sea with distinct geographical and oceanographic features. One of the Baltic's most notable characteristics is its horizontal surface salinity gradient, which decreases from the saline North Sea to the near-fresh Bothnian Sea in the north and the Gulf of Finland in the east. Additionally, a vertical gradient and strong stratification separate less saline surface water from deep saline water. These salinity features are mainly driven by river runoff, net precipitation, wind conditions, and geographic factors that lead to restricted and irregular saltwater inflow into the Baltic and limited mixing. The overall positive freshwater balance causes the Baltic to be much fresher than fully marine ocean waters, with a mean salinity of only about 7 g/kg. The Baltic Sea is particularly sensitive to climate change and global warming due to its small volume and limited exchange with the world oceans. Consequently, it is changing more rapidly than other regions. Recent changes in salinity are less clear due to high variability, but overall surface salinity decreases, with a simultaneous increase in the deeper water layers. Furthermore, the overall salinity distribution is indirectly linked to the general circulation of the Baltic Sea, which consists of cyclonic circulation cells that comprise the main basins. Thus, improving the understanding of the salinity dynamics leads to a better understanding of the circulation in the Baltic Sea. The project 4DBALTDYN (May 2024 to May 2026) will build upon and enhance the previous Baltic+ Salinity SSS (Sea Surface Salinity, 2011-2019) dataset. By integrating highly spatially resolved SMOS satellite SSS data with in situ observational data and numerical modelling, this project aims to improve our understanding of the Baltic Sea's salinity dynamics. The SMOS SSS data provide continuous monitoring of the evolution of the surface salinity of the entire Baltic Sea.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Mechanisms of tropical sea surface salinity variations at seasonal timescales

Authors: Antoine Hochet, Soumaïa Tajouri, Nicolas Kolodziejczyk, William Llovel
Affiliations: University of Brest, CNRS, IRD, Ifremer, UBO LOPS
Climate-coupled models typically overestimate the amplitude of the seasonal cycle of sea surface salinity (SSS) in the tropics. A better understanding of the mechanisms controlling the seasonal variance of SSS could provide directions for improving the representation of the SSS seasonal cycle amplitude in these models. In this work, we use a novel framework based on a seasonal Salinity Variance Budget (SVB), which we apply to the Estimating the Circulation and Climate of the Ocean (ECCO) state estimate, to study the mechanisms controlling the variance of seasonal SSS in the tropical oceans. Our findings reveal that oceanic advection, vertical diffusion, and freshwater fluxes from rivers and precipitation all play an important role in controlling the amplitude of the seasonal cycle, but their impact varies regionally. The SVB framework effectively distinguishes between "sources" (mechanisms that enhance variance) and "sinks" (mechanisms that dampen variance). We show that vertical diffusion acts as the primary sink across most regions, except for the eastern Arabian Sea, where precipitation dominates as the main sink. In other regions of the tropical oceans, precipitation and river runoff act as sources of variance. The effect of the advective term on the SSS variance is shown to be mainly the sum of two terms: first, a term associated with the spatial redistribution of the variability by the eddy-parametrized oceanic circulation, and second, a term associated with a transfer of salinity variance between the time-mean and seasonal circulations.
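The kind of seasonal salinity variance budget this framework rests on can be written schematically. The formulation below is an illustrative sketch, not the authors' exact equation, with S' the seasonal SSS anomaly, u the velocity, D_z the vertical-diffusion tendency and F the surface freshwater-flux tendency:

```latex
\begin{equation*}
\underbrace{\frac{\partial}{\partial t}\,\overline{\tfrac{1}{2}S'^{2}}}_{\text{variance tendency}}
=\;
\underbrace{-\,\overline{S'\,(\mathbf{u}\cdot\nabla S)'}}_{\text{advection}}
\;+\;
\underbrace{\overline{S'\,D_z'}}_{\text{vertical diffusion}}
\;+\;
\underbrace{\overline{S'\,F'}}_{\text{precipitation and runoff}}
\end{equation*}
```

A term on the right-hand side acts as a source of seasonal variance where the correlation between S' and the corresponding tendency anomaly is positive, and as a sink where it is negative.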
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Towards Physically Consistent Copernicus Imaging Microwave Radiometer Level 2 Products for the Global Ocean and Atmosphere

Authors: Robin Ekelund, Christophe Accadia
Affiliations: EUMETSAT
The Copernicus Imaging Microwave Radiometer (CIMR) is an upcoming European satellite mission in the framework of the Copernicus Sentinel expansion programme, specifically designed to support European integrated policy through augmented monitoring of global warming and Arctic amplification. To this end, CIMR will by design monitor several essential climate variables at the poles, over land and over the global ocean through high-spatial-resolution, low-frequency passive microwave satellite measurements. Developed by the European Space Agency, the mission will consist of at least two satellites, with the first launch planned in 2029 and the second seven years later. CIMR will fly in a sun-synchronous orbit and observe using a 360-degree conically scanning viewing geometry with a minimum swath width of 1900 km, allowing sub-daily no-hole coverage of the poles and dual observations at each location (forward and aft views). It will measure the full polarisation at L, C, X, K and Ka band (central frequencies at 1.4135, 6.925, 10.65, 18.7 and 36.5 GHz, respectively) with stringent requirements on noise equivalent difference temperature (NEdT). Footprint sizes range from 60 km to 5 km, from the lowest to the highest channel frequency. On-board hardware will be used to mitigate radio frequency interference, an ever-increasing issue for satellite-borne microwave sensors. The European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) is tasked with the development, generation and distribution of L2 surface and atmospheric variables covering the global ocean. These products are the sea surface salinity (SSS), sea surface temperature (SST), ocean wind vector (OWV), total column water vapour (TCWV), liquid water path (LWP) and liquid precipitation (PCP). They are essential for understanding the influence of global warming and Arctic amplification upon the hydrological cycle, boundary layer interaction and ocean dynamics.
The sensor concept is well and uniquely suited to the retrieval of SSS, SST and OWV. L-band is a requirement for salinity remote sensing, as sensitivity to salinity is essentially limited to this frequency band. CIMR will in this respect provide continuity and improvement with respect to current L-band missions, i.e. ESA SMOS and NASA SMAP. Detection of the full polarisation allows the retrieval of wind direction and correction for the Faraday rotation of radiation in the ionosphere without auxiliary total electron content data. While CIMR lacks a dedicated water vapour channel, it will have sensitivity to TCWV, LWP and PCP, albeit reduced compared to missions dedicated to the atmosphere such as the EUMETSAT Polar System - Second Generation (EPS-SG) Microwave Imager (MWI). Together, however, these variables form a comprehensive L2 product portfolio for boundary layer monitoring. The development of the global ocean and atmosphere L2 product portfolio is currently in its early phase at EUMETSAT. The selected retrieval algorithm is a physically based optimal estimation algorithm, which will ensure physical consistency between the retrieved product variables. The products will be distributed on high-, medium- and low-resolution grids, depending on the channels that the variable in question primarily relies upon. The SST product will comply with the development and distribution standards set up by the Group for High Resolution Sea Surface Temperature (GHRSST) international science group. Validation activities consider cross-validation with the upcoming Metop-SGB satellite, which will carry the Scatterometer (SCA) and MWI instruments for the retrieval of winds, humidity and clouds. The CIMR orbit is synchronized with that of Metop-SGB to provide collocated observations within a 10-minute difference at the poles. Surface parameters will be validated using the networks of Argo profiling floats and Towards fiducial Reference Measurements of Sea-Surface Temperature by European Drifters (TRUSTED).
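For a linear forward model with Gaussian errors, the optimal estimation retrieval mentioned above reduces to a standard closed-form maximum a posteriori solution. The sketch below illustrates that general idea only; the Jacobian, covariances and two-variable state are toy values, not actual CIMR channel sensitivities or the operational algorithm:

```python
import numpy as np

# Minimal optimal-estimation (MAP) sketch for a linear forward model
# y = K x + noise. All numbers are illustrative toy values.

def oem_retrieve(y, K, x_a, S_a, S_e):
    """Return the MAP state estimate and posterior covariance.

    y   : observation vector (e.g. brightness temperatures)
    K   : Jacobian of the forward model
    x_a : a priori state
    S_a : a priori covariance
    S_e : observation-error covariance
    """
    S_a_inv = np.linalg.inv(S_a)
    S_e_inv = np.linalg.inv(S_e)
    # Standard linear-Gaussian posterior covariance and mean
    S_hat = np.linalg.inv(K.T @ S_e_inv @ K + S_a_inv)
    x_hat = x_a + S_hat @ K.T @ S_e_inv @ (y - K @ x_a)
    return x_hat, S_hat

# Toy 2-state (e.g. SSS, SST), 3-channel example
K = np.array([[0.5, 0.1], [0.2, 0.8], [0.3, 0.4]])
x_true = np.array([7.0, 10.0])   # e.g. salinity in g/kg, temperature in degC
y = K @ x_true                   # noise-free synthetic observation
x_a = np.array([6.0, 9.0])
S_a = np.eye(2) * 4.0            # loose prior
S_e = np.eye(3) * 1e-4           # very accurate channels
x_hat, S_hat = oem_retrieve(y, K, x_a, S_a, S_e)
```

Because every retrieved variable comes from the same joint state vector, the estimate is physically consistent by construction, which is the property the abstract emphasizes.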
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: New regional SSS fields developed at CATDS CEC-OS

Authors: Jacqueline Boutin, Dr Jean-Luc Vergely, Dr Gilles Reverdin, Dr Léa Olivier, Dr Stéphane Tarot
Affiliations: CNRS/LOCEAN, ACRI-ST, IFREMER
The Ocean Salinity Center of Expertise for CATDS (CATDS CEC-OS) works on improving methodologies to be implemented in the future in the near-real-time CATDS processing chain (CATDS-CPDC). The primary goal is to generate global Level 3 SMOS SSS fields, the latest being CATDS CEC-LOCEAN Debiased V9. However, the methodology used to derive these global fields does not always fit specific regional needs well. This led us to derive two specific regional fields:
- high temporal resolution fields in rapidly variable areas, such as river plumes;
- a specific processing for the Arctic Ocean.
Global SSS products currently distributed by the CATDS CEC-OS are smoothed over 9 or 18 days and sampled every 4 days. In highly variable regions, such as river plumes, variations in surface salinity are expected on much smaller time scales. Knowledge of these regional variations is of interest for studying processes related to the fate of freshwater and associated biogeochemistry [e.g. Olivier et al., 2024]. Hence, we have developed a new temporal interpolation scheme that intends to keep as much high temporal frequency SSS variability (over typically one or two days) as possible. The temporal resolution of satellite SSS products is limited by revisit times. The combination of ascending and descending SMOS passes allows revisit times of the order of 1.5 days. The addition of SMAP data makes it possible to consolidate this information. These fields are provided over 8 regions from 2010 to 2021. Over the Arctic Ocean, the methodology originally derived by Supply et al. [2020] and implemented in the SMOS ARCTIC SSS V1 maps has been revisited.
9-day and 18-day maps are provided over the June 2010 to August 2023 period. A temporal optimal interpolation with a bias removal depending on the SMOS observation geometry (see a general description in [Boutin et al., 2018]) has been added. Comparisons with independent in situ datasets, conducted at CEC LOCEAN and at PIMEP, indicate a clear improvement (reduction of the standard deviation of the difference by roughly a factor of 2 and a systematic increase of r²); with V2.0, r² is greater than 0.8 for 40% of the data sets considered at PIMEP. As in version 1, SSS maps are provided on an Equal-Area Scalable Earth Grid 2 (EASE2) with a Northern Hemisphere azimuthal projection and a resolution of 25 km. These results led to the development of a new operational chain for deriving a SMOS SSS Arctic product.
Reference datasets:
Boutin J. and Vergely J.-L. (2024). SMOS ARCTIC SSS L3 V2 maps produced by CATDS CEC LOCEAN. SEANOE. https://doi.org/10.17882/98769
Boutin J., Vergely J.-L., Olivier L., Reverdin G., Perrot X., Thouvenin-Masson C. (2022). SMOS SMAP High Resolution SSS maps in regions of high variability, generated by CATDS CEC. SEANOE. https://doi.org/10.17882/90082
Boutin J., Vergely J.-L., Khvorostyanov D. (2024). SMOS SSS L3 maps generated by CATDS CEC LOCEAN, debias V9.0. SEANOE. https://doi.org/10.17882/52804#109630
Bibliography:
Boutin, J., J.-L. Vergely, S. Marchand, F. D'Amico, A. Hasson, N. Kolodziejczyk, N. Reul, G. Reverdin, and J. Vialard (2018), New SMOS Sea Surface Salinity with reduced systematic errors and improved variability, Remote Sensing of Environment, 214, 115-134, https://doi.org/10.1016/j.rse.2018.05.022
Olivier, L., et al. (2024), Late summer northwestward Amazon plume pathway under the action of the North Brazil Current rings, Remote Sensing of Environment, 307, https://doi.org/10.1016/j.rse.2024.114165
Supply, A., J. Boutin, J.-L. Vergely, N. Kolodziejczyk, G. Reverdin, N. Reul, and A. Tarasenko (2020), New insights into SMOS sea surface salinity retrievals in the Arctic Ocean, Remote Sensing of Environment, 249, 112027, https://doi.org/10.1016/j.rse.2020.112027
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: D.01.01 - POSTER - Collaborative Innovation: building a Digital Twin of the Earth System through Global and Local Partnerships

The concept of a Digital Twin of the Earth System holds immense potential for revolutionizing our understanding and management of our planet. However, building such a complex and comprehensive system requires a global effort. This session explores the power of collaborative innovation in bringing together diverse stakeholders to create a robust and impactful Digital Twin Earth.

In this session, we invite contributions to discuss the following key topics:

- International Collaborations and Global Initiatives
We seek to highlight major international collaborations, such as ESA's Digital Twin Earth and the European Commission's Destination Earth, which exemplify the collective effort needed to develop these advanced systems. Contributions are welcome from successful international projects that demonstrate the potential for global partnerships to significantly advance the development and application of the Digital Twin Earth.

- Public-Private Partnerships (Industry and Academia Collaborations)
We invite discussions on innovative models for funding and resource allocation within public-private partnerships, which are crucial for sustainable development and effective environmental monitoring. Contributions from tech companies and startups that have been instrumental in developing key technologies for the Digital Twin Earth are especially welcome, showcasing the private sector's vital role in this global initiative.

- Local and Community Engagement
Engaging local communities and fostering grassroots initiatives are essential for the success of the Digital Twin Earth. We invite contributions that discuss the role of citizen scientists in data collection, monitoring, and validation efforts. Examples of training and capacity-building programs that empower local communities and organizations to actively participate in and benefit from these advanced technologies are also sought. Additionally, we welcome examples of successful local collaborations that highlight the positive impact of digital twin technologies on environmental monitoring and resilience.

- Multi-Disciplinary Approaches
Addressing the complex challenges of developing a Digital Twin Earth requires a multi-disciplinary approach. We seek contributions that integrate diverse expertise from climate science, data science, urban planning, and public policy to create comprehensive digital twin models. Discussions on developing standards and protocols for interoperability and effective data sharing among stakeholders are critical for holistic problem-solving and are highly encouraged.

- Policy and Governance Frameworks
We invite contributions that explore policy and governance frameworks supporting the development of policies for sustainable development and climate action. Effective governance structures that facilitate collaboration across different levels of government, industry, and academia are crucial. Additionally, we seek discussions on addressing ethical, privacy, and regulatory considerations to ensure the responsible use of digital twin technologies.

By fostering international collaborations, leveraging public-private partnerships, engaging local communities, integrating diverse expertise, and developing robust policy frameworks, this session aims to collectively advance the development of the Digital Twin Earth. This holistic approach ensures that the Digital Twin Earth is not only a technological marvel but also a collaborative, inclusive, and impactful tool for sustainable development and environmental resilience.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: SNOWCOP - Unlocking the Full Potential of Copernicus Data and Infrastructure to Improve Meltwater Monitoring in the Andes

Authors: Carlo Marin, Riccardo Barella, Valentina Premier, Claudia Notarnicola, Alexander Jacob, James McPhee, Jaime Ortega, María Ignacia Orell, Paloma Palma, Cristóbal Sardá, Jeroen Dries, Patrick Henkel, Markus Lamm, Mariano Masiokas, Lucas Ruiz, Ezequiel Toum, Leandro Cara, Carolina Adler, Pierre Pitte, James Thornton
Affiliations: Eurac Research, University of Chile, Vito Remote Sensing, ANAVS, IANIGLA, MRI
The meltwater contribution from snow and ice in mountainous regions plays a critical role in sustaining life downstream, supporting potable water security, agriculture, industry, hydropower generation and mining, especially under the current regime of climate change. To effectively address this challenge, the Horizon Europe project SNOWCOP, started in October 2024, leverages the full potential of European Union Copernicus data and infrastructure to provide novel snow water equivalent (SWE) and ice melting rate maps with high spatio-temporal resolution, suitable for monitoring meltwater dynamics in complex mountainous terrain. Cutting-edge methods will be employed to extract valuable information about snow and glaciers from satellite data, which will then be assimilated into a physically based model for snow and ice water equivalent estimation. The model will be run in hindcast mode, generating reanalysis data spanning the past 20+ years at daily resolution. Notably, the proposed approach yields SWE maps at a 50-m pixel size, achieving an unprecedented level of spatial detail for the vast area of the extra-tropical Andes Cordillera. The Copernicus Data Space Ecosystem (CDSE) infrastructure*, which houses both processing facilities and a comprehensive data repository, together with the openEO specifications, serves as the backbone of this project. The CDSE will be populated with all the necessary code, in-situ data, and third-party space data, enabling the seamless extraction, processing, and analysis of mountain cryosphere-related information. SNOWCOP will leverage innovative snow stations powered by EGNSS technology to reinforce in-situ measurements of SWE and liquid water content (LWC). These stations will be positioned at locations that optimally represent the snowmelt dynamics within a specific catchment, based on the project-generated reanalysis data.
To further enhance accessibility and utilization, a user-friendly, standardized API and a robust dissemination strategy will be implemented in collaboration with key public authorities in Europe and South America. This strategic approach aims to attract new users from both the scientific and commercial sectors, ensuring that the project's valuable data and insights reach a broad audience. The primary results of the snow modelling developments and the integration of the SWE estimation method within the CDSE, along with initial community and policymaker engagement activities, will be presented at the conference. Focusing on the critical Andes mountain range, where meltwater serves as a vital lifeline for millions of people but remains poorly monitored, this initiative leverages the expertise of the International Copernicus partners at the University of Chile. Through this collaboration, we tackle shared challenges faced by mountain regions globally, developing replicable solutions that unlock new opportunities for both local and European communities.
*https://dataspace.copernicus.eu/
SNOWCOP has received funding from the European Union's Horizon Europe Research and Innovation Actions programme under Grant Agreement 10180133.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Collaboration around standardized benchmarks: Finding the common ground between Ocean and Data scientists

Authors: Quentin Febvre, Alexis Mouche, Antoine Grouazel, Julien Le Sommer, Clément Ubelmann, Ronan Fablet
Affiliations: Ifremer, IMT Atlantique, CNRS, Datlas
The development of digital twins of the ocean system will require collaboration across different fields. Deep learning is an integral part of this landscape, bringing both opportunities to be seized and challenges to be addressed. Indeed, developing deep learning methods to analyze ocean observation data requires methodological (ML) expertise; it also necessitates instrumental and geophysical knowledge. Furthermore, in an evolving observing system with changing instruments, and in a changing climate, correctly diagnosing and monitoring the outputs of neural networks is paramount. This is essential for maintaining the quality of downstream products and scientific outputs. We present here general principles for designing standardized benchmarks which can serve as the foundation for the collaborative development of deep learning solutions to ocean observation problems. Ocean scientists define scientific or operational objectives using data (observations, reanalyses, simulations, ...) and an evaluation framework; data scientists develop data-driven solutions for the specified problem. Such collaboration relies on iteratively identifying the failure cases and updating the evaluation cases, metrics and methods in order to guide further improvements. We present which concepts (data and code versioning, experiment tracking, workflow management) and Python libraries (hydra, mlflow, dvc) can be used to implement such iterative and collaborative spaces. We detail the feedback acquired from two projects with different scopes:
- A public data challenge on SSH mapping from satellite altimeters. The project aims at providing easy data access, reproducible and extensible processing pipelines, as well as an automated evaluation workflow. This initiative demonstrates ways to facilitate the participation of outside research teams and individuals in a specific ocean observation challenge.
- An internal evaluation bench for sea state parameter inversion from Synthetic Aperture Radar images.
The goal of this work is to install, track and manage ML models as part of a production pipeline with evolving processing steps and validation test cases. This project showcases how to develop, maintain and iterate on deep learning algorithms for observation analysis.
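The iterative loop the abstract describes, in which scientists fix named evaluation cases and a metric, modellers submit candidates, and flagged failure cases drive the next iteration, can be sketched in a few lines. Everything below (function names, the two cases, the 0.5 threshold) is hypothetical and not taken from either project's codebase:

```python
import math

# Hypothetical standardized-benchmark sketch: a metric, named evaluation
# cases with ground truth, and a loop that scores a candidate model and
# flags the failure cases to guide the next iteration.

def rmse(pred, truth):
    """Root-mean-square error between two equal-length sequences."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth))

def evaluate(model, cases, metric, threshold):
    """Score a model on each named case; flag cases scoring above threshold."""
    results, fail_cases = {}, []
    for name, (inputs, truth) in cases.items():
        score = metric(model(inputs), truth)
        results[name] = score
        if score > threshold:
            fail_cases.append(name)
    return results, fail_cases

# Two toy evaluation cases: (inputs, ground truth)
cases = {
    "gulf_stream": ([1.0, 2.0], [1.1, 2.1]),
    "equatorial":  ([3.0, 4.0], [3.0, 5.0]),
}
identity_model = lambda xs: list(xs)   # stand-in for a data-driven candidate
results, fails = evaluate(identity_model, cases, rmse, threshold=0.5)
```

In practice each pair (results, fails) would be logged per run (this is where experiment-tracking tools such as mlflow fit), and the flagged cases would be examined to refine either the model or the evaluation cases themselves.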
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Flood Simulation and Forecasting based on Earth Observation and AI for Sustainable Planning of Climate Change Adaptation

Authors: Mariana Damova, Dr Stanko Stankov, Dr Emil Stoyanov, Hermand Pessek, Hristo Hristov
Affiliations: Mozaika
We will present one of the first use cases on the DestinE platform, a joint initiative of the European Commission, the European Space Agency and EUMETSAT that provides access to global Earth observation, meteorological and statistical data, and emphasize the good practice of intergovernmental agencies acting in concert. Further, we will discuss the importance of space-bound disruptive solutions for improving the balance between the ever-increasing water-related disasters caused by climate change and minimizing their economic and societal impact. The use case focuses on forecasting floods and estimating the impact of flood events on the urban environment and the ecosystems in the affected areas, with the purpose of helping municipal decision-makers to analyze and plan resource needs and to forge human-environment relationships by providing farmers with insightful information for improving their agricultural productivity. For the forecast, we will adopt an EO4AI method of our platform ISME-HYDRO, in which we employ a pipeline of neural networks applied to in-situ measurements and satellite data of meteorological factors influencing the hydrological and hydrodynamic status of rivers and dams, such as precipitation, soil moisture, vegetation index and snow cover, to model flood events and their span. The ISME-HYDRO platform is an e-infrastructure for water resources management based on linked data, extended with further intelligence that generates forecasts with the method described above, raises alerts, formulates queries, provides superior interactivity and drives communication with the users. It provides synchronized visualization of table views, graph views and interactive maps. It has been federated with the DestinE platform.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: DT-HEAT: A Digital Twin for Urban Heat Resilience

Authors: Iphigenia Keramitsoglou, Aphrodite Bouikidis, Aleš Urban, Jan Geletič, Klea Katsouyanni, Evangelia Samoli, Antonis Analitis, Alexandra Tragaki, Eleni Toli, Panagiota Koltsida, Nefta Votsi, Evangelos Gerasopoulos, Christos Zerefos, Stavros Solomos, Christos Spyrou, Sorin Cheval, Konstantinos Zervas, Evi Tzimpoula, Gaia Cipolletta, Chris Kiranoudis
Affiliations: National Observatory Of Athens, Czech University of Life Sciences in Prague, Institute of Computer Science of the Czech Academy of Sciences, Medical School, National Kapodistrian University of Athens, Harokopio University of Athens, Athena RC, Academy of Athens, National Meteorological Administration, New Metropolitan Attica, Serco
The intensifying impacts of heatwaves, particularly on public health and mortality, underscore the urgent need for innovative and collaborative solutions to enhance urban resilience. Climate change is amplifying the frequency and severity of these extreme events, posing unprecedented challenges to metropolitan areas and their most vulnerable populations. DT-HEAT, currently under development within the European Commission-funded CARMINE project and aligned with ESA's Destination Earth (DestinE) initiative, represents a cutting-edge response to these challenges. By leveraging digital twin technology, DT-HEAT provides a predictive and actionable platform to estimate heat-related mortality, support emergency response planning, and promote the integration of Nature-based Solutions (NbS) into urban landscapes. At its core, DT-HEAT combines high-resolution urban modeling, satellite data, real-time weather forecasts, mortality records, and socio-economic indicators to deliver comprehensive insights for both short-term and long-term planning. The tool is designed to empower stakeholders, from local and regional authorities to organizations serving and representing vulnerable populations, with the ability to anticipate heatwave impacts and implement targeted actions and interventions. Its dual focus on immediate heatwave response and long-term urban resilience ensures that cities are better equipped to protect their residents, minimize heat risk, and adapt to a changing climate. DT-HEAT exemplifies the power of European collaboration and the role of public-private partnerships in addressing complex environmental challenges. By bringing together technology providers, local governments, and academic institutions, the project has fostered a robust and user-centric ecosystem. Current developments in the Athens Metropolitan Area and Prague showcase the adaptability of DT-HEAT across diverse urban contexts.
In Athens, the tool is being tailored to address the city's intense heatwaves, exacerbated by dense urban structures and limited green spaces. Prague's deployment of DT-HEAT addresses both extreme heat and air quality in a historic, mixed-use urban environment. These case studies highlight the tool's flexibility and its potential for scaling to other cities, each with unique challenges and characteristics.
*Technical Development and DestinE Integration*
The technical foundation of DT-HEAT is now being transferred to ESA's DestinE platform, enabling it to leverage the platform's advanced capabilities. A user-friendly dashboard interface will provide policymakers and stakeholders with real-time and predictive insights into heatwave characteristics and impacts. The tool is powered by data streams from weather forecasts and climate simulations (from DestinE and the ECMWF/Climate Data Store, as well as from local partners providing downscaled data), which support short-term and long-term impact assessments, respectively. Short-term planning is based on a data-driven approach: historical datasets of daily environmental parameters, such as average and maximum temperature, along with deaths attributable to heat, are used to train a deep learning model. This model predicts next-day mortality with high performance, which will allow city officials to implement targeted emergency measures. This short-term mortality estimation is crucial for immediate heatwave management, enabling cities to allocate resources efficiently and save lives. The long-term planning component of DT-HEAT focuses on assessing the cumulative impact of heatwaves on urban populations and informing strategies to enhance resilience and reduce mortality. By simulating different urban planning scenarios, such as implementing different NbS, the tool provides insights to policymakers as to which solution will have the highest positive impact.
This dual capability of DT-HEAT, addressing both immediate and strategic needs, will support urban resilience.
*Community Engagement and Localized Solutions*
The CARMINE project places emphasis on stakeholder engagement by integrating Living Labs in all its case study areas, including the Athens Metropolitan Area and Prague. These collaborative spaces bring together local stakeholders, including municipal officials, community leaders, research and academia, and social service organisations, to contribute to the design and validation of digital solutions and to co-create urban resilience strategies. Implementation of DT-HEAT on the DestinE platform directly targets users who may not have extensive experience or familiarity with the data but are interested in gaining insights from it.
*Recognition and Vision*
DT-HEAT's recognition as the "Most Promising Proposal" at the 2nd DestinE Innovation Challenge organized by ESA underscores its transformative potential. The award highlights the project's innovative approach to integrating digital twin technology with urban resilience planning. This recognition also provides a platform for further collaboration and expansion with new features, opening doors to new partnerships and opportunities for scaling the tool to additional cities. The project's vision extends beyond addressing immediate challenges. By fostering international collaboration, engaging local communities, and aligning with global policy goals, DT-HEAT aims to contribute to the broader objective of building a Digital Twin Earth. This ambitious initiative seeks to revolutionize how we understand and manage our planet, providing a comprehensive and inclusive tool for sustainable development. DT-HEAT represents a significant step forward in addressing the escalating impacts of heatwaves. By integrating advanced technology, fostering collaboration, and engaging communities, the tool offers a scalable and adaptable solution for urban resilience.
Its deployment in the Athens Metropolitan Area and Prague provides valuable insights into its potential, while its alignment with ESA's DestinE platform ensures that it remains at the forefront of digital innovation. As cities worldwide face increasing heat-related challenges, DT-HEAT serves as a model for how collaborative, data-driven approaches can protect public health, enhance sustainability, and inspire global efforts to build resilient urban environments. Funded by the European Union (GA 101137851).
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Leveraging Destination Earth capability for Assessing Physical Climate Risks to the European Central Bank

Authors: Delphine Deryng, Andrej Ceglar
Affiliations: European Centre for Medium-Range Weather Forecasts, European Central Bank
Central banks play a pivotal role in ensuring financial stability, yet they face significant challenges in addressing the growing physical risks posed by climate change. These risks, including extreme weather events and long-term climate shifts, can disrupt economic activity, impair asset values, and destabilize the financial system. Here we explore how Destination Earth (DestinE) technology can be leveraged to assess physical climate risks with unprecedented precision and relevance for the European Central Bank. DestinE's digital twin technologies, such as high-resolution model simulations and on-demand scenarios, allow for the simulation of localized climate events under future radiative forcing scenarios, as well as interactive access to model outputs. By integrating DestinE into a climate risk framework tailored to central banks, exposure and vulnerability across the sectors and regions affecting the financial sector can be quantified in unprecedented detail, aligning risk assessments with macroeconomic modelling and stress-testing needs. This presentation will introduce the general climate data needs from a central bank perspective and discuss the main areas in which DestinE's Climate DT can make an unprecedented contribution, in particular the consideration of compound climate events and cascading impacts.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Dynamic Spin on a Digital Twin: Integrating Real-Time Weather, Land-Cover and Land-Use Changes in Landslide Hazard Assessment

Authors: Margit Kurka, Manuela Hirschmugl, Christian Bauer, Herwig Proske, Janik Deutscher
Affiliations: University of Graz - Department of Geography and Regional Science, Joanneum Research GmbH
Millions of people in Europe and worldwide live in highly landslide-prone areas¹ and are exposed to fatalities² and high economic loss. Global economic loss due to landslides amounts to several hundred billion Euros every year² ³. Europe faces the highest economic loss worldwide, and within Europe, Austria ranks high among the countries most affected by the consequences of landslides¹. Vast areas of Austria consist of mountainous and hilly terrain prone to gravitative mass movement due to the existing topographic and geological conditions. Current landslide susceptibility and hazard models primarily focus on static, causal parameters such as geology and morphology as predisposing factors of slope instability, often neglecting the temporal variability introduced by dynamic parameters such as extreme weather events, land-cover changes (e.g. deforestation), and land-use adaptations. This highlights the necessity of integrating dynamic variables into the modelling process to improve real-time response to high-intensity or long-duration precipitation events as well as to anthropogenic changes to land cover and morphology. Landslides are highly dynamic processes, and their monitoring and prevention therefore require a dynamic approach. The study presented here addresses this gap by developing a digital twin landslide susceptibility model, with a focus on rainfall-triggered sliding and flowing modes in unconsolidated hillside materials, to duplicate not only the static conditions responsible for such landslides but also the dynamic processes involved in causing and triggering them. The model area for the dynamic digital twin lies in the south of the state of Styria, Austria, where landslides are a frequent and widespread phenomenon, demonstrating that not only mountainous areas but also the less-studied foothills and foreland are often affected.
Gravitational mass movements in this area are driven by unique geological conditions, consisting of unconsolidated interlayered sands, silts, clays, gravels and marls. In the project area sufficient data is available from previous projects, including landslide maps and inventories as well as geological, meteorological and land-cover data. Globally, many regions experience increased rainfall intensity and duration due to climate change, causing experts to voice growing concerns about a heightened probability of, and risks from, landslides³. Considering that rainfall-triggered landslides cause the majority of landslide fatalities and high monetary losses² ⁴, these are relevant points when evaluating landslide hazard, whether viewed globally, nationally or regionally. In 2009 and 2023, intense rainfall triggered thousands of individual landslides within the project area. In the case of the August 2023 event, which plays a key role in this study, the low-pressure system ‘Zacharias’ was responsible for severe flooding and widespread landslide occurrences. The districts of Southeast Styria and Leibnitz were declared disaster zones, with over 3,000 landslide-related damage reports and an estimated loss exceeding 30 million Euros⁵. This is one example of the increase in landslide occurrences due to more frequent long-lasting or high-intensity precipitation events, observed increasingly all over the country. It shows that Austrian stakeholders, whether governmental or private, face the challenge of predicting, monitoring and preventing risk associated with landslide events in an ever-changing climatic and environmental setting. A major limitation in landslide modelling, and in providing well-founded landslide susceptibility predictions for regional planning and infrastructure protection, lies in the lack of detailed landslide inventories, particularly regarding the temporal occurrence of landslides⁶.
The implementation of dynamic parameters, such as precipitation and land-cover changes, depends on this knowledge. Herein lies the advantage of the chosen project area, where such data is available. The dynamic input parameters taken into account in the study are high-resolution meteorological data and land-cover data. Meteorological data is provided by the high-resolution simulation models of the ECMWF (European Centre for Medium-Range Weather Forecasts) developed in the framework of DestinE (Destination Earth), a flagship initiative of ESA and the European Commission to create a digital twin of the Earth in order to model, monitor and simulate natural phenomena, hazards and related human activities. The Weather-Induced Extremes Digital Twin (Extremes DT), one of the first two digital twins implemented within DestinE, provides forecasts and simulations at 2-4 km resolution and, on demand, even at 500 m resolution⁷, and is therefore extremely valuable for our task. Additionally, with regard to land-use data, dynamic EO products such as the Copernicus high-resolution layers, national data sets from the Green Transformation Information Factory Austria (GTIF-AT), land-use and land-cover information, soil moisture, and forest structural parameters from airborne LiDAR will be included in developing the dynamic digital twin. The GTIF-AT, for example, provides forest disturbances at high spatial and temporal resolution to be included in the forecasting. Airborne LiDAR data provides insights into the vertical structure of forests, which in turn is relevant for the forests’ protective function. The utilization of synergies with the DestinE flagship initiative is an integral part of the project. The ultimate goal is the development of an automated prototype that combines DestinE and local data in such a way that daily hazard maps can be provided, based on different scenarios such as changes in land cover or land use, additional preventive measures, or sudden extreme weather events.
Calibration, modelling and validation will be performed on data available for the August 2023 event in the south of Styria, since it offers a unique opportunity: a large number of landslides were mapped in the field, and the event was one of the first on-demand high-resolution use cases of the Extremes DT, with data already available at ECMWF⁷. The project is carried out by a multi-disciplinary team with partners from research institutions (University of Graz, Joanneum Research GmbH) and a civil-engineering SME based in the region (Lugitsch und Partner Ziviltechniker GmbH), thus bundling expertise in landslide susceptibility and hazard modelling, programming, geology, forestry, remote sensing, meteorology, engineering and construction. Stakeholders such as the Austrian railway company (OEBB) are aware of the project's goals and underline the necessity of moving from static landslide susceptibility and hazard maps towards dynamic products for prediction and for advancing the functionality of early warning systems. As one example among many, the influence of landslides on railway infrastructure illustrates the cross-regional and international relevance of the presented study.
References
¹ Haque, U.; Blum, P.; Da Silva, P. F.; Andersen, P.; Pilz, J.; Chalov, S. R.; Malet, J.-P.; Auflič, M. J.; Andres, N.; Poyiadji, E.; Lamas, P. C.; Zhang, W.; Peshevski, I.; Pétursson, H. G.; Kurt, T.; Dobrev, N.; García-Davalillo, J. C.; Halkia, M.; Ferri, S.; Gaprindashvili, G.; Engström, J.; Keellings, D. 2016. Fatal landslides in Europe. Landslides. 13, pp. 1545–1554.
² Froude, M. J.; Petley, D. N. 2018. Global fatal landslide occurrence from 2004 to 2016. Nat. Hazards Earth Syst. Sci. 18, pp. 2161–2181.
³ Marín-Rodríguez, N. J.; Vega, J.; Zanabria, O. B.; González-Ruiz, J. D.; Botero, S. 2024. Towards an understanding of landslide risk assessment and its economic losses: a scientometric analysis. Landslides. 21, pp. 1865–1881.
⁴ Haque, U.; Da Silva, P.
F.; Devoli, G.; Pilz, J.; Zhao, B.; Khaloua, A.; Wilopo, W.; Andersen, P.; Lu, P.; Lee, J.; Yamamoto, T.; Keellings, D.; Wu, J.-H.; Glass, G. E. 2019. The human cost of global warming: Deadly landslides and their triggers (1995–2014). Science of The Total Environment. 682, pp. 673–684.
⁵ Wind, H.-P.; Urbanitsch, A. 2023. Verbal communication of extent of approximate landslide damages and costs by members of Land Steiermark.
⁶ Brenning, A. 2005. Spatial prediction models for landslide hazards: review, comparison and evaluation. Nat. Hazards Earth Syst. Sci. 5, pp. 853–862.
⁷ Gascón, E.; Sandu, I.; Vannière, B.; Magnusson, L.; Forbes, R.; Polichtchouk, I.; van Niekerk, A.; Sützl, B.; Maier-Gerber, M.; Diamantakis, M.; Bechtold, P.; Balsamo, G. 2023. Advances towards a better prediction of weather extremes in the Destination Earth initiative. EMS Annual Meeting Abstracts. 20. EMS2023-659.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: D.02.11 - POSTER - Super-resolution in Earth Observation: The AI change of paradigm

The design of the Sentinel-2 sensor, with spatial resolutions of 10 m, 20 m and 60 m for different spectral bands, considered in the context of the resources now offered by deep learning methods, was a key turning point for the field of super-resolution. Spatial resolution is a characteristic of the imaging sensor, i.e. the bandwidth of its transfer function; super-resolution means enlarging the range of spatial frequencies and thus the bandwidth of the transfer function. In classical approaches this was treated mainly in two ways: i) as an ill-posed inverse problem, with solutions constrained by strong hypotheses that are very seldom fulfilled in actual practical cases; ii) based on physical models, as in pansharpening, the design of optical sensors with a half-pixel shift in the array, or, in the case of SAR, wave-number tessellation or the use of information from the side lobes of multistatic SAR. In reality, super-resolution is a much broader area: it may also refer to the wavelength bandwidth for multi- or hyperspectral sensors, the radiometric resolution, the characterization of single-pixel cameras based on compressive sensing, 3D estimation in SAR tomography, an enhanced “information” resolution (e.g., instead of counting trees in very high resolution, estimating tree density from a low-resolution observation), or enhanced resolution of ocean wind estimation from SAR observations.

With the advent of deep learning, super-resolution entered a new era. Deep models with huge numbers of parameters, trained on big data sets, opened a new alternative for super-resolution: data prediction applied to a low-resolution sensor by training a model with high-resolution data. The new paradigm no longer requires strong hypotheses, but suffers from the black-box syndrome of deep learning. Thus, new methods are required, such as hybrid methods using the sensor image formation models, deriving consistency criteria for the physical parameters, and verifying cal/val criteria for the super-resolved products. The session invites submissions for any type of EO data and will address these new challenges for the Copernicus and Earth Explorer or related sensors.
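The training-pair construction and consistency criterion mentioned above can be sketched in a few lines. This is an illustration only: a simple block average stands in for the true sensor degradation model, and the function names are ours, not from any of the presented methods.

```python
import numpy as np

def degrade(img, factor=2):
    """Simulate a low-resolution acquisition by block-averaging
    (a crude stand-in for the sensor's transfer function)."""
    h, w = img.shape
    return img[:h - h % factor, :w - w % factor] \
        .reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def consistency_error(sr_img, lr_obs, factor=2):
    """Consistency criterion: the super-resolved product, degraded back
    to the sensor resolution, should reproduce the original observation."""
    return float(np.abs(degrade(sr_img, factor) - lr_obs).mean())

# Build a training pair from a high-resolution reference:
hr = np.random.rand(64, 64)
lr = degrade(hr)            # model input; hr is the training target

# A perfectly consistent SR output has zero residual against the LR input:
assert consistency_error(hr, lr) == 0.0
```

The same re-degradation step doubles as a cal/val check on a super-resolved product: a large residual against the actual low-resolution observation flags a synthesis that is not physically consistent.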

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Hyperspectral Earth Observation for Sustainability: Enhancing EnMAP Data Spatial Resolution through Deep Neural Network Fusion with Sentinel-2 Imagery.

Authors: Pierre-Laurent Cristille, Jeronimo Bernard-Salas, Nick Cox, Emmanuel Bernhard, Antoine Mangin
Affiliations: ACRI-ST, CERGA, INCLASS Common Laboratory, Institut d’Astrophysique Spatiale (IAS)
The transition from Earth observation to actionable insights for climate and sustainability necessitates advancements in remote sensing technologies. Hyperspectral imaging has emerged as a cornerstone in this domain, offering unparalleled spectral richness across a wide range of wavelengths. Such data enable precise material characterization and monitoring of environmental changes. However, the relatively coarse spatial resolution of hyperspectral sensors, such as the Environmental Mapping and Analysis Program (EnMAP), limits their applicability in scenarios demanding fine spatial detail. This study proposes a novel image fusion framework, driven by a deep neural network trained on fully synthetic data, that enhances the spatial resolution of EnMAP hyperspectral data by integrating it with high-resolution multispectral images from Sentinel-2. EnMAP is a cutting-edge spaceborne hyperspectral mission capturing 218 spectral bands across the 420–2450 nm wavelength range. While its spectral fidelity supports diverse applications in environmental and land-use studies, its 30-meter spatial resolution restricts detailed spatial analyses. On the other hand, Sentinel-2, part of the European Union’s Copernicus Program, offers 10-meter spatial resolution but with limited spectral coverage, including only four spectral bands at this resolution. The complementary nature of these datasets forms the basis for multi- and hyperspectral fusion, enabling the creation of products with both high spatial and high spectral resolution. A critical challenge in supervised learning for image fusion is the lack of high-resolution hyperspectral ground truth data. To address this, we employed a Linear Mixing Model (LMM) to generate synthetic datasets. The LMM is a widely used approach in remote sensing, effectively blending the spectral properties of EnMAP with the spatial details of Sentinel-2.
Specifically, we synthesized 10-meter EnMAP-like images by spatially enhancing EnMAP spectral data with Sentinel-2 spatial features. The resulting mixture combines abundance maps extracted from Sentinel-2 with representative spectra obtained by unmixing an EnMAP image covering the same area. These images were then degraded back to 30 meters to mimic the original EnMAP resolution. Additionally, the corresponding 10-meter Sentinel-2 images were simulated by integrating the ground truth with the sensor's spectral response functions (SRF); the 20- and 60-meter bands were also simulated by degrading the 10-meter integrated product. This approach resulted in a representative dataset, covering all continents in every season, comprising simulated 30-meter EnMAP images, their 10-meter Sentinel-2 counterparts, and the synthetic 10-meter EnMAP ground truth. We designed a transformer-based autoencoder network to address the fusion challenge, leveraging its self-attention mechanism to model both local and global spectral-spatial dependencies. Transformers have revolutionized natural language processing and computer vision, and their application in hyperspectral image fusion represents a novel advancement. The network’s encoder processes the input data, comprising low-resolution hyperspectral and high-resolution multispectral images, into compact feature representations. These features are then decoded to reconstruct a high-resolution hyperspectral image with enhanced spatial and spectral quality. The training process incorporated a custom loss function designed to balance spatial fidelity and spectral accuracy. Key components of the loss function included: 1. Spectral Reconstruction Loss: Ensures alignment of spectral signatures between the fused image and the high-resolution ground truth. 2. Spatial Sharpness Loss: Promotes spatial clarity by penalizing deviations from the high-resolution details in the Sentinel-2 data. 3.
Spectral Integrity Constraint: Regularizes the output to maintain consistency with EnMAP’s spectral profiles. Preliminary results demonstrate the efficacy of the proposed framework in enhancing EnMAP’s spatial resolution to 10 meters. The fused images exhibit a significant improvement in spatial sharpness while maintaining spectral integrity. Quantitative metrics, including Root Mean Squared Error (RMSE), Peak Signal-to-Noise Ratio (PSNR) and Spectral Angle Mapper (SAM), confirm the superior performance and generalization capabilities of the model compared to baseline methods. Additionally, qualitative spectral analyses reveal the model’s ability to accurately capture fine spatial details and preserve the spectral signatures critical for remote sensing applications. This fusion framework has far-reaching implications for Earth observation and sustainability. By addressing the limitations of spatial resolution in hyperspectral imaging, the proposed method enables detailed environmental monitoring, improved land-use classification, and precise assessment of climate-related phenomena. Potential applications include: • Agriculture: Enhanced hyperspectral data can improve crop health monitoring, soil analysis, and precision farming practices. • Forestry: Fused images enable detailed assessments of forest density, species distribution, and deforestation patterns. • Water Resources: The improved resolution facilitates monitoring of water quality and aquatic ecosystems. • Urban Development: High-resolution hyperspectral data supports urban planning, infrastructure monitoring, and pollution analysis. Furthermore, the fusion process demonstrates the synergy between multispectral and hyperspectral remote sensing, paving the way for future missions integrating both technologies. The transformer-based architecture employed in this study also highlights the potential of deep learning to tackle complex challenges in Earth observation.
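A loss with the three components named above could, for instance, be combined as a weighted sum. The sketch below is illustrative only: the weights, the gradient-based sharpness term, and the SAM-based integrity term are our assumptions, not the abstract's actual formulation.

```python
import numpy as np

def sam(a, b, eps=1e-8):
    """Spectral Angle Mapper between per-pixel spectra of (H, W, B) cubes, in radians."""
    dot = (a * b).sum(-1)
    denom = np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1) + eps
    return float(np.arccos(np.clip(dot / denom, -1.0, 1.0)).mean())

def fusion_loss(pred, truth, ref_spectra, w=(1.0, 0.5, 0.1)):
    """Hypothetical composite loss: reconstruction + sharpness + spectral integrity."""
    # 1. spectral reconstruction loss (L1 against the high-resolution ground truth)
    spectral = float(np.abs(pred - truth).mean())
    # 2. spatial sharpness loss: penalise deviation of high-frequency (gradient) content
    gx = np.diff(pred, axis=1) - np.diff(truth, axis=1)
    gy = np.diff(pred, axis=0) - np.diff(truth, axis=0)
    sharpness = float(np.abs(gx).mean() + np.abs(gy).mean())
    # 3. spectral integrity: keep output angles close to reference (EnMAP-like) spectra
    integrity = sam(pred, ref_spectra)
    return w[0] * spectral + w[1] * sharpness + w[2] * integrity
```

In a real training loop each term would be a differentiable tensor operation; the numpy version only shows how the three criteria trade off in one scalar objective.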
The outcomes of this research contribute directly to climate action and sustainability goals by enhancing the usability of hyperspectral data. Improved spatial resolution enables detailed analysis of phenomena such as land-use changes, urban expansion, and ecosystem degradation. These insights can inform policy decisions, resource management strategies, and climate adaptation measures. By transforming EnMAP hyperspectral data into high-resolution products, this work supports actionable insights for monitoring environmental changes, assessing ecosystem health, and promoting sustainable development. The integration of advanced machine learning techniques with remote sensing underscores the role of interdisciplinary approaches in addressing global challenges.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Sentinel-2 Super-Resolution With Geolocation-Aware Generative Models

Authors: Ksenia Bittner
Affiliations: German Aerospace Center (DLR)
Recent advances in remote sensing technologies have provided a wealth of satellite imagery, enabling applications in areas such as urban planning, disaster management, environmental monitoring, and resource management. Among these applications, building segmentation is particularly critical, especially for rapidly urbanizing regions. High-resolution imagery is crucial for accurately mapping and analyzing structures; however, limitations in publicly available satellite imagery, such as Sentinel-2, pose challenges due to their relatively low spatial resolution (10–20 m per pixel). To overcome these limitations, super-resolution techniques have emerged as essential tools to enhance image quality and enable finer detail extraction from low-resolution images. Generative Adversarial Networks (GANs) have previously been employed for super-resolution of remote sensing images, but their application has largely been confined to limited datasets, for example NAIP imagery, which covers only regions within the United States. Models trained on these datasets often fail to generalize effectively to global regions, producing suboptimal results when applied elsewhere. Moreover, when attempting to upscale large areas using tiling techniques, noticeable patch artifacts often emerge, degrading the overall quality of the output. In recent years, the use of prior information through embeddings has gained popularity, most notably with the success of text embeddings in various tasks. Building on this trend, there has been growing interest in leveraging geographic context via location embeddings. In this work, we introduce a novel approach that leverages location embeddings to enhance super-resolution models. Additionally, we improve the GAN’s performance by incorporating techniques used in diffusion models. We also conduct experiments to address common patching issues caused by tiling, drawing inspiration from recent advancements in seamless image synthesis.
We can summarize our contributions as follows: 1. We develop the first location-guided super-resolution model for remote sensing, designed to enhance generalization across diverse geographic regions by integrating location embeddings directly into the model. 2. We improve a GAN-based super-resolution model’s architecture by integrating attention mechanisms to improve the scalability and context understanding of the model. 3. We adapt a seamless image synthesis method to super-resolution, tackling common tiling-artifact problems by incorporating neighboring image data and ensuring the generation of continuous, high-resolution satellite imagery. 4. We showcase the transformative potential of the generated 1 m resolution super-resolved Sentinel-2 imagery by successfully applying it to tasks such as building footprint extraction. Using the enhanced imagery, we directly infer binary building masks, achieving superior performance in downstream tasks compared to previously developed super-resolution methods. This demonstrates the significant advantages of our approach in delivering actionable, high-precision results for practical applications. Our results demonstrate the super-resolution of low-resolution satellite imagery, marking a key step forward in the application of publicly available datasets for global-scale remote sensing tasks. By enhancing the spatial resolution of Sentinel-2 imagery by a factor of 10, we unlock new possibilities for precise and scalable applications across a variety of remote sensing fields. This breakthrough not only maximizes the utility of existing satellite data but also reduces the dependency on launching new, high-cost sensors, promoting sustainability in Earth observation. Furthermore, our methodology establishes a robust foundation for future research and operational integration in geospatial analytics, highlighting the transformative potential of AI-driven approaches in addressing global challenges.
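One common way to inject geographic context, sketched below, is to encode latitude/longitude with sinusoidal features and concatenate the code to a feature map. The encoding scheme, embedding dimension, and function names are our assumptions for illustration, not the authors' actual design.

```python
import numpy as np

def location_embedding(lat, lon, dim=32):
    """Sinusoidal encoding of a (lat, lon) pair, analogous to transformer
    positional encodings; `dim` (multiple of 4) is an assumed hyperparameter."""
    freqs = 2.0 ** np.arange(dim // 4)           # geometric frequency ladder
    coords = np.array([np.radians(lat), np.radians(lon)])
    phases = coords[:, None] * freqs[None, :]    # shape (2, dim/4)
    return np.concatenate([np.sin(phases), np.cos(phases)], axis=None)

def condition_features(features, lat, lon):
    """Broadcast the location code over the spatial grid and concatenate it
    to the channel axis of an (H, W, C) feature map."""
    emb = location_embedding(lat, lon)
    h, w, _ = features.shape
    tiled = np.broadcast_to(emb, (h, w, emb.size))
    return np.concatenate([features, tiled], axis=-1)

feats = np.random.rand(16, 16, 64)
out = condition_features(feats, 48.2, 16.4)   # e.g. a tile near Vienna
assert out.shape == (16, 16, 64 + 32)
```

In a GAN generator the concatenated channels would feed the next convolutional stage, letting the same weights specialize their output by region.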

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Sharper Insights: Enhancing Agricultural and Environmental Monitoring with Sentinel-2 Super-Resolution

Authors: Andreas Walli, Dr. Michael Riffler
Affiliations: Geoville
Sentinel-2, with its multispectral imaging capabilities and high revisit frequency, has become a cornerstone for Earth observation across diverse applications such as agriculture, urban planning, forestry, and environmental monitoring. However, its native spatial resolution of 10–60 meters can limit its effectiveness in scenarios requiring finer spatial detail. Super-resolution techniques, which enhance the resolution of Sentinel-2 data using advanced algorithms like deep learning or data fusion, address this limitation by generating higher-detail imagery without the need for new satellite missions. This enhanced resolution unlocks new possibilities, enabling more accurate land-cover and land-use classification, improved crop monitoring, detailed urban analysis, and better support for disaster response. The central concept of our approach is to utilize ground truth data at a finer spatial resolution than Sentinel-2's native 10 m resolution, preferably high-quality vector data that can be rasterized to any desired resolution. This ensures that the model can access detailed, precise reference data, essential for effective super-resolution learning. For the modelling, we use a U-Net, a well-known convolutional neural network architecture designed for image segmentation, trained using the Sentinel-2 time-series data as input. The model computes the loss by comparing its predictions to the super-resolution ground truth layer, optimizing its performance through iterative updates. The U-Net's architecture is well suited to this task due to its symmetrical design, where the encoding path captures contextual information and the decoding path reconstructs finer details. By incorporating additional decoding layers relative to the encoding layers, the network is deliberately configured to upscale the resolution.
This modification allows the model to recover and enhance spatial detail, effectively producing a finer resolution within the model itself rather than through post-processing. Our results demonstrate that the model can capture and refine prominent geometric features, such as field boundaries, tree rows, and other structural elements within the landscape. The U-Net's ability to detect and preserve geometric patterns is leveraged, allowing it to reconstruct high-resolution details that align closely with the underlying ground truth. This approach enhances spatial resolution while maintaining the integrity of critical landscape features, offering a robust solution for generating high-resolution imagery from lower-resolution satellite data. We have implemented this approach to detect sub-field agricultural practices in the Common Agricultural Policy (CAP) Area Monitoring Services for Austria and Wallonia and in the Copernicus Land Monitoring Service (CLMS) production of the Small Landscape Features (Small Woody Features) layers. The extracted super-resolution features are not just a technical achievement; they are crucial for these applications, demonstrating the real-world impact of our work. By bridging the gap between high-resolution satellites and very-high-resolution alternatives, super-resolution expands the utility of Sentinel-2 data, making it a cost-effective solution for precision-driven applications.
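The idea of adding extra decoding stages can be seen as a shape-level skeleton: with one more upsampling step than there are downsampling steps, the output grid is finer than the input. The sketch below is illustrative only; learned convolutions are replaced by simple pooling/upsampling stand-ins, and the function names are ours.

```python
import numpy as np

def down(x):
    """Encoder step: 2x average pooling (stand-in for conv + pool)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(x):
    """Decoder step: 2x nearest-neighbour upsampling (stand-in for up-conv)."""
    return np.kron(x, np.ones((2, 2)))

def toy_sr_unet(x, depth=2, extra=1):
    """Shape skeleton of the described U-Net variant: `extra` additional
    decoding stages relative to the encoding path, so the output grid is
    finer than the input (weights and nonlinearities omitted)."""
    skips = []
    for _ in range(depth):          # contracting (encoding) path
        skips.append(x)
        x = down(x)
    for s in reversed(skips):       # symmetric decoding path with skip links
        x = up(x) + s
    for _ in range(extra):          # extra decoder stages -> super-resolution
        x = up(x)
    return x

y = toy_sr_unet(np.random.rand(32, 32))
assert y.shape == (64, 64)          # 2x finer grid than the 32x32 input
```

The point of the skeleton is only the resolution bookkeeping: each `extra` stage doubles the output grid, which is where the upscaling happens inside the network rather than in post-processing.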

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Super-resolution in Earth Observation: The AI change of paradigm

Authors: Prof. Mihai Datcu, Prof. Andrei Anghel, lecturer Mihai Coca, Prof. Daniela Coltuc, lecturer Cosmin Danisor, Dr. Ing. Omid Ghozatlou, Ing. Vlad
Affiliations: POLITEHNICA Bucharest
Earth Observation (EO) images are records of the electromagnetic signature of the observed scene represented in a 4-dimensional (4D) space: space (geographic), wavelength or polarization, and time. Each EO sensor and mission has a number of parameters defining the scales and recording intervals in the 4D space, i.e. the spatial separation of the finest image detail, the sensing spectral wavelength band and its bandwidth, and the time interval between successive image acquisitions and the duration of the record. In its classical and most popular sense, super-resolution refers to the spatial representation, the limit of “distinguishability” of spatial details of the observed scene. In this presentation we discuss and define “resolution” as a property of the EO sensor/instrument and overall mission, addressing the three aspects of spatial, spectral, and temporal resolution. These aspects are exemplified with recent results for the cases of Sentinel-1 and Sentinel-2. The presentation also summarizes the main super-resolution methodologies, encompassing techniques ranging from ill-posed inverse problems, exploitation of pixel shifts, SAR wave-number tessellation, and the use of information from the side lobes of multistatic SAR to PSInSAR and TomoSAR. With the advent of deep learning, super-resolution entered a new era. Deep models with huge numbers of parameters, trained with big data sets, opened a new alternative for super-resolution: data prediction applied to a low-resolution sensor by training a model with high-resolution data. The new paradigm no longer requires strong hypotheses, but suffers from the black-box syndrome of deep learning. Thus, new methods are required, such as hybrid methods using the sensor image formation models, deriving consistency criteria for the physical parameters, and verifying cal/val criteria for the super-resolved products.
The spatial resolution of an optical sensor is a property of the imaging system, its spatial frequency bandwidth, the Modulation Transfer Function (MTF). The classic example of super-resolution is to combine several low-resolution images, slightly shifted, into a unique image, larger and with more detail. If the observed scene does not change, the super-resolved image contains actual, trustworthy information. In the frequency domain, super-resolution means the restoration of the scene's high frequencies. Thus, super-resolution is an inverse problem: we need a forward model, which is the image formation model, that is inverted by computation [Farsiu2004]. The model inversion is often an ill-conditioned problem. In order to restrict the solution space, one needs to add regularizations, which are often independent of the measured image and bring some prior knowledge about the image. The resulting super-resolved image is trustworthy only if all of the hypotheses used hold. Most recent methods address the super-resolution task through the lens of neural network prediction, directly modeling the output as a high-resolution image conditioned on the low-resolution input counterpart. The method proposed in [Lanaras2018] can be regarded as a precursor to prediction-based super-resolution: two different CNNs predict up-sampled versions of the 60 m bands and 20 m bands, respectively, from Sentinel-2 images. By constructing input-output training pairs using synthetic degradation, a network is trained using reconstruction losses such as the ℓ1-norm. [Vasilescu2023a] proposes a multi-objective loss for controlling the consistency-synthesis characteristic of the final model, guiding the output high-frequency features towards the high-frequency spatial details of the available high-resolution bands. The prediction is evaluated based on Wald's protocol [Wald1997], using the sensors’ MTF, to measure the reconstruction error on degraded inputs.
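The classic multi-frame case described above can be sketched in an idealised, noise-free setting: several low-resolution frames with known sub-grid shifts interleave exactly onto the fine grid. The function names are ours, and real shifts are sub-pixel and unknown, which is what makes the practical problem ill-posed.

```python
import numpy as np

def acquire(scene, dy, dx, f=2):
    """Simulate one low-resolution frame: shifted decimation of the
    high-resolution scene (shift expressed in HR pixels)."""
    return scene[dy::f, dx::f]

def shift_and_add(frames, f=2):
    """Classic multi-frame super-resolution: interleave the shifted
    low-resolution frames back onto the fine grid."""
    h, w = frames[(0, 0)].shape
    hr = np.zeros((h * f, w * f))
    for (dy, dx), lr in frames.items():
        hr[dy::f, dx::f] = lr
    return hr

scene = np.random.rand(8, 8)
frames = {(dy, dx): acquire(scene, dy, dx)
          for dy in range(2) for dx in range(2)}
restored = shift_and_add(frames)
assert np.array_equal(restored, scene)   # exact recovery when shifts are known
```

Exact recovery holds only because the scene is static and the shifts are known and noise-free; with unknown shifts, blur and noise, the same problem becomes the regularized inversion discussed in the text.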
[Nguyen2021] and [Vasilescu2023b] explicitly include an MTF-based operator as a final layer, with characteristics dependent on the spectral bands being up-sampled. These methods can also be used for super-resolving images acquired by another sensor. Spectral super-resolution is a technique used to enhance or recover missing or low-resolution multispectral or hyperspectral bands. By combining both spectral and spatial super-resolution techniques, a more comprehensive and accurate hyperspectral image can be generated. This integrated approach allows for the recovery of both detailed spectral bands and improved spatial detail, resulting in high-quality data, as in [Neagoe2023], where corrupted, missing or unobserved high-resolution pixels are predicted using a U-Net model. Temporal super-resolution plays a crucial role in Earth observation by predicting and reconstructing missing or unobserved data signatures, such as cloud-covered areas. These techniques utilize both predictive and generative models to enhance the temporal resolution of satellite images. Predictive models are trained on existing datasets to forecast outcomes, such as identifying cloud-free regions based on historical patterns. Generative models, on the other hand, learn the underlying patterns or distributions of the data to generate new samples similar to the missing data. In the context of cloud removal, generative models such as generative adversarial networks (GANs) are employed to synthesize high-quality, cloud-free images by learning from past observations.
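The MTF-based degradation used in Wald-style evaluation and in MTF-matched output layers is often modelled as a Gaussian blur whose width is set by the band's MTF value at the Nyquist frequency. The sketch below assumes this common Gaussian MTF model; the nominal MTF value, the sigma scaling by the resolution ratio, and the function names are our assumptions, not taken from the cited papers.

```python
import numpy as np

def mtf_sigma(mtf_nyquist):
    """Std-dev (in pixels) of a Gaussian PSF whose frequency response equals
    `mtf_nyquist` at Nyquist, under the model MTF(f) = exp(-2*pi^2*sigma^2*f^2)."""
    return np.sqrt(-2.0 * np.log(mtf_nyquist)) / np.pi

def gaussian_kernel(sigma):
    radius = int(3 * sigma + 0.5) + 1
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def mtf_degrade(img, mtf_nyquist=0.3, factor=2):
    """MTF-matched degradation: separable Gaussian blur, then decimation.
    Sigma is scaled by `factor` to express the LR-referenced width in HR pixels
    (an assumption of this sketch)."""
    k = gaussian_kernel(mtf_sigma(mtf_nyquist) * factor)
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    return blurred[::factor, ::factor]

lr = mtf_degrade(np.random.rand(32, 32))
assert lr.shape == (16, 16)
```

Pairs produced this way (original image as target, MTF-degraded image as input) are exactly the synthetic-degradation training data described for the prediction-based methods above.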
In the line of super-resolution obtained by increasing the system’s bandwidth (true super-resolution), bistatic SAR imaging systems with a stationary receiver and a spaceborne transmitter of opportunity (e.g., TerraSAR-X, ERS-2/ENVISAT, or GNSS) open the possibility to image the same area using data bursts belonging to multiple subswaths that correspond to different azimuth bandwidths [Anghel2019]. The available multiburst data can be used in various ways for target characterization by exploiting enhanced azimuth diversity. An essential benefit of using multiple apertures in spaceborne-transmitter/stationary-receiver bistatic SAR is the possibility of obtaining an enhanced azimuth resolution with access to only publicly available information about the data. Moreover, Sentinel-1 does not operate in spotlight mode and cannot provide very good azimuth resolution. [Rosu2020] introduced a methodology developed to increase azimuth resolution by exploiting multiaperture bistatic data acquired in a spaceborne transmitter–stationary receiver configuration. The procedure uses as input several continuous groups of range-compressed pulses (from one or more bursts) and consists of the following steps: compensation of the antenna pattern, resampling in the slow-time domain, and reconstruction of the missing azimuth samples between neighboring groups of pulses using an autoregressive model. The obtained multiaperture range image (with enhanced azimuth bandwidth) is focused on a 2-D grid using a back-projection algorithm. The approach was evaluated with real bistatic data acquired over an area of Bucharest, Romania. Persistent scatterer SAR interferometry is the technique that measures scene deformation from multitemporal observations with subwavelength accuracy. A parametric phase model estimates persistent scatterers, points with stable electromagnetic properties. The method usually reaches an accuracy on the order of a few mm/year for deformation rates.
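The autoregressive reconstruction step described above can be sketched on a toy signal: fit AR coefficients on the observed samples, then recurse forward into the gap. This is an illustration with assumed function names and a least-squares fit; the actual method operates on range-compressed SAR pulses with antenna-pattern compensation and slow-time resampling beforehand.

```python
import numpy as np

def fit_ar(x, order):
    """Least-squares AR fit: find a so that x[n] ~ sum_k a[k] * x[n-1-k]."""
    rows = np.array([x[n - order:n][::-1] for n in range(order, len(x))])
    a, *_ = np.linalg.lstsq(rows, x[order:], rcond=None)
    return a

def ar_extrapolate(x, n_missing, order=8):
    """Reconstruct `n_missing` samples following the observed block `x`
    by recursing the fitted AR model forward."""
    a = fit_ar(x, order)
    out = list(x)
    for _ in range(n_missing):
        out.append(float(np.dot(a, out[-1:-order - 1:-1])))
    return np.array(out[len(x):])

# Sanity check on a noiseless sinusoid (representable by a low-order AR model):
t = np.arange(64)
sig = np.sin(0.2 * t)
pred = ar_extrapolate(sig[:48], 16, order=8)
assert np.allclose(pred, sig[48:], atol=1e-5)
```

For narrowband, slowly varying azimuth spectra the same recursion bridges the gap between neighboring pulse groups, which is what restores the enhanced azimuth bandwidth before back-projection focusing.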
The method is enhanced by separating pixels containing persistent scatterers from those severely affected by noise, based on a statistical decision test [Danisor2023]. Further, SAR tomography, starting from a multitemporal dataset, estimates the scene’s reflectivity profile in dimensions additional to the 2D focusing plane of the SAR images, such as elevation and deformation velocity, enabling a more accurate study of the scene’s parameters. Besides the reflectivity-profile estimation challenge, another aspect is the detection of stable targets; the main feature of SAR tomography is the detection of multiple scatterers within the same resolution cell, leading to a 3D representation.
References
[Farsiu2004] Farsiu, S., et al. "Advances and challenges in super-resolution." International Journal of Imaging Systems and Technology, 14(2), 2004
[Lanaras2018] Lanaras, C., et al. Super-resolution of Sentinel-2 images: Learning a globally applicable deep neural network. ISPRS JPRS, 146:305–319, 2018
[Nguyen2021] Nguyen, H. V., et al. Sentinel-2 sharpening using a single unsupervised convolutional neural network with MTF-based degradation model. IEEE JSTARS, vol. 14, pp. 6882–6896, 2021
[Vasilescu2023a] Vasilescu, V., Datcu, M., and Faur, D. A CNN-based Sentinel-2 image super-resolution method using multiobjective training. IEEE TGRS, vol. 61, pp. 1–14, 2023
[Vasilescu2023b] V. Vasilescu, M. Datcu and D. Faur, "Sentinel-2 60-m Band Super-Resolution Using Hybrid CNN-GPR Model," IEEE GRSL, vol. 20, pp. 1-5, 2023
[Anghel2019] A. Anghel, R. Cacoveanu, A.-S. Moldovan, B. Rommen and M. Datcu, "COBIS: Opportunistic C-Band Bistatic SAR Differential Interferometry," IEEE JSTARS, vol. 12, no. 10, pp. 3980-3998, 2019
[Rosu2020] F. Rosu, A. Anghel, R. Cacoveanu, B. Rommen and M. Datcu, "Multiaperture Focusing for Spaceborne Transmitter/Ground-Based Receiver Bistatic SAR," IEEE JSTARS, vol. 13, pp.
5823-5832, 2020 [Wald1997] Wald, L., Ranchin, T., and Mangolini, M., “Fusion of satellite images of different spatial resolutions: Assessing the quality of resulting images”, PERS, 1997, 63 (6), pp. 691-699, 1997 O. Ghozatlou and M. Datcu, "Hybrid Gan and Spectral Angular Distance for Cloud Removal," 2021 IEEE IGARSS, Brussels, Belgium, pp. 2695-2698, 2021 [Dong2016] C. Dong, C. C. Loy, K. He and X. Tang, "Image Super-Resolution Using Deep Convolutional Networks," in IEEE PAMI, vol. 38, no. 2, pp. 295-307, 2016 [Hu2022] J. -F. Hu, et al, "Hyperspectral Image Super-Resolution via Deep Spatiospectral Attention Convolutional Neural Networks," in IEEE TNNLS, vol. 33, no. 12, pp. 7251-7265, 2022 [Neagoe2023] I. C. Neagoe, D. Faur, C. Vaduva and M. Datcu, "Band Reconstruction Using a Modified UNet for Sentinel-2 Images," in IEEE JSTARS, vol. 16, pp. 6739-6757, 2023 [Danisor2023] C. Dănişor, A. Pauciullo, D. Reale and G. Fornaro, "Detection of Distributed Scatterers in Multitemporal SAR Interferometry: A Comparison Between CAESAR and SqueeSAR Detectors," in IEEE TGRS, vol. 61, pp. 1-15, 2023
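The autoregressive reconstruction step described above can be illustrated with a minimal numpy sketch (an illustrative toy, not the implementation from [Rosu2020]): an AR model is fitted by least squares to a known block of azimuth samples and then used to predict the missing samples between neighboring pulse groups. Real SAR data would be complex-valued; a real sinusoid stands in here.

```python
import numpy as np

def fit_ar(x, p):
    # Least-squares fit of AR(p) coefficients: x[n] ~ sum_k a[k] * x[n-1-k]
    rows = np.array([x[n - p:n][::-1] for n in range(p, len(x))])
    a, *_ = np.linalg.lstsq(rows, x[p:], rcond=None)
    return a

def ar_extrapolate(x, a, n_missing):
    # Recursively predict the n_missing samples that follow the known block x
    buf = list(x)
    p = len(a)
    for _ in range(n_missing):
        buf.append(np.dot(a, buf[-p:][::-1]))
    return np.array(buf[len(x):])
```

A pure sinusoid is modeled exactly by an AR(2) process, which makes the gap-filling behavior easy to verify before applying the idea to range-compressed pulse groups.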
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Magnifying Change: A Deep Learning Approach for Multi-Sensor, Multi-Resolution Satellite Imagery

Authors: Maria Sdraka, Dimitrios Michail, Prof. Ioannis Papoutsis
Affiliations: Orion Lab, National Technical University Of Athens & National Observatory Of Athens, Harokopio University of Athens
Change detection is a widely applied technique in the field of geospatial analysis, supporting critical applications in environmental monitoring, disaster response and urban planning [1], [2], [3]. Its aim is to identify and analyse particular changes in the surface of the Earth over time, while accounting for irrelevant variation in the data such as atmospheric phenomena (e.g. fog, clouds, dust), sunlight incidence angle, vegetation growth, and geometric distortions [4]. On the other hand, super-resolution techniques target the enhancement of the ground sampling distance of images without loss of information or the insertion of artifacts [5]. While traditional change detection methodologies have proven effective, they often depend on imagery of consistent spatial and spectral resolutions, which is rarely available in practice due to inherent differences in satellite sensors and their outputs. This study introduces a novel deep learning approach specifically designed for change detection using multi-resolution satellite imagery from different sensors, addressing challenges such as resolution and bandwidth disparities, and high magnification factors. We make use of the open-source FLOGA dataset [6] for burn scar mapping; thus, the proposed framework takes as input a high-resolution pre-event image from the Sentinel-2 satellites and a low-resolution post-event image from the MODIS satellites, with a significant magnification factor of up to x8. The output is a binary change map indicating areas of change versus no change. The novelty of this approach lies in its tailored architecture, capable of reconciling significant cross-resolution discrepancies between images acquired from different satellite platforms. Unlike existing methods that struggle with the integration of multi-sensor data due to resolution and sensor-specific variations, our approach incorporates specialised feature extraction pathways for high and low resolution data, and a cross-resolution alignment technique.
The network is designed to learn spatial correspondences despite the high magnification factor, maintaining the integrity of high frequency details during feature alignment. This capability is crucial for applications requiring precise change detection where post-event imagery might be lower in resolution due to budgetary, temporal, or logistical constraints. Extensive experiments on the FLOGA dataset demonstrate that our approach outperforms traditional change detection models and specialised multi-resolution change detection approaches. In particular, we evaluated several approaches, such as multi-resolution input models, knowledge distillation, two-step pipelines comprising distinct super-resolution and change detection modules, as well as self-supervised pretraining. Our proposed model achieves higher accuracy in detecting underlying changes, particularly in scenarios with substantial differences in resolution and sensor characteristics. This performance underscores the model's ability to generalise across data types, setting a new benchmark in change detection research. We are hopeful that our model will facilitate faster and more reliable monitoring of critical alterations in land use, deforestation, urban sprawl, and post-disaster damage assessment, thus supporting timely decision-making and resource allocation. [1] Jiang, Wandong, et al. "Change detection of multisource remote sensing images: a review." International Journal of Digital Earth 17.1 (2024): 2398051. [2] Wang, Lukang, et al. "Advances and challenges in deep learning-based change detection for remote sensing images: A review through various learning paradigms." Remote Sensing 16.5 (2024): 804. [3] Cheng, Guangliang, et al. "Change detection methods for remote sensing in the last decade: A comprehensive review." Remote Sensing 16.13 (2024): 2355. [4] Khelifi, Lazhar, and Max Mignotte. "Deep learning for change detection in remote sensing images: Comprehensive review and meta-analysis." 
IEEE Access 8 (2020): 126385-126400. [5] Sdraka, Maria, et al. "Deep learning for downscaling remote sensing images: Fusion and super-resolution." IEEE Geoscience and Remote Sensing Magazine 10.3 (2022): 202-255. [6] Sdraka, Maria, et al. "FLOGA: A machine learning ready dataset, a benchmark and a novel deep learning model for burnt area mapping with Sentinel-2." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (2024).
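As a point of reference for what the learned architecture improves upon, a naive cross-resolution baseline (a hypothetical illustration, not the authors' model) simply upsamples the low-resolution post-event image to the high-resolution grid and thresholds the absolute difference:

```python
import numpy as np

def naive_change_map(pre_hr, post_lr, scale, thresh):
    # Upsample the low-res post-event image by nearest-neighbour block
    # repetition, then threshold the absolute difference against the
    # high-res pre-event image to get a binary change map.
    post_up = np.kron(post_lr, np.ones((scale, scale)))
    return (np.abs(pre_hr - post_up) > thresh).astype(np.uint8)
```

The deep model replaces this crude interpolation-and-difference step with learned feature pathways and cross-resolution alignment, which is what allows it to cope with magnification factors up to x8.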
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Leveraging low-resolution labels and noise-robust learning for very high resolution building mapping

Authors: Anis Amziane, Dr. Marco Chini, Dr. Yu Li, Dr. João Gabriel Vinholi, Dr. Patrick Matgen
Affiliations: Luxembourg Institute Of Science And Technology
Building mapping is a foundational task in remote sensing, underpinning a wide array of applications such as population estimation, urban planning, and disaster management. Accurate building footprint extraction provides critical insights into human activity, resource allocation, and risk assessment. For instance, precise maps of urban structures are vital for effective resource distribution, identifying vulnerabilities in infrastructure, and supporting sustainable development goals. The emergence of very-high-resolution (VHR) imagery, with resolutions of 1 meter or better, has significantly advanced the ability to capture and represent fine-grained details of Earth's surface, offering potential for building mapping tasks and enabling detailed spatial analysis. Despite the growing availability of high-quality VHR imagery, a persistent challenge limits its widespread application: the lack of large-scale pixel-wise annotated data. Annotating VHR images with accurate, pixel-level labels is a labor-intensive and resource-demanding process, requiring domain expertise, significant time, and computational resources. This challenge is further exacerbated when aiming for global-scale deployment, as producing labeled data for vast and diverse regions is often infeasible. Consequently, this bottleneck constrains the development of deep learning models that could fully leverage the potential of VHR data for building mapping. To address these limitations, this study explores the use of low-resolution (LR) labels, which are widely available globally, albeit at a resolution of approximately 10 meters or coarser. These LR labels, often derived from global land cover products or other satellite-based datasets, serve as a valuable yet underutilized resource. While their coarse granularity makes them insufficient for direct use in fine-grained mapping tasks, they provide a promising starting point for bridging the resolution gap between LR labels and VHR imagery.
Harnessing these readily available labels for high-resolution tasks could unlock new possibilities in automated building mapping. In this paper, we propose a novel framework that leverages LR labels to infer accurate building footprints in VHR imagery without relying on pixel-wise annotations at high resolution. Our approach addresses the key challenges associated with this task through two main innovations:
1. Inferring Pseudo-High-Resolution Labels: Using LR labels as a base, we estimate pseudo-high-resolution (pseudo-VHR) labels that align with the granularity of VHR images. We design this inference process to bridge the resolution gap, effectively transforming coarse labels into fine-grained annotations suitable for VHR data. The pseudo-labels capture building footprints with improved detail, enabling their use in downstream learning tasks.
2. Noise-Robust Learning Strategies: Recognizing that the estimated pseudo-VHR labels may contain inherent noise and inaccuracies, we introduce robust learning techniques to refine these labels. Specifically, our framework incorporates noise-resistant loss functions and model training schemes to mitigate the effects of label noise. These strategies ensure that the learned building footprints maintain high accuracy, even when training data is noisy or imprecise.
The proposed pipeline employs a cascading learning structure. First, we train two deep convolutional neural networks (CNNs) sequentially to predict pseudo-VHR labels using a specific label super-resolution loss function. The cascading design transfers the weights learned in the first model to the second, enhancing training efficiency and speeding up convergence. Finally, we train a third model to refine the predicted labels using a noise-robust loss function, incorporating information from both low-resolution and high-resolution optical imagery to further enhance accuracy.
We evaluate the effectiveness of our framework on several publicly available VHR building mapping datasets, including the MIT Building dataset and the INRIA aerial image dataset. These datasets encompass a variety of urban landscapes, offering a diverse testing ground for assessing the generalizability of our approach. To benchmark our method, we compare its performance against several state-of-the-art (SOTA) techniques adapted to the building detection problem. The baselines include JoCoR, CoDis, SIGUA, and the Decoupling method, all of which are grounded in robust learning paradigms designed to handle noisy labels:
• JoCoR: This approach trains two neural networks jointly using a co-regularization loss to reduce prediction diversity. Both networks prioritize parameter updates for small-loss samples, which indicate higher label reliability.
• CoDis: In contrast to JoCoR, CoDis trains two networks in a divergence regime. It selects samples with high-discrepancy predictions between the networks, focusing on refining the most uncertain cases.
• SIGUA: This method employs selective gradient updating to enhance robustness against noisy labels.
• Decoupling: To prevent learning incorrect patterns from noisy data, the decoupling method separates the decision of when to update a model from how to update it, relying on prediction disagreement between classifiers.
We also evaluate label-matching loss functions, such as QR and RQ losses, originally proposed for label refinement tasks, alongside these noise-robust learning methods. These loss functions explicitly model the relationship between LR labels and VHR imagery, offering a complementary perspective to our proposed pipeline. Our experimental results demonstrate the efficacy of the proposed framework in generating high-quality building footprint maps using LR labels.
In terms of key metrics, the cascading architecture and noise-robust learning strategies significantly improve mapping accuracy, outperforming SOTA methods. Moreover, the framework exhibits strong generalization across diverse datasets, highlighting its potential for real-world deployment in remote sensing applications. In conclusion, this work addresses a critical bottleneck in VHR building mapping by leveraging widely available LR labels to infer fine-grained building footprints. By combining pseudo-labeling with noise-robust learning, our framework bridges the resolution gap and mitigates the challenges associated with noisy annotations. The proposed approach not only advances the SOTA in building mapping (a 15 % gain in terms of precision is achieved on average) but also opens new avenues for utilizing coarse datasets to tackle high-resolution remote sensing tasks at scale.
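The small-loss selection principle underlying baselines such as JoCoR can be sketched in a few lines of numpy (a generic illustration of the idea, not the benchmarked implementations): samples on which both networks agree in having low loss are treated as reliably labeled and are prioritized during training.

```python
import numpy as np

def small_loss_selection(loss_a, loss_b, keep_ratio):
    # Keep the fraction of samples with the smallest joint loss across the
    # two co-trained networks; these are assumed to carry less label noise.
    joint = np.asarray(loss_a) + np.asarray(loss_b)
    k = int(len(joint) * keep_ratio)
    return np.argsort(joint)[:k]
```

CoDis inverts this logic, selecting high-discrepancy samples instead; both variants rest on the observation that noisy labels tend to produce large, persistent losses early in training.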
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Guided Super-Resolution for Biomass Upsampling

Authors: Kaan Karaman, Yuchang Jiang, Dr. Damien Robert, Dr. Vivien Sainte Fare Garnot, Prof. Maria Joao Santos, Prof Jan Dirk Wegner
Affiliations: EcoVision Lab, Department of Mathematical Modeling and Machine Learning, University of Zurich, Geography, University of Zurich
Accurate Above-Ground Biomass (AGB) mapping at both large scale and high spatio-temporal resolution is essential for applications ranging from climate modeling to biodiversity assessment, and sustainable supply chain monitoring. Traditional non-invasive fine-grained AGB mapping relies on costly airborne laser scanning acquisition campaigns, usually limited to regional scales. Meanwhile, projects such as the ESA Climate Change Initiative (CCI) leverage diverse spaceborne sensors to produce global biomass estimates at a relatively low 100-meter spatial resolution. This trade-off between resolution and coverage has significant implications for ecological monitoring and policy-making since the performance of these and similar downstream tasks highly depends on both of these properties of the data. To enable high-resolution (HR) mapping globally, a common approach is to estimate a model for AGB from HR satellite observations such as ESA Sentinel-1 & 2 10-meter resolution images. To solve the same problem, we propose a novel way to address HR AGB prediction by leveraging both HR satellite observations and existing low-resolution (LR) biomass products. We cast this problem as Guided Super-Resolution (GSR), aiming at upsampling an LR biomass map (the source) using an auxiliary HR co-registered satellite image (the guide). We benchmark several existing GSR techniques against unguided upsampling methods alongside direct regression approaches on the BioMassters dataset. Our results demonstrate that Multi-Scale Guidance (MSG), the simplest yet effective deep-learning-based GSR technique, consistently outperforms direct regression from satellite imagery. MSG achieves superior performance in both regression performance metrics (-7.8 t/px RMSE, -5.7 t/px MAE) and perceptual quality scores (+2.0 dB PSNR, +0.07 SSIM) without introducing significant computational overhead. 
Additionally, GSR methods show higher accuracy in regions with higher biomass values, underscoring their potential for ecological applications in areas of critical importance. Another finding from our experiments reveals that, unlike the RGB+Depth setting they were originally developed for, our best-performing AGB GSR approaches are those that most preserve the guide image texture. We validate this observation through Fourier analysis, examining the frequency components in the predictions of the benchmark models. This difference in texture handling between tasks highlights the need for customized GSR models for biomass estimation. Our findings not only establish the utility of GSR for AGB mapping but also open new avenues for designing new models that balance texture preservation and predictive accuracy. This study lays the foundation for scalable and precise HR biomass mapping, contributing to a better understanding of global biomass dynamics and their future implications.
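For readers unfamiliar with guided super-resolution, a classic non-learned baseline is joint bilateral upsampling, in which the high-resolution guide steers the interpolation weights of the low-resolution source. The sketch below is illustrative only (the poster benchmarks deep GSR models such as MSG); `sigma_s` and `sigma_r` are assumed spatial and range bandwidths.

```python
import numpy as np

def joint_bilateral_upsample(src_lr, guide_hr, scale, sigma_s=1.0, sigma_r=0.1):
    # For each HR pixel, average nearby LR source values weighted by spatial
    # distance and by range similarity in the HR guide, so that edges in the
    # guide are preserved in the upsampled source.
    H, W = guide_hr.shape
    h, w = src_lr.shape
    guide_lr = guide_hr[scale // 2::scale, scale // 2::scale]  # guide at LR cell centers
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            ic, jc = i / scale, j / scale          # HR pixel in LR coordinates
            i0, i1 = max(0, int(ic) - 1), min(h, int(ic) + 2)
            j0, j1 = max(0, int(jc) - 1), min(w, int(jc) + 2)
            ii, jj = np.mgrid[i0:i1, j0:j1]
            ws = np.exp(-((ii - ic) ** 2 + (jj - jc) ** 2) / (2 * sigma_s ** 2))
            wr = np.exp(-(guide_lr[i0:i1, j0:j1] - guide_hr[i, j]) ** 2
                        / (2 * sigma_r ** 2))
            wgt = ws * wr
            out[i, j] = (wgt * src_lr[i0:i1, j0:j1]).sum() / wgt.sum()
    return out
```

The texture-preservation finding above corresponds to how strongly such range weights (or their learned counterparts) let the guide's high-frequency content pass into the output.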
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Enhancing Landsat-8 Temperature Downscaling in Subarctic Regions Through Tree Shadow Integration

Authors: Jérôme Pigeon, Foutse Khomh, Pooneh Maghoul
Affiliations: Polytechnique Montréal
The Arctic is warming at a rate significantly faster than the global average, making detailed monitoring of land surface temperature (LST) in this region critically important. Despite advancements in satellite imaging, acquiring high-resolution thermal data remains a significant challenge. High-resolution LST data is not only crucial as a proxy for climate change but also serves as an indicator of local thermal dynamics, which directly impact permafrost stability and regional ecosystems. Currently, the MODIS and Landsat-8 satellites are the primary public sources of satellite thermal data, offering spatial resolutions of 1 km and 100 m, respectively. While adequate for global-scale studies, these resolutions fall short for engineering and localized applications, where critical thermal dynamics are aggregated together into a single pixel value. This aggregation obscures fine-scale temperature variations within each pixel. Machine learning models can leverage external knowledge about a location to approximate true sub-pixel thermal dynamics from coarse thermal images. By incorporating key variables derived from higher-resolution reflectance data, such as the Normalized Difference Vegetation Index (NDVI), Urban Index, and Snow Cover, these models can enhance the resolution of thermal distributions in lower-resolution images. However, satellite reflectance data is inherently two-dimensional, omitting critical information about vertical features like tree height, which significantly influence thermal dynamics. This limitation is particularly significant in the subarctic, where vegetation cover is sparse and highly heterogeneous, creating localized thermal variability due to the cooling effect of tree shadows. However, with Landsat-8, this information is diluted into pixel values due to the large area they represent, effectively erasing the spatial context and influence of these thermal variations. 
Recent advancements in deep learning and remote sensing have opened new avenues for addressing these challenges. Among these breakthroughs is the High Resolution Canopy Height Maps dataset (CHM), which offers tree height data with a 1-meter spatial and vertical resolution, covering tree heights ranging from 1 to 25 meters. This study aims to improve the accuracy and spatial resolution of existing LST downscaling methods by integrating tree shadow effects with reflectance data from Landsat-8 and Sentinel-2, as well as elevation data. Tree shadows will be derived using tree height information from the CHM dataset, elevation data, and the sun’s position during Landsat-8 thermal acquisitions. Additionally, the study will evaluate the significance of incorporating tree height data in downscaling thermal imagery in subarctic regions.
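On flat ground, the geometric core of deriving a tree shadow from canopy height and the sun's position reduces to projecting the height along the sun direction. A minimal sketch (terrain slope and sun azimuth omitted; the study additionally uses elevation data):

```python
import numpy as np

def shadow_length(tree_height_m, solar_elevation_deg):
    # Horizontal shadow length cast on flat ground: L = h / tan(elevation).
    # Lower solar elevations (typical at subarctic latitudes) give longer
    # shadows, hence the stronger localized cooling effect noted above.
    return tree_height_m / np.tan(np.radians(solar_elevation_deg))
```

Combining this length with the solar azimuth at the Landsat-8 acquisition time yields the shadow footprint on the 1-m canopy height grid.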
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Benchmarking Deep Learning Super-resolution Techniques for Digital Elevation Models in Mountainous Regions

Authors: Nazanin Bagherinejad, Prof. Dr. Antara Dasgupta, Uni.-Prof. Dr. sc. habil. Julia Kowalski
Affiliations: Chair of Methods for Model-based Development in Computational Engineering, RWTH Aachen University, Institute of Hydraulic Engineering and Water Resources Management, RWTH Aachen University
Digital Elevation Models (DEMs) are 3-dimensional (3D) representations of the Earth's bare terrain, encapsulating critical information like elevation, slope, and aspect of the bare ground over a 2D grid. These models are the result of a computational pipeline that transforms raw satellite or aerial data into a standardized, user-friendly format. Unlike their raw satellite or aerial data sources, DEMs are easy to interpret and do not require any specialized knowledge. They serve as an indispensable input to numerous applications across a wide range of fields, from disaster management (e.g., geohazard simulations and predictions) to hydrology (e.g., stream flow and flood inundation forecasting). Decision support systems rely heavily on the accuracy and resolution of the underlying data, as incomplete or coarse data introduces uncertainties that propagate through the system and lead to unreliable results. Low-resolution DEMs are not capable of capturing micro terrain features, for instance, and therefore result in higher uncertainty in simulation outcomes. High-resolution DEMs, however, provide detailed topographic information that can be critical, especially in mountainous areas with abrupt and steep slopes. Thus, high-resolution DEMs play an essential role in advancing research and enhancing decision-support systems across many fields and applications. Recent advancements in deep learning have expanded the possibilities in the field of super-resolution. Deep architectures ranging from Convolutional Neural Networks (e.g., Super-Resolution Convolutional Neural Network (SRCNN)) to Generative Adversarial Networks (e.g., Generative Adversarial Network for Image Super-Resolution (SRGAN)) and Attention-based models have demonstrated success in enhancing the resolution of images in various domains. These models showcase unique strengths and weaknesses while delivering competitive results.
In addition, training configurations, particularly the choice of loss function, can significantly affect each model's performance and outcome. Nevertheless, it is essential to acknowledge the fundamental differences between DEMs and standard RGB (Red, Green, Blue) images. DEMs typically consist of a single channel that captures quantitative elevation values, making them more analytical tools than graphical representations. This scientific nature of DEMs highlights the importance of accuracy and precision over visual appeal, accounting for the emergence of specialized super-resolution models trained solely on DEMs. Subsequently, the necessity arises for systematic evaluations to identify the most effective super-resolution methods for DEMs. The present study aims to address this gap by performing a comparative analysis of the state-of-the-art methods and investigating their strengths and weaknesses. This research utilizes the DHM25 dataset from the Federal Office of Topography swisstopo. DHM25 is the digital height model of Switzerland, representing the topographic complexities of the Swiss Alps, which offers meaningful challenges for this analysis. Patches of 128×128 pixels were extracted from the matrix model with a 25-meter grid and subsampled by scale factors of 2 and 4 to create the low-resolution equivalents. Furthermore, as a second variation, slight Gaussian noise was added to the low-resolution patches in order to imitate real-world datasets. This allows us to compare the robustness of the different methods. Deep learning models from three categories (CNN-based, GAN-based, and Attention-based) were trained using four variations of the DHM25 dataset, differing in scale factor and noise condition: (1) scale factor 2 with noise, (2) scale factor 2 without noise, (3) scale factor 4 with noise, and (4) scale factor 4 without noise.
The performance of the models was assessed and analyzed using multiple metrics, including Mean Squared Error, Peak Signal-to-Noise Ratio, and Structural Similarity Index Measure. This comparison offers insights into various DEM super-resolution approaches and their performance, reliability, and robustness towards different scale factors/noise conditions. Results of this work will assist researchers and practitioners in selecting the most suitable approach to their DEM Super-Resolution task. Our results will be made publicly available as curated datasets with corresponding model implementations, helping move the field forward by enabling further downstream applications.
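Because DEMs are single-channel elevation grids rather than 8-bit images, metrics such as PSNR must use the elevation range of the data instead of a fixed 255 ceiling. A minimal sketch of this evaluation detail (an assumed convention for illustration, not necessarily the study's exact protocol):

```python
import numpy as np

def dem_psnr(pred, target):
    # PSNR with data_range taken from the target DEM's elevation span,
    # since elevations are not bounded to [0, 255] like RGB pixel values.
    mse = np.mean((pred - target) ** 2)
    data_range = target.max() - target.min()
    return 10.0 * np.log10(data_range ** 2 / mse)
```

The same data-range convention applies to SSIM when it is computed on elevation grids.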
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Trustworthy Resolution Enhancement: Non-generative super-resolution of Sentinel-2

Authors: Christian Ayala, Rubén Sesma, Mikel Galar
Affiliations: Tracasa Instrumental S.L., Public University of Navarre
Earth observation data is becoming increasingly accessible and affordable, largely due to the Copernicus programme and its Sentinel missions. Sentinel-2, for instance, provides global multi-spectral imagery every five days at the equator, freely available for a wide range of applications. Its RGB and Near-Infrared (RGBN) bands offer a spatial resolution of 10 meters, which is sufficient for many tasks but proves inadequate for others. For this reason, enhancing the spatial resolution of these images without incurring additional costs would significantly benefit subsequent analyses. This study addresses the challenge of increasing the spatial resolution of Sentinel-2's 10-meter RGBN bands to 2.5 meters, a process known as single-image super-resolution. The proposed solution leverages a reference satellite with spectral bands highly similar to those of Sentinel-2 but with higher spatial resolution. This enables the creation of paired images at both the source and target resolutions, which are then used to train a state-of-the-art Convolutional Neural Network (CNN) capable of recovering details absent in the original bands. CNNs were chosen over Generative Adversarial Networks (GANs) to avoid the introduction of synthetic artifacts or hallucinations that could negatively impact downstream analyses. Building on our previous work, where we achieved a fourfold resolution enhancement using the Enhanced Deep Residual Network (EDSR) architecture, this study introduces significant advancements. In our earlier approach, we utilized PlanetScope imagery with a native resolution of 3.125 meters, resampled to 2.5 meters, as ground truth. In this study, we shift to Geosat imagery, which provides resolutions as fine as 0.75 meters. This allows us to generate ground truth directly at 2.5 meters without the need for resampling, thereby enhancing the accuracy and reliability of the training data.
The quality of the dataset is critical when developing super-resolution models, so we carefully curated the dataset generation process. As part of this, we introduced a novel harmonization phase comprising two key tasks: spatial collocation improvement and radiometric matching. For spatial alignment, we employed a deep learning-based optical flow estimation model to calculate the necessary pixel-level translations between the low- and high-resolution images. This ensures that the low- and high-resolution image pairs are perfectly co-registered. For radiometric harmonization, we applied histogram matching to each spectral band individually. Following this, we computed the average spectral angle distance as a quality metric. Patches with high radiometric discrepancies were filtered out to ensure consistency across the dataset. Finally, given advancements in the state of the art since our earlier work, we opted to use a Second-order Attention Network (SAN) to further improve feature representation and correlation learning. The SAN architecture offers superior capability in capturing intricate feature relationships, enabling more accurate detail recovery during the super-resolution process. An exhaustive experimental study was conducted to validate our proposal, including a comparison with our earlier approach. The results demonstrate that the proposed methodology outperforms existing alternatives, highlighting the feasibility of further enhancing the resolution of Sentinel-2 images by using another satellite as a reference for training a CNN. Additionally, we show that the spectral radiometry of the native Sentinel-2 bands is preserved during the super-resolution process. This ensures that the enhanced images can be seamlessly used for subsequent analyses as if they were originally acquired by Sentinel-2. 
The findings of this study pave the way for a wide range of applications where the native spatial resolution of Sentinel-2 imagery falls short, offering an effective solution to extend its usability in more demanding scenarios.
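The spectral angle distance used in the harmonization phase to filter radiometrically inconsistent patches can be sketched directly (the standard definition; the authors' exact per-patch aggregation and threshold are not specified here):

```python
import numpy as np

def spectral_angle(a, b):
    # Angle (radians) between two spectral vectors; 0 means identical
    # spectral shape regardless of overall brightness.
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def mean_sad(img_a, img_b):
    # Average spectral angle over all pixels of two (H, W, bands) patches;
    # patches with a high value would be filtered out of the dataset.
    flat_a = img_a.reshape(-1, img_a.shape[-1])
    flat_b = img_b.reshape(-1, img_b.shape[-1])
    return np.mean([spectral_angle(x, y) for x, y in zip(flat_a, flat_b)])
```

Because the angle is invariant to per-pixel scaling, it isolates spectral-shape mismatches that histogram matching alone cannot correct.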
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Super-resolution of all Sentinel-2 bands to 10 meters using parameter-free attention and cross-correlation embeddings

Authors: Julio Cesar Contreras Huerta, Cesar Luis Aybar Camacho, Luis Gómez-Chova, Simon Donike, Freddie Kalaitzis
Affiliations: Image Processing Laboratory, University of Valencia, Oxford Applied and Theoretical ML Group, University of Oxford
The Copernicus Sentinel-2 mission provides multispectral imagery with high revisit frequency and global coverage, acquiring 13 spectral bands at spatial resolutions of 10, 20, and 60 meters. While the 20-meter and 60-meter bands are critical for applications like water stress assessment, vegetation monitoring, and atmospheric correction, their coarser resolution limits their utility for tasks requiring fine-scale details, such as mapping heterogeneous vegetation or narrow water bodies. To address this issue, reference super-resolution (refSR) methods can be utilized to achieve a uniform resolution of 10 meters across all spectral bands. These methods are based on the assumption that the correlation between spectral bands remains invariant across different spatial resolutions. In the case of Sentinel-2, the refSR process at 10 meters involves training models to upscale images from 40 meters to 20 meters (i.e., scale factor x2) and from 360 meters to 60 meters (i.e., scale factor x6). Once trained, these refSR models can effectively upsample the 20-meter and 60-meter bands to a 10-meter resolution. In this study, we introduce two innovations. First, we present a comprehensive global Sentinel-2 L2A and L1C dataset comprising 100,000 samples, stratified to capture a wide range of inter-band correlations across 10-meter, 20-meter, and 60-meter bands. This dataset includes scenarios with weak inter-band correlations, offering a more realistic basis for the training and evaluation of refSR models. Second, we propose a novel convolutional neural network (CNN) that incorporates a parameter-free attention mechanism, designed to emphasize critical land covers with fine textures and land cover boundaries. Additionally, this model learns an implicit representation by aligning high-frequency details with contrasting inter-band correlation vectors. 
Our preliminary results indicate that this approach consistently outperforms existing refSR architectures in quantitative assessments, delivering results that are both more accurate and visually plausible.
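The scale-invariance assumption behind refSR training can be made concrete with a small sketch: training pairs are built by further degrading the 20-m band (here with simple block averaging as a crude stand-in for the sensor response), so that a x2 model trained on 40 m → 20 m can later be applied to the real 20-m band to reach 10 m.

```python
import numpy as np

def degrade(img, factor):
    # Block-average downsampling as a simple stand-in for the sensor MTF.
    h, w = img.shape
    return img[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def make_refsr_pair(band20, factor=2):
    # Wald-style reduced-scale protocol: the degraded 40-m version is the
    # network input, the original 20-m band is the target.
    return degrade(band20, factor), band20
```

The same construction with factor 6 yields the 360 m → 60 m pairs used to train the x6 model for the 60-m bands.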
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Investigating Generalized Strategy for Single-Image Satellite Super Resolution Using Deep Learning

Authors: Sandeep Kumar Jangir, Dr. Reza Bahmanyar
Affiliations: German Aerospace Center (DLR)
The effectiveness of remote sensing applications is critically influenced by image quality, which can be constrained by factors such as resolution, environmental conditions, and sensor-specific artifacts. Super-resolution (SR) is a fundamental computer vision task and an ill-posed inverse problem focused on enhancing the spatial resolution of images. It encompasses traditional approaches such as interpolation techniques, wavelet transformations, and sparse representation, as well as modern deep learning-based methods. SR methods typically rely on paired low- and high-resolution (LR and HR) images, where LR images are generated through bicubic downsampling of HR images. While effective on the data distribution they are trained on, these methods struggle to generalize to datasets outside the training distribution due to domain gaps between different sensors and ground sample distances (GSDs), necessitating retraining for new data [2]. In this paper, we extend our previous work [1] by demonstrating the effectiveness of super-resolution on Sentinel and similar low-resolution satellite images from RGB and multispectral sensors. The uniqueness of our approach lies in the fact that it was trained exclusively on high-resolution aerial and satellite images with GSD under 1.5 meters, yet still performs well on low-resolution images. This is made possible by the nature of the method itself, which was designed for generalized image enhancement across a variety of data sources. We use U2D2 [1], our prior framework for generalized enhancement, which was initially developed to enhance high-resolution aerial and satellite images with GSDs below 1.5 meters. The U2D2 framework utilizes a modular approach, where a deep learning-based upsampler (DLU) first performs SR and mitigates common degradations such as noise, blur, compression artifacts, and aliasing by simulating LR images during training. 
The upsampled images are then processed by a diffusion-based refinement module, which sharpens the image and recovers details from the original LR input. This pipeline not only produces visually improved images, but also enhances downstream applications like object detection and building/road segmentation. Our results demonstrate that the framework works effectively across a range of sensors and GSDs (from 10 cm to 1.5 m), and notably, it can super-resolve Sentinel-2 and similar low-resolution satellite images by up to 4x, from 10 m GSD to 2.5 m GSD, without requiring retraining for new sensors or GSDs. In the presentation, we will showcase the SR results for various low-resolution satellite images and compare them to other state-of-the-art methods. Our results demonstrate that our approach produces high-quality, natural-looking super-resolved images, offering substantial improvements in visual quality and performance for remote sensing applications. Our approach demonstrates how a modular framework can provide a scalable and robust solution for the enhancement of low-resolution satellite imagery, extending its applicability to RGB and multispectral bands. Such an approach has significant implications for remote sensing applications, enabling more accurate environmental monitoring, urban planning, and agricultural analysis, especially in scenarios constrained by limited resolution. By addressing domain gaps and offering a generalized solution, this work paves the way for broader adoption of SR techniques in remote sensing workflows. [1] S. K. Jangir and R. Bahmanyar, "U2D2: A Blind Super Resolution and Enhancement Framework for Aerial and Satellite Images," ISPRS Journal of Photogrammetry and Remote Sensing, 2024 (submitted). [2] P. Wang, B. Bayram, and E. Sertel, "A comprehensive review on deep learning based remote sensing image super-resolution methods," Earth-Science Reviews, vol. 232, p. 104110, 2022.
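The degradation simulation the abstract describes (simulating noisy, blurred, aliased LR images from HR data during training) can be sketched as follows. This is a minimal illustration, not the U2D2 pipeline itself: the box-blur kernel, stride subsampling, and noise level are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_lr(hr, scale=4, blur_size=3, noise_sigma=0.01):
    """Synthesize a degraded LR image from an HR image (illustrative only).

    Applies a separable box blur, strided subsampling (which introduces
    aliasing), and additive Gaussian noise -- stand-ins for the kinds of
    degradations a blind-SR model is exposed to during training.
    """
    # Separable box blur: moving average along rows, then columns.
    k = np.ones(blur_size) / blur_size
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, hr)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    # Strided subsampling (no further anti-aliasing beyond the blur).
    lr = blurred[::scale, ::scale]
    # Sensor-like additive noise, then clip back to valid reflectance range.
    lr = lr + rng.normal(0.0, noise_sigma, lr.shape)
    return np.clip(lr, 0.0, 1.0)

hr = rng.random((256, 256))
lr = simulate_lr(hr)
print(lr.shape)  # (64, 64)
```

Training pairs built this way let the upsampler learn to invert a family of degradations rather than a single fixed bicubic kernel, which is what the abstract credits for the generalization across sensors.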

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Evaluation of super-resolution results using a knowledge-based spectral categorisation system

Authors: Felix Kröber, Dirk Tiede, Martin Sudmanns, Hannah Augustin, Andrea Baraldi
Affiliations: University of Salzburg, Department of Geoinformatics, Forschungszentrum Jülich, Institute of Bio- and Geosciences, IBG-2: Plant Sciences, Spatial Services GmbH
Motivation. Super-resolution (SR) models are critical for enhancing the spatial resolution of satellite imagery, enabling the generation of very-high-resolution data in a cost-efficient manner. However, the value of SR data strongly depends on its reliability or trustworthiness. SR evaluation methods should assess the spectral fidelity of SR outputs and provide means to interpret possible systematic biases of SR outputs. Currently, evaluation of SR models [1,2] often relies on accuracy metrics such as the Root Mean Square Error or Structural Similarity Index, which primarily focus on intensity differences by measuring Euclidean distances between the spectral signatures of reference data and SR outputs. However, relevant spectral inconsistencies are not necessarily discoverable by employing aggregative distance metrics. Additionally, accuracy figures obtained this way offer no possibility of describing the encountered biases semantically. This limits the formulation of model applicability as well as an in-depth analysis and targeted mitigation of errors. The problems described also apply to the use of more recent perceptual metrics, such as the Learned Perceptual Image Patch Similarity [3]. To tackle this issue, we assess the incorporation of a physical, knowledge-based spectral categorisation system to facilitate not only the detection but also the meaningful semantic characterisation of spectral SR errors. By validating SR outputs this way alongside traditional metrics, our study aims to provide a framework for a more nuanced understanding of SR outputs, reinforcing the trustworthiness of SR products for subsequent usage in downstream applications such as land use classifications. Data & Methods. The SR data were produced within a research project focusing on the agricultural domain [4]. Specifically, a two-part model adapting the established ESRGAN+ [5] and EDSR [6] architectures was trained on pairs of Sentinel-2 (S-2) and PlanetScope imagery acquired over Austria.
The training and validation sets were sampled in a spatially disjoint manner from the test set. The latter comprises 1235 test set tiles, each covering an area of 1.28 x 1.28 km², for which both S-2 and PlanetScope imagery are available to be compared to the SR outputs. For the purposes of evaluation, all data is resampled to the SR resolution (2.5 m). As a basis for the knowledge-based evaluation of the SR results, the Satellite Image Automatic Mapper (SIAM) system [7] is used. SIAM is a fully automated, hyperparameter-free decision tree for categorizing multi-spectral data. The model-based expert system is capable of employing any multispectral image data that is radiometrically calibrated to at least Top of Atmosphere (TOA) reflectance. Operating as a pointwise operator, it maps reflectances into a discrete and finite vocabulary of semi-symbolic spectral categories. The extensive multidimensional continuous data space is reduced to essential information components that can be represented as an 8-bit discrete output raster. The categories of this raster are not immediate land use or land cover classes, as these high-level semantic concepts cannot be derived in an unambiguous way on a per-pixel basis. SIAM categories instead represent an intermediate level of semantic enrichment that can be derived more directly from the spectral information. Employing SIAM in the context of SR output evaluation is based on two considerations: 1. Given its physical and semantic nature, SIAM offers enhanced interpretability of spectral signatures. Beyond detecting the intensity of changes in the spectral signature between the original product and the SR result, SIAM allows the changes to be assessed in terms of their type and quality (e.g. changes from vegetation-like signatures to soil-like signatures). This makes it easier to identify systematic model biases. 2.
The SIAM-inherent discretization of continuous reflectances accounts for the fact that not every distance in the multivariate reflectance vector space is equally important. Starting from a given spectral signature, a multivariate displacement vector of a given metric distance x can have different implications depending on its direction. For the same x, the resulting spectral signature can either a) still characterise the same land use/cover type (e.g. variability of the vegetation signature in the mid-range infrared depending on water scarcity), b) reflect a different land use/cover type, or c) transform the given signature towards a physically implausible signature. These three possible changes should be given different significance in the evaluation of SR results despite the same distance-metric change, as subsequent downstream models, e.g. for land use classification, also give different weight to these types of changes, either explicitly (knowledge-based models) or implicitly (data-based models). SIAM has several sensor modes, allowing it to be applied to a range of multispectral input images despite different available bands. For the current case, SIAM is run with 6-band inputs (R-G-B-NIR-SWIR1-SWIR2) for S-2 and SR products, and additionally with 4-band inputs (R-G-B-NIR) for PlanetScope. The output granularities depend on the chosen sensor mode except for the outputs with 33 categories, which can be calculated across all sensors. These harmonizing 33 categories are thus used as the primary basis for all following evaluations. Results & Discussion. The spectral categorization demonstrates that most SR tiles retained spectral consistency, with fewer than 40% of pixels exhibiting any spectral changes. For a more detailed consideration of the comparisons of the SR categorization with the reference data, a breakdown of the frequencies by individual spectral categories is presented.
Reflecting the selection of tile locations over Austria with a focus on agricultural areas, approximately 80% of all pixels are categorized as vegetation-like, both in the original data and in the SR outputs. It is evident that a large proportion of the category transitions representing changes occur within supersets of categories (e.g. within vegetation-like or within bare soil-like categories). Among the changes across supersets are primarily transitions from bare soil categories in the original data to weak vegetation in the SR outputs. The complementary change, i.e., pixels categorized as weak vegetation in the original data being categorized as bare soil in the SR outputs, also occurs, albeit with lower frequency. Other severe changes involve re-categorizations of original dark soil pixels as water or shadow-like pixels in the SR outputs. The complementary process is much less pronounced here. The proportion of spectral signatures categorized as unknown according to the knowledge-based SIAM framework averages 0.17% for PlanetScope, 0.37% for S-2, and 1.43% for the SR outputs. The SR outputs thus have an increased proportion of signatures that cannot be interpreted physically. Comparing the spectral categorization of SR outputs to the S-2 and PlanetScope categorizations individually, a closer alignment with PlanetScope's categorization is evident. Quantitatively, an average of almost 40% of pixels is categorized differently when comparing SR outputs to S-2 data. For the pair of SR outputs and PlanetScope data, the figure amounts to 34%. This observation aligns with the qualitative impression resulting from a visual inspection of the SR results, plotted as RGB true-color composites and CIR false-color composites. Here, too, the SR data seem to reconstruct the spectral patterns of PlanetScope more closely than those of S-2.
Conclusion. This study underscored the potential of spectral categorisation assessment as a robust complement to traditional metrics, facilitating deeper insights into SR model performance. Quantitative insights into spectral fidelity were provided, and the semantic description of encountered changes allowed systematic biases of the SR model to be uncovered. References. [1] D. C. Lepcha, B. Goyal, A. Dogra, and V. Goyal, ‘Image super-resolution: A comprehensive review, recent trends, challenges and applications’, Information Fusion, vol. 91, pp. 230–260, Mar. 2023, doi: 10.1016/j.inffus.2022.10.007. [2] Z. Wang, J. Chen, and S. C. H. Hoi, ‘Deep Learning for Image Super-Resolution: A Survey’, IEEE Trans. Pattern Anal. Mach. Intell., vol. 43, no. 10, pp. 3365–3387, Oct. 2021, doi: 10.1109/TPAMI.2020.2982166. [3] R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang, ‘The Unreasonable Effectiveness of Deep Features as a Perceptual Metric’, Apr. 10, 2018, arXiv:1801.03924, doi: 10.48550/arXiv.1801.03924. [4] FFG, ‘SMAIL – Super-resolution-based Monitoring through AI for small Land parcels’. Accessed: Nov. 24, 2024. [Online]. Available: https://projekte.ffg.at/projekt/4351017 [5] N. C. Rakotonirina and A. Rasoanaivo, ‘ESRGAN+: Further Improving Enhanced Super-Resolution Generative Adversarial Network’, in ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 2020, pp. 3637–3641, doi: 10.1109/ICASSP40776.2020.9054071. [6] C. Lanaras, J. Bioucas-Dias, S. Galliani, E. Baltsavias, and K. Schindler, ‘Super-resolution of Sentinel-2 images: Learning a globally applicable deep neural network’, ISPRS Journal of Photogrammetry and Remote Sensing, vol. 146, pp. 305–319, Dec. 2018, doi: 10.1016/j.isprsjprs.2018.09.018. [7] A. Baraldi, M. L. Humber, D. Tiede, and S.
Lang, ‘GEO-CEOS stage 4 validation of the Satellite Image Automatic Mapper lightweight computer program for ESA Earth observation level 2 product generation – Part 2: Validation’, Cogent Geoscience, vol. 4, no. 1, p. 1467254, Jan. 2018, doi: 10.1080/23312041.2018.1467254.
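The category-transition analysis described above amounts to a cross-tabulation of two integer-coded category rasters. A minimal sketch, assuming the harmonized 33-category output mentioned in the abstract (the helper names and test data are illustrative, not part of SIAM):

```python
import numpy as np

def transition_matrix(ref_cat, sr_cat, n_categories=33):
    """Cross-tabulate per-pixel spectral categories of a reference
    raster against an SR-output raster (both integer-coded, 0-based)."""
    idx = ref_cat.astype(np.int64) * n_categories + sr_cat.astype(np.int64)
    counts = np.bincount(idx.ravel(), minlength=n_categories ** 2)
    return counts.reshape(n_categories, n_categories)

def changed_fraction(matrix):
    """Fraction of pixels whose category differs between the two rasters
    (off-diagonal mass of the transition matrix)."""
    return 1.0 - np.trace(matrix) / matrix.sum()

# Synthetic example: perturb ~20% of a random categorical raster.
rng = np.random.default_rng(1)
ref = rng.integers(0, 33, size=(128, 128))
sr = ref.copy()
flip = rng.random(ref.shape) < 0.2
sr[flip] = rng.integers(0, 33, size=int(flip.sum()))
m = transition_matrix(ref, sr)
print(round(changed_fraction(m), 2))
```

Rows and columns of `m` can further be grouped into supersets (vegetation-like, bare soil-like, water/shadow-like) to reproduce the within-superset versus across-superset breakdown reported in the results.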

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Deep Learning Techniques to Enhance Spatial Resolution of Thermal Imagery for Fire and Cloud Detection

Authors: Valentina Kanaki, Stella Girtsou, Aggelos Georgakis, Vassilia Karathanassi, Dr. Charalampos Kontoes
Affiliations: National Observatory of Athens (NOA), Institute for Astronomy, Astrophysics, Space Applications and Remote Sensing (IAASARS), BEYOND Center of Earth Observation Research and Satellite Remote Sensing, National Technical University of Athens (NTUA), School of Rural and Surveying Engineering, Remote Sensing Laboratory
In recent years, the field of remote sensing and satellite imagery has gained significant attention. Among the critical applications of remote sensing, the timely detection and management of wildfires have become increasingly important due to growing environmental and social impacts. This effort is closely linked to the 2030 Agenda and the Sustainable Development Goals (SDGs), particularly Goal 13 on climate action and Goal 15 on life on land, as wildfires destroy biodiversity and exacerbate climate change. Geostationary satellites, such as Meteosat Second Generation (MSG), enable the detection and monitoring of thermal anomalies of wildfires with a refresh frequency ranging from 5 to 15 minutes. This frequency meets the needs of wildfire response agencies, offering details about the fire's timing, radiative power, and location. However, despite the high temporal resolution of thermal images, their limited spatial resolution can hinder the early detection of wildfires. This spatial resolution ranges from 3 km at the Equator to 4.5 km at Mediterranean latitudes. Furthermore, cloud masks play a crucial role in satellite imagery analysis. While clouds often obscure the satellite's observation target, hindering data collection and analysis, they also offer valuable insights into atmospheric conditions, precipitation patterns, and climate change. Moreover, clouds are a major factor in solar energy applications, as they significantly affect the solar radiation reaching the Earth's surface. Therefore, accurately detecting cloud coverage in satellite images using techniques like spatial downscaling is essential for optimizing many EO applications, such as weather nowcasting and wildfire monitoring. Super-resolution using deep learning techniques aims to overcome these limitations by enhancing the spatial resolution of satellite images.
The objective of this study is to create a comprehensive dataset and conduct a qualitative and quantitative comparison of deep learning techniques, such as SRCNN and SRGAN, for improving the spatial resolution of thermal images and cloud masks generated by the SEVIRI sensor on the MSG geostationary satellite. Dataset curation. This project exploits two data sources: 1. The MODIS level 1B calibrated observations (MOD021KM/MYD021KM), which are converted to spectral radiances for two of the 36 standard resolution channels, and the MODIS cloud mask (MOD35_L2/MYD35_L2). 2. The SEVIRI level 1.5 calibrated observations (Rapid Scan High Rate SEVIRI Level 1.5 - MSG) for the 12 standard resolution channels and the SEVIRI cloud mask (Rapid Scan Cloud Mask - MSG). The dataset was created based on MODIS active fire measurements. The study focuses on Greece, using data from 2018 to 2023. First, we selected the MODIS scenes with high fire intensity (a large number of active fire pixels) and matched them with the closest SEVIRI image in time. We performed the necessary geoprocessing of the AQUA/MODIS and MSG/SEVIRI data to convert them to NetCDF files. After geoprocessing, the SEVIRI and MODIS images can be aligned using their coordinate information. Special care is taken to perform temporal alignment: as SEVIRI rapid scan measures every 5 minutes, each MODIS image is matched with the closest SEVIRI measurement in time. The cloud mask dataset was created using a methodology similar to the one used for the active fires dataset. Cloud masks from MODIS were collected and matched with the SEVIRI cloud masks that shared corresponding capture times. We then created patches and selected them based on a) the number of active fires and b) the presence of clouds, for training models to apply SR for active fire detection and SR for refined cloud masks, respectively. Data loaders. We created flexible data loaders to configure different bands and sensors, enabling large experimentation spaces.
For the experiments where the SR exploited the MODIS bands, the SEVIRI images were downscaled by bicubic interpolation to match the MODIS spatial resolution. Methods. In our initial study, we used two models for SR tasks: the SR Convolutional Neural Network (SRCNN) and the SR Generative Adversarial Network (SRGAN). SRCNN, our shallow baseline model, is a convolutional neural network specially configured for image SR tasks and trained using the Mean Square Error (MSE). SRGAN adopts a generative adversarial network (GAN) framework, utilizing a generator and a discriminator, and is optimized on a perceptual loss function balancing content loss (MSE) with adversarial loss. During training, we also monitored peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) scores. We tested two different data combinations: 1. SEVIRI - MODIS: In this scenario, the models learned to map lower-resolution SEVIRI data to the higher-resolution MODIS images. 2. SEVIRI - upsampled SEVIRI: Here, we used Wald's protocol. The low-resolution SEVIRI images were first upscaled using bicubic interpolation to match the target resolution. The original high-resolution SEVIRI images served as ground truth, while the bicubic-upsampled versions were used as input. Our experiments were based mainly on channels 21, 22, and 31 from MODIS and IR3.9 and IR10.8 from SEVIRI, chosen for their relevance to our application. Initial results indicate that the SEVIRI-MODIS mapping approach outperforms the self-supervised SEVIRI method, suggesting that leveraging external high-resolution data significantly improves super-resolution performance. Future work. To build upon these findings, we plan to broaden the scope of our experiments. Our next steps include expanding the dataset to cover additional years and applying our methods on a larger European scale. This extended analysis will help us better generalize our models and enhance their robustness across diverse geographic and temporal conditions.
Furthermore, we plan to explore additional downscaling techniques that have demonstrated effectiveness with remote sensing data to further improve the accuracy and applicability of our models. Acknowledgements. The ESA-funded ASIMOV project addresses the challenge of the coarse spatial resolution of Essential Climate Variables (ECVs) by using AI super-resolution techniques. These techniques utilize both satellite Earth Observation (EO) data and other non-EO data sources. The super-resolution methods are applied in two key use cases: fire risk prediction and real-time wildfire detection. By leveraging these diverse data sources, ASIMOV generates highly informative, super-resolved versions of the ECVs. The project is led by the National Observatory of Athens, supported by NTUA ICCS and WIRELESSINFO.
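The Wald's-protocol pairing and the PSNR metric described above can be sketched in a few lines. This is an illustrative stand-in, not the authors' code: block-mean downsampling and nearest-neighbour upsampling replace the bicubic resampling mentioned in the abstract, to keep the sketch dependency-free.

```python
import numpy as np

def wald_pair(hr, scale=4):
    """Build an (input, target) training pair per Wald's protocol:
    degrade the original image, re-upsample it to the original grid,
    and keep the original as ground truth. Block-mean downsampling and
    nearest-neighbour upsampling stand in for bicubic here."""
    h, w = hr.shape
    lr = hr.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
    upsampled = np.repeat(np.repeat(lr, scale, axis=0), scale, axis=1)
    return upsampled, hr

def psnr(pred, target, data_range=1.0):
    """Peak signal-to-noise ratio, one of the metrics monitored during training."""
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(2)
hr = rng.random((64, 64))
inp, gt = wald_pair(hr)
print(inp.shape, round(psnr(inp, gt), 1))
```

Because the degraded input and the ground truth live on the same grid, the pair can be fed directly to SRCNN-style models that operate at the target resolution.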

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Resource-Efficient Super-Resolution for Sentinel-2 Imagery Using Modular Auto-Encoders and U-Net Architectures

Authors: Thomas Cusson, Thomas Corpetti, Antoine Lefebvre
Affiliations: KERMAP, CNRS UMR LETG
NIMBO (https://nimbo.earth/), developed by KERMAP, is an advanced Earth observation platform that delivers high-quality, GIS-ready basemaps derived from Sentinel-2 data. These monthly basemaps provide consistent and comprehensive coverage of the Earth's surface, supporting applications in urban planning, agriculture, and environmental monitoring. However, the 10 m spatial resolution of Sentinel-2 data can limit its utility for applications requiring finer spatial detail. To overcome this limitation, we have developed a modular and efficient super-resolution framework capable of enhancing Sentinel-2 images to 2.5 m resolution, achieving a 4x improvement in spatial detail while maintaining low computational cost. Our approach integrates seamlessly with NIMBO's processing pipeline, ensuring scalability and resource efficiency. The framework leverages a modular architecture combining an auto-encoder and a U-Net, designed to address the computational and data challenges of super-resolution. The process begins by training an auto-encoder on high-resolution (HR) images to create a compact bottleneck representation. This bottleneck significantly reduces data complexity while retaining the essential features needed for reconstruction. Next, a U-Net, equipped with attention mechanisms and residual connections, is trained to map low-resolution (LR) Sentinel-2 inputs to this bottleneck representation. Finally, the decoder reconstructs HR images, capturing fine spatial details and preserving critical edge features. The proposed framework offers several key advantages:

- Enhanced basemap utility: By improving spatial resolution, the framework produces more detailed basemaps, better suited for precision-demanding applications.
- Efficiency and scalability: The modular design reduces computational requirements and aligns with NIMBO's goal of resource-efficient data processing for large-scale operations.
- Generalization with limited data: The auto-encoder ensures robust HR image reconstruction, even with limited training samples, by enforcing a structure that mimics true HR image distributions.

This work not only enhances the quality of NIMBO's basemaps but also demonstrates the potential of combining deep learning architectures for scalable and efficient satellite image super-resolution. By addressing both scientific and operational challenges, our method positions NIMBO as a leader in delivering high-resolution satellite-derived products and provides a flexible framework adaptable to other remote sensing tasks.
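The three-stage dataflow described above (HR auto-encoder, LR-to-bottleneck mapper, decoder) can be sketched in terms of its interfaces. This shows only the modular wiring and shapes; random linear maps stand in for the trained auto-encoder and U-Net, and all dimensions are illustrative assumptions (a 10x10 LR patch at 10 m becomes a 40x40 SR patch at 2.5 m).

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative sizes: 40x40 HR patch, 10x10 LR patch, 64-dim bottleneck.
HR, LR, Z = 40 * 40, 10 * 10, 64
W_enc = rng.normal(0, 0.01, (Z, HR))   # auto-encoder encoder: HR -> bottleneck
W_dec = rng.normal(0, 0.01, (HR, Z))   # auto-encoder decoder: bottleneck -> HR
W_map = rng.normal(0, 0.01, (Z, LR))   # U-Net stand-in: LR -> bottleneck

def encode(hr_patch):   # stage 1: compact representation learned from HR data
    return W_enc @ hr_patch.ravel()

def map_lr(lr_patch):   # stage 2: map the LR input into the same bottleneck
    return W_map @ lr_patch.ravel()

def decode(z):          # stage 3: reconstruct the HR patch from the bottleneck
    return (W_dec @ z).reshape(40, 40)

lr_patch = rng.random((10, 10))
sr_patch = decode(map_lr(lr_patch))    # inference path: LR -> bottleneck -> HR
print(lr_patch.shape, "->", sr_patch.shape)
```

The design choice this illustrates: because the decoder is trained only on HR images, the U-Net needs to predict a low-dimensional bottleneck rather than full HR pixels, which is what keeps the mapping stage cheap and data-efficient.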

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: AI-Driven Super-Resolution in Earth Observation: Addressing Domain Shift and Uncertainty in Thermal Data Analysis

Authors: Pauline Hecker, Hannes Baeuerle, Shivali Dubey, Dr. Peter Kuhn
Affiliations: Fraunhofer Ernst-Mach-Institut, EMI
The proliferation of Artificial Intelligence (AI) in Earth System Science and Earth Observation (EO) is revolutionizing research methodologies and fostering innovation at an unprecedented rate. Yet, to harness the full potential of AI in Earth Action initiatives, it is imperative that AI solutions exhibit explainability, physics-awareness, and trustworthiness. This ensures that the outcomes are reliable and fit for intended purposes, especially in critical areas such as climate and environmental monitoring. Our motivation stems from the critical need for high-resolution (HR) satellite data in numerous scientific and commercial sectors. Acquiring such data, however, often involves prohibitive costs. AI-driven Super-Resolution (SR) techniques are a promising solution, enabling the generation of HR images from low-resolution (LR) inputs. However, the HR training data required for the development of SR models in critical sectors is frequently sparse or nonexistent. Here, it would be convenient if we could construct SR models for data-scarce domains using models fitted on data-rich domains. Such an approach, however, requires validation that the domain shift does not introduce high prediction uncertainties. An emerging field of research is the application of SR techniques to thermal satellite imagery. SR techniques are applied to both raw thermal radiation data and processed products like Land Surface Temperature (LST). In this context, the objective of our research is to develop SR algorithms for the thermal satellite data product LST that generalize between data domains and incorporate uncertainty quantification. We systematically investigate the impact of domain shift on predictive uncertainty, as assessed by standard measures as well as an innovative usage of the cycle loss, which preserves important features during the aforementioned image translation process and is known from image generation and style transfer using CycleGAN.
Our research therefore focuses on developing a guided SR approach, where not only pairs of HR and LR images, but also physical relationships between individual bands and band combinations and the land surface temperature, are incorporated into the model. For example, the structural and feature-based similarities between LST and the Normalized Difference Built-up Index (NDBI) can be utilized to first learn the mapping between the two types of images at low resolution. This serves as the domain adaptation step, which can then facilitate a domain shift at low resolution, ultimately allowing us to generate high-resolution thermal images. Furthermore, the red, green, and blue color channels, along with combinations of these with the near- and shortwave-infrared bands of Landsat-8/9, are used for model training. Landsat-8/9 produce LST data at 100 m resolution and visible as well as near- and shortwave-infrared data at 30 m resolution, which represent the guidance data. The models are to learn the HR domain of the guidance data, to which the LR LST data is then adapted. As a result, the LST's resolution domain will have changed to the 30 m guidance data domain, corresponding to a super-resolution factor of roughly 3. To ensure reliability, we develop two strands of uncertainty quantification for our model. First, we use conservative measures like Monte Carlo Dropout and model ensembling to measure the model's epistemic uncertainty. Second, we attempt to shed additional light on the uncertainty arising from the intended domain shift by training invertible or cyclic generative model architectures, like CycleGANs and invertible neural networks.
These models can potentially improve domain generalizability through the cycle loss, which enforces the cyclic or invertible property so that a model not only performs image translations from the source domain to the target domain, but can also reconstruct an image in the source domain from the target domain without losing information about important features. We systematically validate this approach by testing it on LR data domains where the ground truth is available and compare it to established methods of evaluating predictive uncertainty arising from domain shifts. We thus demonstrate that we can produce a reliable upper bound on the error expected from a domain shift. The study focuses on the 15 most populated cities of Europe. This geographic focus ensures that the model is tested in various urban environments, providing a robust assessment of its applicability across different contexts. In summary, we employ a guided SR model to predict HR images from LR images, together with other EO data, by leveraging the cycle loss mechanism from generative AI. Our paper systematically investigates how effectively the models generalize to HR thermal domains where data is sparse. The overall objective is to evaluate how well the models can learn a mapping from surface features to thermal data, and then employ these models to produce HR thermal images without access to training data.
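Two of the building blocks above, the NDBI guidance index and ensemble-based epistemic uncertainty, can be sketched directly. The band values, tile size, and the tiny three-member ensemble are illustrative assumptions, not the study's configuration.

```python
import numpy as np

def ndbi(swir, nir):
    """Normalized Difference Built-up Index from SWIR and NIR reflectance,
    used here as structural guidance for the thermal domain."""
    return (swir - nir) / (swir + nir + 1e-9)

def ensemble_uncertainty(predictions):
    """Epistemic uncertainty as the per-pixel standard deviation across an
    ensemble of model predictions (the model-ensembling strand; Monte Carlo
    Dropout yields the same statistic from repeated stochastic passes)."""
    stack = np.stack(predictions)
    return stack.mean(axis=0), stack.std(axis=0)

rng = np.random.default_rng(4)
swir = rng.uniform(0.1, 0.5, (32, 32))
nir = rng.uniform(0.1, 0.5, (32, 32))
guide = ndbi(swir, nir)                  # guidance raster, values in [-1, 1]

# Three hypothetical ensemble members predicting LST (kelvin) for one tile.
preds = [290.0 + 5.0 * rng.random((32, 32)) for _ in range(3)]
mean_lst, sigma_lst = ensemble_uncertainty(preds)
print(guide.shape, mean_lst.shape)
```

Pixels where `sigma_lst` is large flag regions where the ensemble disagrees, which is exactly the quantity the abstract proposes to compare against the cycle-loss-based bound under domain shift.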

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: C.05.03 - POSTER - ALTIUS: ESA's Ozone Mission

In the 1970s, scientists discovered that the ozone layer was being depleted, particularly above the South Pole, resulting in what is known as the ozone hole. To address the destruction of the ozone layer, the international community established the Montreal Protocol on ozone-depleting substances. Since then, the global consumption of ozone-depleting substances has been reduced by about 98%, and the ozone layer is showing signs of recovery. However, it is not expected to fully recover before the second half of this century. It is imperative that concentrations of stratospheric ozone, and how they vary with the seasons, are monitored continually, not only to assess the recovery process, but also for atmospheric modelling and for practical applications including weather forecasting.
The Atmospheric Limb Tracker for Investigation of the Upcoming Stratosphere (ALTIUS) mission fills a very important gap in the continuation of limb measurements for atmospheric sciences. The ALTIUS mission will provide 3-hour-latency near-real-time ozone profiles for assimilation in Numerical Weather Prediction systems, and consolidated ozone profiles for scientific ozone analysis. Profiles of other trace gases and of aerosol extinction will also be provided.
The focus of this session is the mission and its status, together with the implemented technical and algorithmic solutions to image the Earth limb and retrieve the target chemical concentrations, as well as the ongoing preparations for the calibration/validation of the mission products.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: ALTIUS Ozone Retrieval Algorithm in Bright Limb Mode Validated using OMPS LP Observations

Authors: Sotiris Sotiriadis
Affiliations: Royal Belgian Institute for Space Aeronomy (BIRA-IASB)
ALTIUS (Atmospheric Limb Tracker for the Investigation of the Upcoming Stratosphere) is an atmospheric limb mission being implemented in ESA's Earth Watch program and planned for launch in 2026. The instrument consists of three imagers: UV (250-355 nm), VIS (440-675 nm) and NIR (600-1040 nm) channels. Each channel is able to take a snapshot of the scene independently of the other two channels, at a desired wavelength and with the requested acquisition time. The agility of ALTIUS allows for series of high-vertical-resolution observations at wavelengths carefully chosen to retrieve the vertical profiles of species of interest. ALTIUS will perform measurements in different geometries to maximize global coverage: observing limb-scattered solar light on the dayside, solar occultations at the terminator, and stellar, lunar, and planetary occultations on the nightside. The primary objective of the mission is to measure high-resolution stratospheric ozone concentration profiles. This work concerns the bright limb mode and the validation of the ALTIUS L2P algorithm using Ozone Mapping and Profiler Suite Limb Profiler (OMPS LP) L1 data. OMPS LP measures solar radiation scattered from the atmospheric limb in the ultraviolet and visible spectral ranges between the surface and 80 km, and these data were used to retrieve ozone profiles from cloud tops up to 55 km. We perform end-to-end simulations to examine the robustness of the L2P limb algorithm using L1 OMPS LP data. We assume no prior knowledge of the rest of the atmosphere in our tests. We compare our retrieved ozone profiles with those from the OMPS algorithm and discuss potential disagreements and biases in the results. In our study, we generate artificial stimuli from the OMPS L1 signals, where the ozone, temperature, and pressure profiles come from OMPS L2 products.
These stimuli are then fed into our system performance simulator (SPS); an ALTIUS L1 product, based on the latest description of the instrument, is generated and passed to our L2 processor.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Feasibility of BrO and OClO Retrievals in ALTIUS' Solar Occultation Mode: Key Challenges and Solutions

Authors: Kristof Rose, Noel Baker, Dr. Antonin Berthelot, Didier Fussen, Nina Mateshvili, Didier Pieroux, Sotiris Sotiriadis, Emmanuel Dekemper
Affiliations: BIRA-IASB
The Atmospheric Limb Tracker for the Investigation of the Upcoming Stratosphere (ALTIUS) is an ozone monitoring mission under ESA's Earth Watch Programme. Scheduled for launch in 2026-2027 aboard Vega-C, ALTIUS addresses the observational gap following ENVISAT's decommissioning in 2012 and the looming discontinuation of very successful limb missions such as MLS/Aura, OSIRIS/Odin, and ACE-FTS/SciSat. ALTIUS is designed for versatile atmospheric measurements, utilizing limb scattering and solar occultation on the dayside of the orbit, and stellar, lunar, and planetary occultations on the nightside. The ALTIUS payload, mounted on a PROBA platform, consists of three imagers: UV (250–355 nm), VIS (440–675 nm), and NIR (600–1040 nm) channels. Each imager can independently capture images at desired wavelengths and acquisition times, allowing for optimal wavelength and acquisition time selection. This feature enhances vertical resolution, enabling the retrieval of vertical profiles of various chemical species, including but not limited to O₃, NO₂, and aerosols. While the mission's primary goal is to retrieve high-resolution ozone profiles, the versatile imagers also make it possible to measure secondary species such as BrO and OClO, which play critical roles in stratospheric ozone depletion. Given their low abundance in the stratosphere, only the solar and lunar occultation methods have a chance of detecting their presence. This study evaluates the feasibility of retrieving BrO and OClO using ALTIUS' solar occultation chain. Specifically, our objectives are: 1) identifying the (ALTIUS-specific) optimal measurement vector for BrO and OClO retrievals, minimizing interference from more abundant species such as O₃ and NO₂; and 2) assessing whether the signal-to-noise ratio for these species is sufficient in single measurements or if temporal averaging (e.g., daily, weekly, or monthly) is required for meaningful profiles.
By addressing these challenges, this study aims to enhance ALTIUS’ scientific contributions, broadening its scope beyond primary mission objectives and advancing our understanding of these trace species critical to ozone chemistry.
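The trade-off in objective 2 (single-measurement SNR versus temporal averaging) can be illustrated with a back-of-the-envelope calculation. Assuming independent noise between occultations, SNR grows with the square root of the number of averaged measurements; the sketch below is illustrative only and uses hypothetical numbers, not the mission's actual noise model.

```python
import math

def averages_needed(snr_single: float, snr_target: float) -> int:
    """Number of independent measurements to average so that
    sqrt(N) * snr_single >= snr_target (independent-noise assumption)."""
    return math.ceil((snr_target / snr_single) ** 2)

# e.g. a weak absorber with single-occultation SNR of 0.5 needing SNR 5
# would require averaging 100 occultations (hypothetical figures).
n = averages_needed(0.5, 5.0)
```

This is why daily, weekly, or monthly averaging windows are the natural candidates to compare for low-abundance species such as BrO and OClO.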

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: B.04.05 - POSTER - Remote sensing for disaster preparedness and response to geo-hazards, hydro-meteorological hazards and man-made disasters

Every year, millions of people worldwide are impacted by disasters. Floods, heat waves, droughts, wildfires, tropical cyclones and tornadoes cause increasingly severe damage. Civil wars and armed conflicts in various parts of the world, moreover, lead to a growing number of refugees and large changes in population dynamics. Rescue forces and aid organizations depend on up-to-date, area-wide and accurate information about hazard extent, exposed assets and damage in order to respond quickly and effectively. In recent years, it has also been possible to prepare for specific events or to monitor vulnerable regions of the world on an ongoing basis thanks to the rapidly growing number of satellites launched and their freely available data. Providing information before, during or after a disaster in a rapid, scalable and reliable way, however, remains a major challenge for the remote sensing community.
Obtaining an area-wide mapping of disaster situations is time-consuming and requires a large number of experienced interpreters, as it often relies on manual interpretation. Nowadays, the amount of remote sensing data and related suitable sensors is steadily increasing, making it impossible in practice to assess all available data visually. Therefore, increased automation of (potential) impact assessment methods using multi-modal data opens up new possibilities for effective and fast disaster response and preparedness workflows. In this session, we want to provide a platform for research groups to present their latest research activities aimed at addressing the problem of automatic, rapid, large-scale, and accurate information retrieval from remotely sensed data to support disaster preparedness and response to geo-hazards, hydro-meteorological hazards and man-made disasters/conflicts.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: First Assessment of Electronic Corner Reflectors for Dam Monitoring in Germany – A Case Study

Authors: Jonas Ziemer, Jannik Jänichen, Carolin Wicker, Katja Last, Prof. Dr. Christiane Schmullius, Clémence
Affiliations:
Regular deformation monitoring is a key task for dam operators, given its fundamental socio-economic and environmental importance. In Germany, dam monitoring programs encompass various in situ geodetic methods, such as plumb measurements and trigonometric surveys, to ensure safe operation (DWA, 2011; DIN, 2004). While plumb data provide the highest accuracy (Bettzieche, 2020), these systems are not installed on all dams. Trigonometric data offer an alternative, but due to the high costs and substantial time investment, field campaigns are typically conducted only once or twice a year. This practice limits monitoring capabilities to the detection of long-term deformations. Technical advances in differential synthetic aperture radar interferometry (DInSAR) address these challenges, providing multiple observation points, known as persistent scatterers (PS), on the dam with higher temporal resolution. Interferometric methods, such as Persistent Scatterer Interferometry (PSI), can detect deformations with millimeter-level precision. The number of detected scatterers primarily depends on the material properties of the object. In this context, the type of dam is one of the most important factors for successful PS identification. Different dam types impound German reservoirs, leading to considerable variations in the number of detected scatterers. Gravity dams made of masonry or concrete typically provide suitable conditions for PS-based monitoring. In contrast, embankment dams, often covered by vegetation, present less favorable conditions due to decorrelation effects, resulting in fewer detected PS points. To facilitate monitoring of such dams, corner reflectors can be deployed, serving as stable, high-intensity reflection scatterers that enhance the reliability and accuracy of DInSAR measurements for deformation monitoring. Numerous studies have employed passive corner reflectors for infrastructure monitoring (Kelevitz et al., 2022; Sage et al., 2022). 
However, these devices often face limitations due to their large size and weight, conspicuous appearance, reliability issues stemming from geometric variations, and material degradation or maintenance challenges over extended periods of use (Mahapatra, 2014). Consequently, smaller, lighter, and less conspicuous radar transponders, known as electronic corner reflectors (ECRs), have been developed and are particularly well-suited for publicly accessible infrastructures. Their capability to cover ascending and descending tracks with a single unit, instead of requiring a dual, opposite-facing reflector setup (Fotiou and Danezis, 2020), makes them ideal for dam monitoring applications. This study provides initial insights into the assessment of Sentinel-1 C-band PS time series obtained using electronic corner reflectors for dam monitoring in Germany. The analysis is conducted on several dams in North Rhine-Westphalia, western Germany, and spans a period of up to two years beginning in January 2023. PS data in the sensors’ line of sight are compared with in situ geodetic measurements to evaluate consistency. Preliminary results indicate promising benefits for dams where few natural scatterers are detected. References - Bettzieche, V. (2020). Satellitenüberwachung der Verformungen von Staumauern und Staudämmen. Wasserwirtschaft, 9, 48-51. https://doi.org/10.1007/s35147-020-0424-9 - Fotiou, K., & Danezis, C. (2020). An overview of electronic corner reflectors and their use in ground deformation monitoring applications. In Eighth International Conference on Remote Sensing and Geoinformation of the Environment (RSCy2020) (Vol. 11524, pp. 216-224). SPIE. https://doi.org/10.1117/12.2571886 - German Institute for Standardization (DIN) (2004). 19700-10: 2004-07, Stauanlagen-Teil 11: Talsperren; Beuth Verlag GmbH: Berlin, Germany. https://dx.doi.org/10.31030/9560336 - German Association for Water, Wastewater and Waste (DWA) (2011). Bauwerksüberwachung an Talsperren. 
DWA-Merkblätter, Nr. M 514. - Kelevitz, K., Wright, T. J., Hooper, A. J., & Selvakumaran, S. (2022). Novel corner-reflector array application in essential infrastructure monitoring. IEEE Transactions on Geoscience and Remote Sensing, 60, 1-18. https://doi.org/10.1109/tgrs.2022.3196699 - Mahapatra, P. S., Samiei-Esfahany, S., van der Marel, H., & Hanssen, R. F. (2013). On the use of transponders as coherent radar targets for SAR interferometry. IEEE Transactions on Geoscience and Remote Sensing, 52(3), 1869-1878. https://doi.org/10.1109/tgrs.2013.2255881 - Sage, E., Holley, R., Carvalho, L., Miller, M., Magnall, N., & Thomas, A. (2022). InSAR monitoring of a challenging closed mine site with corner reflectors. In Mine Closure 2022: Proceedings of the 15th International Conference on Mine Closure (pp. 779-788). Australian Centre for Geomechanics. https://doi.org/10.36487/acg_repo/2215_56

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Rapid identification of disaster hotspots by means of a geospatial information fusion from remote sensing and social media

Authors: Marc Wieland, Sebastian Schmidt, Bernd Resch, Dr. Sandro Martinis
Affiliations: German Aerospace Center (DLR), German Remote Sensing Data Center (DFD), University of Salzburg, Department of Geoinformatics (Z_GIS)
Effective management of complex disaster scenarios relies on achieving comprehensive situational awareness. Recent disasters, such as the 2024 floods in Southern Germany, have highlighted the critical need for timely geoinformation to protect communities. During the response phase, it is vital to rapidly identify the most affected areas to guide emergency actions and allocate limited resources effectively. This process is typically iterative, incorporating continuous updates as new or improved information becomes available. Initial estimates, often based on incomplete or imprecise data, play a crucial role in forming an early situational overview before detailed damage assessments are conducted. Early-stage proxies, such as population distribution and hazard zones, can support the planning of data collection efforts, enhancing situational understanding and focusing response efforts efficiently. This study introduces a method for rapidly identifying disaster hotspots, particularly in scenarios where detailed damage assessments or very high-resolution satellite imagery are not (yet) available. The approach leverages the H3 discrete global grid system and employs a log-linear probability pooling method with an unsupervised hyperparameter optimization routine. It integrates flood hazard data derived from systematically acquired high-resolution satellite imagery (Sentinel-1 and Sentinel-2), disaster-related information from X (formerly Twitter), and freely accessible geospatial data on exposed assets. The method’s effectiveness is assessed by comparing its outputs to detailed damage assessments from five real-world flood events (USA August 2017, Mozambique 2019, Mexico November 2020, Germany July 2021, Pakistan September 2022). Results demonstrate that disaster hotspots can be identified using readily available proxy data.
An extensive hyperparameter analysis revealed that while equal-weight methods offer simplicity and effectiveness, optimized pooling weights generally yield superior results. Context-specific tuning was shown to be critical for optimal performance in log-linear pooling. Notably, an unsupervised method minimizing the Kullback-Leibler divergence between input distributions and predictions outperformed supervised approaches, overcoming the limitations of training data. This method’s transparency and adaptability allow it to incorporate geospatial layers with varying resolutions and semantic relevance, making it particularly well suited for application to other hazards (e.g., landslides, wildfires, earthquakes) or exposed assets (e.g., roads, railways, critical infrastructure).
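The log-linear pooling at the heart of the method can be sketched in a few lines: each grid cell receives probabilities from several proxy layers, which are combined as a weighted geometric mean and renormalized into a hotspot distribution. This is a minimal sketch with made-up proxy values, not the authors' implementation (which runs on H3 cells with optimized weights).

```python
import math

def log_linear_pool(layers, weights):
    """Log-linear pooling: pooled(i) is proportional to prod_k layer_k(i)**w_k.
    layers: list of k probability lists, one value per grid cell.
    weights: k pooling weights (equal weights give a plain geometric mean)."""
    eps = 1e-12  # clip to avoid log(0)
    n = len(layers[0])
    log_pooled = [
        sum(w * math.log(max(layer[i], eps)) for w, layer in zip(weights, layers))
        for i in range(n)
    ]
    m = max(log_pooled)  # subtract the max for numerical stability
    unnorm = [math.exp(v - m) for v in log_pooled]
    total = sum(unnorm)
    return [v / total for v in unnorm]

# toy proxy layers over four cells: flood hazard, social-media signal, exposure
flood    = [0.9, 0.2, 0.1, 0.4]
social   = [0.8, 0.3, 0.2, 0.1]
exposure = [0.7, 0.6, 0.1, 0.2]
dist = log_linear_pool([flood, social, exposure], [1/3, 1/3, 1/3])
hotspot = max(range(len(dist)), key=dist.__getitem__)
```

With equal weights this reduces to a normalized geometric mean; the pooling weights are exactly the hyperparameters the abstract's KL-divergence routine tunes.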

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A Satellite-Based Methodology for Assessing Wildfire Defensibility of Buildings in France

Authors: Luc Mercereau, Arnaud Broquard, Simon Lamy, Aurélien de Truchis, Julien Camus, Philippe Meresse, Marjorie Sampsoni
Affiliations: Kayrros SAS, Entente Valabre
The increasing frequency and severity of wildfires demand robust tools for assessing the defensibility of structures, a measure of their capacity to be defended during a wildfire event. Traditional defensibility assessments focus on three key components: accessibility (ease of approach for firefighting units), vegetation clearing (removal of flammable materials around structures), and proximity to defending infrastructure (such as water reservoirs or firefighting stations). These assessments, however, are typically conducted through ground inspections, which are labor-intensive and challenging to scale across large regions. Defensibility is a critical concept for firefighters, allowing the identification of the most vulnerable buildings and informing strategic and tactical decisions for wildfire prevention and crisis management. Analysis conducted by wildfire management experts reveals that approximately 80% of buildings with properly cleared surroundings within a 50-meter radius are spared when a wildfire occurs nearby. This is primarily due to the disruption of fuel continuity and, most importantly, the improved conditions for firefighting operations. Conversely, more than 90% of buildings with uncleared surroundings are destroyed in the event of a major wildfire in close proximity. These findings highlight the importance of vegetation management in enhancing building defensibility and mitigating the impact of wildfires. In this study, we propose a novel methodology that integrates satellite-based vegetation clearing analysis using Sentinel-2 imagery to enhance the defensibility scoring process. Sentinel-2's high-resolution, multispectral data enables accurate quantification of vegetation clearing around structures. This satellite-derived data is combined with field insights from firefighter inspections conducted in various regions of France. 
The study highlights the correlation between satellite-based assessments and on-the-ground evaluations, demonstrating strong agreement between the two approaches. Our methodology was applied to several test areas in wildfire-prone regions of France. Results show that satellite-derived defensibility scores reliably capture critical risk factors while significantly reducing the time and resources needed for large-scale assessments. This approach also supports continuous monitoring of vegetation regrowth, allowing for updated risk assessments over time. By improving scalability and consistency in defensibility evaluations, this methodology offers a powerful tool for wildfire crisis management. It provides actionable insights for prioritizing resources, optimizing firefighting strategies, and developing preventive measures. Ultimately, this study bridges operational expertise from firefighters with advanced satellite technology, contributing to more effective and efficient wildfire preparedness and response.
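The vegetation-clearing component could, in principle, be quantified as the fraction of "cleared" pixels within a 50-metre buffer around each structure. The sketch below is a simplified stand-in with a hypothetical NDVI threshold and a plain NDVI grid rather than real Sentinel-2 data; the actual processing chain is not described in the abstract.

```python
import math

def cleared_fraction(ndvi, center, radius_px, ndvi_cleared=0.3):
    """Fraction of pixels within radius_px of `center` (row, col) whose NDVI
    falls below an assumed 'cleared vegetation' threshold. With Sentinel-2
    10 m pixels, a 50 m buffer corresponds to radius_px = 5."""
    cy, cx = center
    inside = cleared = 0
    for y, row in enumerate(ndvi):
        for x, value in enumerate(row):
            if math.hypot(y - cy, x - cx) <= radius_px:
                inside += 1
                cleared += value < ndvi_cleared
    return cleared / inside

# 11x11 grid: dense vegetation (NDVI 0.8) except a cleared disc around a house
grid = [[0.8] * 11 for _ in range(11)]
for y in range(11):
    for x in range(11):
        if math.hypot(y - 5, x - 5) <= 5:
            grid[y][x] = 0.1
frac = cleared_fraction(grid, (5, 5), 5)  # fully cleared buffer
```

A score like this could then feed the defensibility assessment alongside accessibility and proximity to firefighting infrastructure.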

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Investigating the Risk of Damage to Traditional Timber Houses Caused By Tropical Cyclones in Madagascar, a Cyclone Enawo (2017) Case Study.

Authors: Holly Moore, Dr Thomas Reynolds, Professor Alexis Comber
Affiliations: University Of Edinburgh, University of Leeds
Traditionally built ‘non-engineered’ timber houses constructed in coastal regions of Madagascar are frequently damaged or destroyed by tropical cyclone hazards such as extreme wind speeds, storm surge and waves. The risk of damage to traditional timber houses is investigated using a multi-hazard structural fragility predictive model. The model calculates the probability of component-level failure for a representative building archetype from wind, storm surge and breaking wave loading of increasing intensities comparable to varying cyclone strengths. A Monte-Carlo simulation method is utilised where 10,000 random samples of component resistance measurements collected during an in-country experimental campaign [1] are taken to incorporate uncertainties of component strength. The building components under investigation are the embedded columns that form the foundation system, and mortise-tenon (MT) joints that connect the roof to the vertical structure, failure of which will cause severe damage to the house [1]. The model is validated through hindcasting of past cyclone case studies and compared to damage maps produced from object detection of very high resolution (VHR) Pleiades optical satellite imagery and post-event reports. This research focuses on the impacts of Intense Tropical Cyclone Enawo that made landfall on the 7th of March 2017 on the north-east coast of Madagascar as an equivalent category 4 cyclone on the Saffir-Simpson scale [2]. Wind, wave and storm surge intensities are modelled for two different study sites: Antalaha, located closest to cyclone landfall; and Fénérive-Est, located further south. Severe damage to the foundation system is categorised as the embedded columns rotating > 25° caused by wind and/or breaking wave loading [1]. The probability of failure due to wind loading was derived for both study sites using equations from Eurocode 1 [3] and wind speed data from a global reanalysis product from the E.U. 
Copernicus Marine Service (CMEMS) [4]. Antalaha experienced higher 10-min maximum sustained wind speeds compared to Fénérive-Est, producing a probability of failure of 99.8% for houses with column embedment depths of 50cm, while Fénérive-Est had a probability of failure of 50.7%. The probability of embedded columns rotating > 25° from breaking wave loading was computed utilising equations from FEMA (2011) [5], ground elevations above geoid from the GLO-90 DSM dataset [6], sea surface height data and significant wave height data from CMEMS reanalysis products [7,8]. Breaking wave heights for coastal areas < 5m above sea level were calculated to reach a maximum of 1.16m in Antalaha and 0.19m in Fénérive-Est, producing probabilities of foundation failure of 98.2% in Antalaha and 0.3% in Fénérive-Est, where column embedment depths were 50cm. Model results suggest traditional timber houses located in Antalaha had a higher probability of foundation failure from wind and breaking wave loading during Cyclone Enawo in comparison to Fénérive-Est. Results correspond well to damage statistics from post-event reports [2], which reported that in the Sava region (which includes the Antalaha district) 34,894 houses were destroyed, compared to the Analanjirofo region (which includes the Fénérive-Est district) where 1,845 houses were destroyed. The model can be utilised to assess cyclone risk to Malagasy building stock and can be further adapted to assess the efficacy of simple and cost-effective building strengthening strategies. Strengthening strategies currently advised by the Croix-Rouge Malagasy (CRM) include increasing column embedment depths to 75cm and wrapping connections with metal wire. The results can then be utilised to inform construction guidelines to be disseminated to local communities to improve cyclone resilience of traditional Malagasy houses. References: [1] Taleb, R., et al., 2023. 
Fragility assessment of traditional wooden houses in Madagascar subjected to extreme wind loads. Engineering Structures 289, 116220. https://doi.org/10.1016/j.engstruct.2023.116220 [2] Probst, P., et al., 2017. Tropical Cyclone Enawo: post event report: Madagascar, March 2017. Publications Office of the European Union, Ispra (Italy). [3] British Standard, 2005. Eurocode 1: Actions on Structures - Part 1-4: General actions - Wind actions. [4] Global Ocean Hourly Reprocessed Sea Surface Wind and Stress from Scatterometer and Model. E.U. Copernicus Marine Service Information (CMEMS). Marine Data Store (MDS). https://doi.org/10.48670/moi-00185 [5] FEMA, 2011. Coastal Construction Manual. Principles and Practices of Planning, Siting, Designing, Constructing, and Maintaining Residential Buildings in Coastal Areas (Fourth Edition). FEMA P-55, Volume 1. [6] Copernicus DEM - Global and European Digital Elevation Model. https://doi.org/10.5270/ESA-c5d3d65 [7] Global Oceans Physics Reanalysis Model. E.U. Copernicus Marine Service Information (CMEMS). Marine Data Store (MDS). https://doi.org/10.48670/moi-00021 [8] Global Ocean Waves Reanalysis Dataset. E.U. Copernicus Marine Service Information (CMEMS). Marine Data Store (MDS). https://doi.org/10.48670/moi-00022
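The Monte-Carlo fragility idea above — sample component resistance from its measured variability and count how often the demand exceeds it — can be sketched generically. This is an illustration only (a lognormal resistance with an assumed mean and coefficient of variation), not the authors' calibrated model, which draws from in-country resistance measurements [1].

```python
import math
import random

def failure_probability(demand, resist_mean, resist_cov, n=10_000, seed=42):
    """Monte-Carlo estimate of P(resistance < demand) for one component.
    Resistance is drawn from a lognormal with the given mean and coefficient
    of variation (an illustrative stand-in for measured samples)."""
    sigma = math.sqrt(math.log(1.0 + resist_cov ** 2))
    mu = math.log(resist_mean) - 0.5 * sigma ** 2
    rng = random.Random(seed)
    failures = sum(rng.lognormvariate(mu, sigma) < demand for _ in range(n))
    return failures / n

# fragility rises with load intensity: low demand vs. demand above the mean
p_low  = failure_probability(demand=0.2, resist_mean=1.0, resist_cov=0.3)
p_high = failure_probability(demand=2.0, resist_mean=1.0, resist_cov=0.3)
```

Repeating this over a range of wind or wave intensities traces out the fragility curve for a component such as an embedded column or a mortise-tenon joint.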

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Optimizing Dam Monitoring: Validation and Optimization of the CR-Index for PSInSAR and Electronic Corner Reflector (ECR) Integration

Authors: Jannik Jänichen, Jonas Ziemer, Daniel Klöpper, Carolin Wicker, Sebastian Weltmann, Marco Wolsza, Nora Fischer, Katja Last, Prof. Dr. Christiane Schmullius, Dr.-Ing. Clémence Dubois
Affiliations: Friedrich Schiller University Jena, Ruhrverband, Department for Water Economy, Institute of Data Science, German Aerospace Center
Monitoring the structural integrity of dams is crucial to ensure safety and mitigate risks for surrounding areas. Traditional methods relying on geodetic measurements, such as plumb measurements, trigonometry, or GNSS technology, are often costly and limited by accessibility. Satellite-based remote sensing techniques, particularly Persistent Scatterer Interferometry (PSI), present a promising alternative, offering cost-effective and precise deformation measurements over large areas. However, its applicability can be constrained by geometric and topographic factors. To address these limitations, the CR-Index was developed, combining geometric data with land use information to assess the suitability of PSI for dam monitoring. This study applies the CR-Index to dams in North Rhine-Westphalia, western Germany, focusing on the Möhne Dam and Bigge Dam. These exhibit distinct structural properties and environmental conditions. The Möhne Dam, a gravity dam made of masonry, and the Bigge Dam, a dam with vegetation cover, served as ideal test cases to assess the practical application of the CR-Index for monitoring dams with varying surface characteristics and orientations. The analysis utilizes Sentinel-1 data, including local incidence angles and metadata, alongside high-resolution Digital Elevation Models (DEMs) and land use data provided by the Geodata Infrastructure of North Rhine-Westphalia. The CR-Index, an evolution of the earlier geometry-based R-Index, incorporates a land use component to provide a more comprehensive evaluation of dam observability via PSI. This combined approach allows for a more tailored assessment, considering both geometric properties and land cover types, which are critical in determining PS point density. Results demonstrate that the CR-Index accurately identifies the suitability of different dam sections for PSInSAR analysis. 
The wall of the Möhne Dam consistently showed high CR-Index values, ensuring excellent observability in both ascending and descending satellite tracks. Conversely, the Bigge Dam exhibited more variability, particularly between its water-facing and air-facing surfaces. Areas with dense vegetation showed lower CR-Index values, while asphalted or exposed surfaces were more conducive to PSInSAR observation. The dams' topography and orientation significantly influenced CR-Index outcomes, with descending satellite tracks generally providing better results due to favorable incidence angles. These findings highlight the potential to enhance observability in challenging areas through complementary devices, such as electronic corner reflectors (ECRs), to further improve the density and quality of PS points. Validation of the CR-Index was conducted using PS data provided by the German Ground Motion Service (Bodenbewegungsdienst Deutschland, BBD). By comparing BBD point densities to CR-Index values, a strong correlation was observed: Areas with higher CR-Index values corresponded with higher PS density, thus validating the CR-Index's predictive accuracy. For both ascending and descending tracks, PS point density increased notably at CR-Index values between 60 and 90. In the descending direction, PS point density reached up to 25 points per hectare, demonstrating the effectiveness of the CR-Index in predicting PS-rich areas and supporting its utility in developing optimal monitoring strategies. This study underscores the importance of the CR-Index as a tool for preliminary site selection and observation strategy development in PSInSAR analyses. By integrating geometric and land use parameters, the CR-Index provides a robust basis for tailoring monitoring approaches to individual dams, allowing for more efficient and effective use of PSInSAR technology.
Future work will extend the validation to a broader range of dam types and environmental conditions, refining the CR-Index for more generalized application across various types of infrastructure. This research provides valuable insights into dam monitoring using remote sensing technologies and establishes a foundation for advancing the monitoring of critical infrastructure, thereby enhancing both the safety and sustainability of dam management practices. References BGR (2019): BodenBewegungsdienst Deutschland – BBD, https://www.bgr.bund.de/DE/Themen/GG_Fernerkundung/BodenBewegungsdienst_Deutschland/bodenbewegungsdienst_deutschland_node.thml. (Last access: 11/2024). BGR (2022): Nutzungshinweise BBD Sentinel-1 PSI, https://www.bgr.bund.de/DE/Themen/GG_Fernerkundung/Downloads/Nutzungshinweise-BBD_PSI-Daten.pdf?__blob=publicationFile&v=2. (Last access: 11/2024). Cigna, F.; Bateson, L.B.; Jordan, C.J.; Dashwood, C. (2014). Simulating SAR geometric distortions and predicting Persistent Scatterer densities for ERS-1/2 and ENVISAT C-band SAR and InSAR applications. Nationwide feasibility assessment to monitor the landmass of Great Britain with SAR imagery. Remote Sensing of Environment, 152, 441-446. Ferretti, A.; Prati, C.; Rocca, F. (2001): Permanent Scatterers in SAR interferometry. IEEE Transactions on Geoscience and Remote Sensing 39, 1, 8-20. Notti, D.; Meisina, C.; Zucca, F.; Colombo, A. (2011). Models to Predict Persistent Scatterers Data Distribution and Their Capacity to Register Movement Along the Slope. Fringe 2011 Workshop, 19-23. Available online: https://earth.esa.int/eogateway/documents/20142/37627/Models_predict_persistent_scatterers_data_distribution.pdf
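The core idea — a geometric observability score down-weighted by land cover — can be caricatured in a few lines. The factors and the combination rule below are hypothetical placeholders, not the published CR-Index formula; they only illustrate why a masonry gravity-dam wall and a vegetated embankment with identical geometry score very differently.

```python
# hypothetical land-cover factors: coherent targets (masonry, asphalt) keep
# most of their geometric score, vegetated surfaces are strongly down-weighted
LAND_COVER_FACTOR = {"masonry": 1.0, "asphalt": 0.9, "grass": 0.4, "forest": 0.1}

def cr_index_like(geometric_score: float, land_cover: str) -> float:
    """Combine a 0-100 geometric observability score (R-Index-style, derived
    from slope, aspect and local incidence angle) with a land-cover factor."""
    return geometric_score * LAND_COVER_FACTOR[land_cover]

# same favorable geometry, different surfaces
masonry_wall = cr_index_like(85.0, "masonry")  # stays high
embankment   = cr_index_like(85.0, "forest")   # drops sharply
```

In this toy form, only the masonry wall would clear the 60-90 range where the study observed PS density rising, matching the intuition that vegetated embankments need ECRs.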

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: High-Resolution Insights into Extreme Drought Impacts on Vegetation using Sentinel-2

Authors: Claire Robin, Vitus Benson, Dr. Marc Rußwurm, Dr Nuno Carvalhais, Prof Markus Reichstein
Affiliations: Max Planck Institute For Biogeochemistry, Wageningen University
Understanding the ecological impacts of drought is essential for safeguarding ecosystems and mitigating climate change. Leveraging Sentinel-2’s unprecedented 20-meter spatial resolution, we introduce an innovative approach to quantify the effects of climate extremes on vegetation with a new level of detail. High-resolution mapping of the impact of extreme events, such as droughts and heatwaves, can reveal how vegetation heterogeneity and local variability modulate their effects, providing crucial insights into their impact on vegetation health and the carbon cycle. However, identifying extremes requires decades of data, posing a challenge for current high-resolution satellite missions. Sentinel-2 offers only seven years of data, while Landsat's low temporal resolution limits its suitability for studying vegetation dynamics. Although these datasets deliver unprecedented spatial detail, their limited historical coverage presents significant challenges for analyzing extreme events. To address these data limitations, we adopt a sampling strategy tailored for Sentinel-2 data to extend the regional extremes method [1]. This method leverages ecosystem similarities to determine extreme thresholds across ecoregions, providing a robust alternative to traditional location-specific approaches. Doing so avoids the inherent biases of local thresholds that are computed independently for every location. Location-specific threshold approaches often yield a uniform distribution of extremes, especially problematic with short time series, where many locations may not have experienced abnormal climate impacts or vegetation responses. By utilizing larger sample sizes within eco-regions—delineated through principal component analysis (PCA) of the mean seasonal cycle—our method ensures more reliable threshold estimation and enables the mapping of extremes at Sentinel-2 spatial resolution. 
We demonstrate the computational efficiency of our method using the DeepExtremesCubes dataset[2], which includes samples from both within and outside climatic extremes. Validation against low-resolution MODIS data in areas with uniform landscapes—where MODIS and Sentinel-2 data are comparable—demonstrates that our method provides more reliable quantile threshold estimates than traditional location-specific approaches, supporting its effectiveness for high-resolution assessments of vegetation impacts. By leveraging Sentinel-2’s 20-meter resolution, we reveal the spatial heterogeneity of vegetation responses to climate extremes, overcoming the limitations of spatial averaging inherent in MODIS data. This finer resolution uncovers localized variations in vegetation dynamics that were previously masked, offering unprecedented insights into ecological extremes. This approach significantly advances our ability to analyze ecosystem dynamics under climate extremes, unlocking new opportunities for fine-scale ecological monitoring. [1] Mahecha, M. D., Gans, F., Sippel, S., Donges, J. F., Kaminski, T., Metzger, S., ... & Zscheischler, J. (2017). Detecting impacts of extreme events with ecological in situ monitoring networks. Biogeosciences, 14(18), 4255-4277. [2] Ji, C., Fincke, T., Benson, V., Camps-Valls, G., Fernandez-Torres, M. A., Gans, F., ... & Mahecha, M. D. (2024). DeepExtremeCubes: Integrating Earth system spatio-temporal data for impact assessment of climate extremes. arXiv preprint arXiv:2406.18179.
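The contrast between per-location and regional thresholds can be sketched simply: pool all values from locations in the same ecoregion and take one common lower quantile. This toy version (plain lists, a 5% quantile picked for illustration) mirrors only the pooling step; the actual method delineates ecoregions via PCA of the mean seasonal cycle.

```python
def regional_thresholds(series_by_location, region_of, q=0.05):
    """One extreme threshold per ecoregion: the q-quantile of all values
    pooled across that region's locations, instead of a noisy per-location
    quantile estimated from a short time series."""
    pooled = {}
    for loc, series in series_by_location.items():
        pooled.setdefault(region_of[loc], []).extend(series)
    thresholds = {}
    for region, values in pooled.items():
        values.sort()
        thresholds[region] = values[int(q * len(values))]
    return thresholds

# two locations sharing one ecoregion: pooling doubles the sample size
data = {"a": list(range(100)), "b": list(range(100))}
regions = {"a": "R1", "b": "R1"}
thr = regional_thresholds(data, regions)  # 5% quantile of 200 pooled values
```

With only seven years of Sentinel-2 data, this pooling is what makes the tail quantile estimable at all: the region-wide sample substitutes for the missing temporal depth at each pixel.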

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Towards a Resilient Future: CENTAUR’s Integrated Approach to Climate-Security and Early-Warning Systems

Authors: Valerio Botteghelli, Marco Corsi, Simone Tilia, Adriano Benedetti Michelangeli
Affiliations: e-GEOS, https://centaur-horizon.eu/
Climate change is increasingly acknowledged as a threat multiplier, exacerbating existing vulnerabilities and amplifying risks to human security. Environmental hazards such as floods, droughts, and storms not only bring disastrous impacts on ecosystems but also contribute to political instability, economic disruptions, and social unrest. Addressing these compounded challenges is critical for global peace and security, and requires a comprehensive approach integrating climate science, geospatial intelligence, and early-warning systems. Launched in 2022, the CENTAUR project is an initiative funded by the European Commission with the aim of reducing the increasing risks posed by climate extremes by enhancing situational awareness and preparedness for climate-related security crises. Through the development of innovative indicators and data-driven tools, CENTAUR aims to improve anticipatory capabilities, support informed decision-making, and enhance resilience to climate-induced security threats. The project focuses on two key domains: urban flood risks and water/food insecurity, both of which are critical drivers of conflict, displacement, and humanitarian crises. CENTAUR employs a multidisciplinary approach, combining satellite-based Earth observation (EO) data with socio-economic indicators to assess the impact of climate extremes on vulnerable populations and critical infrastructure. Within the urban flood domain, the project has developed 11 innovative indicators that enhance flood risk assessment by integrating high-resolution meteorological, hydrological, and socio-economic data. These indicators provide early warnings of impending flood events, evaluate their potential impacts on urban areas, and support long-term recovery and resilience forecasting.
Moreover, the project includes the integration of geo-referenced media data, social media analysis, and high-end flood modeling techniques such as InSAR and high-resolution DTMs to deliver accurate flood extent predictions and damage assessments. In the domain of water and food insecurity, CENTAUR has elaborated a set of 22 indicators that measure the interconnectedness between resource scarcity and political instability. These indicators model the potential for conflict, displacement, and instability, providing early warnings of emerging crises linked to water and food insecurity. By combining meteorological and agricultural data with socio-economic vulnerabilities, CENTAUR’s tools allow decision-makers to identify high-risk areas, assess the likelihood of climate-induced political unrest, and take preventive actions. The CENTAUR initiative follows a user-driven approach, with active participation from a wide array of stakeholders, including United Nations agencies, NGOs, and EU civil protection authorities. Feedback from the end-users, collected through targeted workshops, is being integrated into the project to ensure the relevance and effectiveness of the services being developed. CENTAUR is testing its indicators both on historical “cold cases” (well-documented past crisis events) and on real-time “hot cases”, enabling continuous monitoring of crisis situations and the fine-tuning of early-warning systems. Eight use cases have been selected for testing the CENTAUR platform and its early-warning services, covering a diverse range of geographic regions and thematic focuses. The use cases address critical problems at the nexus of urban flood risk, food and water insecurity, political stability, and humanitarian crises. CENTAUR contributes to enhancing resilience and stability in regions vulnerable to crises induced by climate change through its integration of climate and security data.
The project provides a pre-operational platform for monitoring, forecasting, and responding to climate-security risks, with the intent of building a robust, scalable framework for future applications in crisis management and humanitarian intervention. By linking scientific research, operational tools, and policy-driven solutions, CENTAUR represents a significant milestone toward understanding the complex nexus of climate change and security. Its innovative approach provides an all-encompassing, multi-dimensional understanding of climate-related threats, thus offering timely and data-driven responses to protect vulnerable populations and critical infrastructures from the changing climate.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: New Developments in the Monitoring of Spruce Bark Beetle Infestations with Copernicus Sentinel Data

Authors: Simon König, Dr. Michael Förster, Prof. Dr. Paul Magdon, Dr. Frank Thonfeld, Prof. Dr. Marco Heurich
Affiliations: Faculty of Environment and Natural Resources, University Of Freiburg, Bavarian Forest National Park, German Space Agency of the German Aerospace Center, Geoinformation in Environmental Planning Lab, Technical University of Berlin, Faculty of Resource Management, University of Applied Sciences and Arts (HAWK) Hildesheim/Holzminden/Göttingen, German Remote Sensing Data Center of the German Aerospace Center, Department of Forestry and Wildlife Management, Inland Norway University of Applied Sciences
Under the influence of climate change, significant changes in forest disturbance regimes have been observed globally, with disturbances becoming more frequent and severe. This includes abiotic disturbances like windthrow and forest fires, as well as biotic disturbances such as pathogens and insects. Bark beetle infestations, in particular, have caused large-scale forest die-offs worldwide. In Central Europe, major drought events have triggered mass outbreaks of the European spruce bark beetle (Ips typographus), leading to unprecedented levels of Norway spruce mortality, especially in Germany, Austria, Czechia and Slovakia. Early detection of infestations is crucial for management actions, as infested trees have to be removed within a few weeks, before a new generation of beetles emerges. Earth observation offers great potential for monitoring bark beetle infestations efficiently. Historically, Landsat has been the most commonly used sensor for this purpose. Copernicus data, however, especially from Sentinel-2, have emerged as a very valuable asset, combining suitable spatial, temporal, and spectral resolution to effectively detect and monitor infestations. Various studies have shown that Sentinel-2 data can capture infestation dynamics well and detect infested areas reliably. However, despite some progress, the early detection of infested trees (early enough to remove them before the next generation of beetles has developed) remains a challenge. Multiple studies have also used Sentinel-1, whose free and open SAR data offer unmatched spatial resolution among openly available SAR missions, although detecting bark beetle infestations with Sentinel-1 is more challenging due to its C-band imagery. In this presentation, we discuss multiple innovations in the monitoring of bark beetle infestations with Copernicus data. All these innovations were tested in the Bavarian Forest National Park, a protected forest area in Southeastern Germany. 
The park administration has been collecting spatially explicit data on bark beetle infestations since 1988, making the park Germany’s best-covered forest remote sensing site. We built a consistent data cube of all available Sentinel-1, Sentinel-2, and Landsat data for the national park using the Framework for Operational Radiometric Correction for Environmental monitoring (FORCE) software, on which we based our analyses. First, we tested whether combining the time series of all three sensors benefits the detection of infestations. Based on the reference data, we computed infestation probabilities from the time series using Bayesian conditional probabilities as well as random forests. Next, we tested the spatial and temporal accuracy of five different sensor configurations in the detection of infestations:
• Landsat only,
• Sentinel-2 only,
• Sentinel-1 only,
• Landsat/Sentinel-2 combined,
• all sensors combined.
Our results show that a combination of sensors yields no benefits for the detectability of infestations. Sentinel-2 only achieved the highest spatial accuracy (0.93) as well as the best detection timeliness. Landsat only and Landsat/Sentinel-2 combined achieved good results as well, but did not improve over Sentinel-2 only. Both configurations involving Sentinel-1 achieved inferior results. Second, since Sentinel-2 proved to be the most suitable sensor in this comparison, we tested further improvements in the detection of infestations based on this sensor. Multiple studies have shown the particular suitability of two regions of the electromagnetic spectrum: the red edge (RE) and shortwave infrared (SWIR) ranges. Yet, a vegetation index that combines imagery from these two spectral ranges had not been proposed. We tested all possible combinations of Sentinel-2’s three RE and two SWIR bands via a normalized difference (NDVI-like) index. 
The combination of its second RE and first SWIR bands emerged as the most suitable index, which we call NDRESW. We used a non-parametric, self-calibrated detection procedure and compared the NDRESW to three vegetation indices commonly applied in the detection of infestations. It showed the highest sensitivity to infestations (despite relatively high commission errors) as well as the timeliest detections. While only few infestations could be detected in the earliest infestation stages, the NDRESW delivered reliable, early detections (> 50 % of detections occurred within the first three months after the infestation onset). Lastly, based on these results, we assessed which environmental factors affect the detectability of infestations. For this task, we extended the data cube described above with multiple additional datasets, including ALS-derived forest structure metrics, meteorological data, and spatially explicit metrics of infestation intensity. Our results indicate that variables connected to the interplay between bark beetles and spruce trees in the surroundings of an area are the most important for detectability. This is because they relate to the size of the infested patch, and the larger a patch is, the lower the probability of mixed pixels. A reliable, consistent and ideally early detection of bark beetle infestations is important for forest managers, scientists, conservationists and public administrations. In the context of the ever-increasing use of Copernicus data in the monitoring of temperate forests, our results show that Sentinel-2 is very well able to capture the dynamics of bark beetle infestations. Yet, further research is needed to extend our results to larger areas, and further improvements should be explored. Future sensors, e.g. ESA’s CHIME mission, will further improve the monitoring of bark beetle infestations from space, especially their early detection, which remains a key challenge.
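The NDRESW described above is a standard normalized-difference ratio. A minimal sketch follows, assuming the usual (RE − SWIR)/(RE + SWIR) orientation and Sentinel-2 band names B6 (second red-edge) and B11 (first SWIR); the abstract does not state the sign convention, so this is illustrative only:

```python
def ndresw(re2, swir1):
    """Normalized-difference index of Sentinel-2's second red-edge band (B6)
    and first SWIR band (B11). The (RE - SWIR) orientation is an assumption;
    inputs are surface reflectances in [0, 1]."""
    return (re2 - swir1) / (re2 + swir1)

# Healthy canopy (high RE reflectance) vs. stressed canopy (higher SWIR):
healthy = ndresw(0.30, 0.10)   # positive
stressed = ndresw(0.18, 0.16)  # closer to zero
```

With this orientation, a drop in the index over time would indicate increasing canopy stress, which is the kind of temporal signal the self-calibrated detection procedure would threshold.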
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Integrating EO and OSINT for Enhanced Conflict Analysis in Fragile Settings in Sub-Saharan Africa

Authors: Dr. Theophilos Valsamidis, Dr. George Benekos, Mr Alexandros Voukenas, Mr Konstantinos Pilaftsis, Alix Leboulanger
Affiliations: Planetek Hellas, Janes
The joint utilization of Earth Observation (EO) and Open-Source Intelligence (OSINT) offers an innovative approach to understanding fragility in conflict-prone regions. Under contracts awarded by ESA in the framework of its EOLAW and EO4SECURITY activities, innovative methodologies were developed and applied to assess the security state of fragility in Sub-Saharan Africa, focusing on Southern Somalia and Northern Mozambique. This research explores the integration of EO-derived data, GIS analysis, and OSINT to provide actionable insights for international stakeholders. The OSINT methods applied encompass the identification of conflict actors, the classification of conflict events, and the retrieval of critical information such as photographic evidence and testimonials. Concurrently, EO/GIS techniques enable precise geolocation of conflict events using change detection, assessment of rural environments in terms of accessibility and actors’ movement patterns using multi-criteria analysis, and appropriate visualization techniques to reveal spatial concentrations of conflict activity. Key findings demonstrate the effectiveness of a workflow in which EO/GIS analysis is guided by the OSINT findings. This approach enhances the understanding of conflict landscapes, specifically: (a) the profiles and interactions of involved actors, (b) areas of intense activity, and (c) environmental determinants of conflicts. Depending on data availability, the analyses provided either high-level regional insights or detailed geospatial intelligence at localized scales. These results highlight the potential for innovative EO and OSINT integration to support intergovernmental organizations, such as Interpol, in addressing fragility. The outcomes underline the value of this approach in improving risk assessments at regional, national, and subnational levels, paving the way for broader adoption of EO in fragile settings analysis.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Change detection using SAR tomography

Authors: Yuki Yamaguchi
Affiliations: NEC Corporation
In this study, we propose a change detection method that generates background images suited to each observation using Bayesian Synthetic Aperture Radar (SAR) tomography and compares them with observed images probabilistically. In change detection using SAR images, it is difficult to define an image representing the unchanged state, i.e., a “background image”, due to orbital variation between observations and the effect of random noise. The proposed method can generate background images adjusted to the orbital position of each observation using 3-dimensional information of the unchanged state reconstructed by Bayesian SAR tomography. The Bayesian approach allows us to evaluate the effect of random noise on each observation probabilistically. The method makes it possible to assess post-disaster damage rapidly and accurately: by reconstructing 3-dimensional information of the unchanged state from time-series SAR images acquired before a disaster and then applying the method to an image observed immediately afterwards, rapid and accurate change detection is possible. SAR is a sensing technology that is expected to be used for various applications. In SAR, microwaves are emitted, and amplitude and phase signals are obtained at high resolution (e.g., 0.5–3 m). SAR can observe a large area at once, day and night, and in all weather conditions. Taking advantage of these strengths, SAR has been widely used for terrestrial monitoring, such as post-disaster damage assessment and urban development. SAR images are complex-valued, and sensitive change detection can be realized by comparing them using phase information as well as amplitude information. Coherence change detection (CCD) is a typical method for comparing complex images. CCD detects changes between a pair of SAR images observing the same area at different times by evaluating the similarity of amplitude and phase in local regions. 
CCD can detect subtler changes than amplitude-based change detection techniques because it considers the similarity of phase as well as amplitude. However, it is intrinsically difficult to generate a “background image” representing the unchanged state that is suitable for such analyses, including CCD. This is because the orbit of a SAR satellite varies slightly for each observation, and consequently the obtained SAR images look different even if no change has occurred. While spectral filtering can be applied to mitigate spectral and Doppler decorrelation, that is not the case for volumetric scattering occurring in urban as well as vegetated areas. In addition, in the low signal-to-noise regions of a SAR image, the effect of random noise becomes significant, making SAR images differ between observations. Therefore, when comparing SAR images observed at different times, as in CCD, it is impossible to distinguish whether the detected changes actually occurred or are false positives caused by orbital variation or random noise. It is thus necessary to generate background images with orbit positions adjusted for each observation and to compare them with the observed images while accounting for the effect of random noise. This study proposes a change detection method robust to orbit variation using Bayesian SAR tomography: it generates a suitable background image for each observation and compares it with the observed SAR image probabilistically to detect change. In the proposed method, we first reconstruct 3-dimensional information of the unchanged state by Bayesian SAR tomography. 
SAR tomography reconstructs the complex amplitude distribution of scatterers along the elevation direction using multi-baseline, multi-temporal SAR images. Here, we adopt SAR tomography based on Sparse Bayesian Learning to reconstruct the 3-dimensional information together with its posterior distribution. Next, the proposed method calculates a background image adjusted to the orbit position of the i-th image, for which change is to be detected, as a predictive distribution based on the results of Bayesian SAR tomography. This study assumes the predictive distribution to be complex circular Gaussian and calculates its mean y(x,i) and standard deviation σ̂(x,i) at each azimuth-range position x:

y(x,i) = α_bg(x)^T r(x,i)
σ̂(x,i)^2 = σ(x)^2 + r(x,i)^H Σ(x) r(x,i)

where α_bg(x) is the complex amplitude distribution in the elevation direction estimated by Bayesian SAR tomography, r(x,i) is the i-th row of the steering matrix, σ(x) is the noise accuracy parameter, and Σ(x) is the variance-covariance matrix of the posterior distribution of the 3-dimensional information. Because r(x,i) reflects the orbit position of the i-th image, y(x,i) represents the inferred signal adjusted to that orbit position. Therefore, the proposed method can suppress false positives originating from orbital variation and random noise by comparing the observed signal g(x,i) with the inferred signal y(x,i) while considering the standard deviation σ̂(x,i). The proposed method detects changes by evaluating the probability of the observed signal under the predictive distribution, i.e., p(g(x,i)|y(x,i),σ̂(x,i)). This study evaluates the exponential part of p(g(x,i)|y(x,i),σ̂(x,i)): the smaller its value, the larger the discrepancy between the observed and inferred signals, indicating that a change has occurred. 
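The two equations above can be transcribed almost directly into code. The following is an illustrative re-implementation of the predictive mean/variance and of the (negative) Gaussian exponent used as a change score, for a single pixel x; it is a sketch of the formulas, not the authors' implementation:

```python
def predictive(alpha_bg, r_i, sigma2, Sigma):
    """Predictive mean and variance for the i-th acquisition, per the
    abstract's equations: y = alpha_bg^T r,  s^2 = sigma^2 + r^H Sigma r.
    alpha_bg, r_i: complex lists; Sigma: posterior covariance (nested lists)."""
    y = sum(a * r for a, r in zip(alpha_bg, r_i))
    # quadratic form r^H Sigma r
    quad = sum(
        r_i[m].conjugate() * Sigma[m][n] * r_i[n]
        for m in range(len(r_i))
        for n in range(len(r_i))
    )
    return y, sigma2 + quad.real

def change_score(g, y, s2):
    """Exponent of the complex circular Gaussian likelihood,
    -|g - y|^2 / s^2; strongly negative values indicate change."""
    return -abs(g - y) ** 2 / s2
```

A pixel whose observed signal g matches the inferred background y gets a score near zero; a newly appeared vehicle or aircraft yields a large discrepancy and hence a strongly negative score.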
In this study, we apply the proposed method to real SAR data to evaluate its performance. We use 55 TerraSAR-X images of Haneda Airport acquired from June 8, 2011, to February 11, 2014, focusing on the airport's maintenance area, which sees heavy vehicle and aircraft traffic. First, we compare the proposed method with conventional CCD, in which coherence values are calculated between adjacent time-series SAR image pairs using a 3×3 boxcar filter. The results clearly show that the proposed method successfully detects the appearance of vehicles and aircraft even in areas where conventional CCD struggles due to the spurious decrease of coherence values caused by differences between images. Next, we compare the proposed method with a change detection method that uses only amplitude information, which detects changes by evaluating the distance between the observed amplitude and the temporal average amplitude. The results show that the proposed method suppresses false positives better than the amplitude-only method.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: An Operational Emergency Flood Mapping System in Scotland Using SAR Data

Authors: Dr Morgan Simpson, Dr Cristian Silva-Perez, Dr Armando Marino, Professor Peter Hunter
Affiliations: University Of Stirling, Keen AI
Increases in flood events have been attributed to land-use change, climate change and changes in watershed management. Extreme rainfall events are expected to increase in both intensity and frequency with climate change. Flooding damages infrastructure such as roads, railways, buildings and agricultural land, and endangers ecosystems and human lives, both through the event itself and through the transfer of biological and industrial waste. One-third of annual natural disasters are flood events, and more than half of all victims of natural disasters are flood-related. A focus on flood response is therefore needed for the future management of these events. Scotland is prone to high rainfall and flood events, with annual flood damage of approximately £252 million between 2016 and 2021. The Scottish Environment Protection Agency (SEPA) have proposed a mixture of flood risk mitigation strategies, including awareness raising, flood forecasting, maintenance, planning policies and emergency plans/response (SEPA, 2015). Flood detection using Synthetic Aperture Radar (SAR) remote sensing has received substantial attention in the scientific literature, and more recently machine learning and artificial intelligence have been applied to aid the classification and mapping of flood events. Here, we focus on SEPA's new Satellite Emergency Mapping System (SEMS), which uses state-of-the-art satellite imaging technology to deliver real-time, high-resolution data and insights that enhance decision-making capabilities and enable faster, more efficient response efforts when disaster strikes, offering a significant boost to Scotland's resilience against disasters. SEMS forms part of the International Charter Space and Major Disasters, a global network of over 270 satellites from 17 Charter members around the world, working to support disaster relief. 
SEPA are the only organisation in Scotland able to activate the Charter and give emergency responders access to critical satellite imagery (here, focussing on Sentinel-1, TerraSAR-X and RADARSAT-2 data). SEMS operates 365 days a year with an on-call provision available 24 hours a day. Its use is primarily focussed on Central Scotland, with specific focus on the Forth Catchment. Approximately 25% of Scotland's population is situated within the catchment, which spans an area of 3,000 km². The catchment land use is dominated by rural usage, notably managed forests and farmland; however, considerable urbanisation is found in Stirling and the surrounding villages. While the catchment lies primarily within the central lowlands, hill ranges such as the Ochil Hills and Lomond Hills surround the Forth Valley. Although SEMS launched in September 2024, here we show flood maps created from two previous Charter activations, in November 2022 and October 2023. The flood maps are created using a deep-learning U-Net convolutional neural network (CNN) applied to Sentinel-1 imagery, as well as rapid thresholding techniques. We focus on the system itself and the flood maps generated during this period.
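As one illustration of the rapid thresholding mentioned above, open water is often mapped by marking Sentinel-1 backscatter below a cutoff, since calm water appears dark in SAR imagery. The sketch below uses a fixed, purely illustrative threshold of -15 dB; it is not SEPA's operational setting:

```python
def flood_mask(backscatter_db, threshold_db=-15.0):
    """Classify pixels as water (1) when Sentinel-1 backscatter (in dB)
    falls below a threshold, else land (0). The -15 dB value is an
    illustrative assumption; operational thresholds are scene-dependent."""
    return [[1 if px < threshold_db else 0 for px in row]
            for row in backscatter_db]

# tiny 2x2 scene: dark (flooded) pixels in the left column
scene = [[-18.0, -10.0],
         [-16.2, -3.5]]
mask = flood_mask(scene)
```

In practice the threshold would be calibrated per scene (e.g. from the image histogram) rather than fixed, which is one reason the SEMS workflow also uses a trained U-Net rather than thresholding alone.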
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: The use of Multi-temporal Interferometry to monitor pre-failure ground displacement

Authors: Mariana Ormeche, Ana Paula Falcão, Rui Carrilho Gomes
Affiliations: Instituto Superior Técnico, Universidade de Lisboa
The term landslide covers any kind of slope movement of rock, earth or debris masses. Between 2000 and 2024, landslides affected approximately 6.7 million people, caused over 18,000 deaths and resulted in 6.9 billion dollars in damage. Current engineering practice for landslide risk assessment aims to determine a slope safety factor using the ratio between resisting and destabilizing stresses, or indirect stability indicators such as rainfall thresholds, which may be prone to false alarms. The stability conditions of a slope can, however, be directly linked to its kinematics (e.g. displacement and velocity). In fact, through its lifetime, a landslide goes through a sequence of three deformation stages: the initial deformation stage, the uniform deformation stage and the accelerating deformation stage. The accelerating deformation stage is typically the definition of an active landslide, characterized by exponentially increasing displacement and velocity curves. The outcomes of this stage are either collapse or the reaching of a new equilibrium. With Multi-temporal Interferometry (MTI), millimetre-scale ground displacement time series over large areas can be retrieved. Thus, MTI using spaceborne SAR over landslide-prone areas can prove to be a cost-effective technique for landslide risk assessment. To test the advantages and limitations of MTI for monitoring pre-failure landslide displacement, a corner reflector (CR) was used to simulate the displacement of the accelerating deformation stage of a slope. Over the course of two months, exponentially increasing displacements were applied to the CR to test the ability of Sentinel-1 to capture this deformation pattern, and thereby its suitability for landslide risk assessment.
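The accelerating-stage displacement applied to the corner reflector can be illustrated with a simple exponential model sampled at Sentinel-1's 12-day repeat cycle. The amplitude and time-constant parameters below are hypothetical, not those of the experiment, and the phase conversion assumes the nominal C-band wavelength:

```python
import math

WAVELENGTH_M = 0.0555  # Sentinel-1 C-band wavelength, approximate

def accelerating_displacement(t_days, d0_mm=1.0, tau_days=20.0):
    """Exponential displacement curve typical of the accelerating stage;
    d0_mm and tau_days are illustrative parameters."""
    return d0_mm * math.exp(t_days / tau_days)

def los_phase_rad(disp_mm):
    """Two-way line-of-sight interferometric phase for a displacement:
    phi = 4*pi*d / lambda."""
    return 4 * math.pi * (disp_mm / 1000.0) / WAVELENGTH_M

# sample the two-month experiment at Sentinel-1's 12-day revisit
series = [accelerating_displacement(t) for t in range(0, 61, 12)]
```

A practical limit this makes visible: once the displacement between consecutive acquisitions exceeds a quarter wavelength (~14 mm), the interferometric phase becomes ambiguous, which bounds how late into the accelerating stage MTI can track the reflector.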
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Temporal disaggregation of high-resolution building footprint data using Sentinel 2

Authors: Manuel Huber, Prof. Dr. Christian Geiß, Prof. Dr. Hannes
Affiliations: German Aerospace Center (DLR)
Identifying building footprints is essential for understanding urbanization and its impact on the environment. These footprints are used, for example, to assess urban structures and surface sealing, and as a proxy for population assessments. Furthermore, they are related to urban heat island effects, air quality, and green space distribution, helping shape urban planning strategies. Understanding the development and environmental implications of urbanization, especially in fast-expanding regions, demands timely and accurate building data. This is also critical for evaluating, for example, risk from natural disasters such as floods and earthquakes. Traditional building footprint extraction methods rely heavily on high-resolution imagery and complex machine learning models. Google's recent multitemporal dataset covering the Global South is a substantial effort in this domain but has notable limitations. It depends on costly and proprietary high-resolution imagery and uses image stacks from Sentinel-2, which hinders fast-response applications and performs poorly in cloud-covered regions. Furthermore, problems related to domain adaptation, for example when aiming for global generalization, can lead to inaccuracies due to the diverse building characteristics across different regions. Our approach addresses these challenges by developing a fully open-source, end-to-end solution using Sentinel-2 imagery and open-source high-resolution building footprint data. The method involves training a locally adapted probabilistic MIMO U-Net model, as U-Nets are a reliable and extensively researched architecture for image segmentation. Additionally, we apply weighted masks in the training process to enhance performance across diverse urban areas, especially as we aim to vectorize the segmentation outputs. 
To further optimize the training process and data acquisition, an urban density map was created and used to select a diverse training dataset around the region of interest. In this process we select representative training data covering both densely built and sparsely populated regions. This localized training improves the model's ability to adapt to regional variations in building styles, overcoming the shortcomings of generalized approaches. We also integrate an uncertainty estimation layer that captures both aleatoric (data-related) and epistemic (model-related) uncertainties. These uncertainties are computed as outputs of the MIMO U-Net model and can be used directly to set confidence thresholds for predictions, making the model outputs more transparent for high-stakes applications such as disaster risk management. In conclusion, our proposed method represents a scalable, adaptable and open-source solution for building footprint extraction. By utilizing open-source data and tools, our workflow is scalable, cost-effective, and accessible to a broad range of users, including researchers, urban planners, and disaster response teams. This aligns with the need for transparent, reproducible methods in geospatial analysis, particularly in developing regions where resources for data acquisition and high-resolution in-situ data are limited. The iterative update process further ensures that the building footprint data remain accurate and up to date, supporting dynamic urban monitoring and timely decision-making. Our presentation at the Living Planet Symposium will cover the full methodology, showcase case studies, and discuss the practical applications of our approach in urban and environmental risk assessments.
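To make the confidence-thresholding idea concrete, here is a minimal per-pixel sketch in the spirit of the MIMO U-Net outputs described above: the disagreement between subnetwork predictions stands in for epistemic uncertainty, and a prediction is only kept when both the mean probability is high and the disagreement is low. The thresholds and the variance-based proxy are illustrative assumptions, not the project's actual implementation:

```python
def ensemble_uncertainty(member_probs):
    """Mean building probability and its variance across MIMO subnetwork
    outputs for one pixel; the variance serves as an epistemic proxy."""
    n = len(member_probs)
    mean = sum(member_probs) / n
    var = sum((p - mean) ** 2 for p in member_probs) / n
    return mean, var

def confident_building(member_probs, p_min=0.5, var_max=0.02):
    """Keep the 'building' label only when the mean probability exceeds
    p_min and member disagreement stays below var_max (both illustrative)."""
    mean, var = ensemble_uncertainty(member_probs)
    return mean > p_min and var < var_max
```

Pixels that fail the variance check can be flagged for review rather than silently labelled, which is the transparency benefit the abstract highlights for disaster-risk applications.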
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Innovative multicriteria approach for flood risk assessment: A case study in Garyllis river basin, Cyprus.

Authors: Ms Josefina Kountouri, Dr CONSTANTINOS F. PANAGIOTOU, Mrs Alexia Tsouni, Mrs Stavroula Sigourou, Mrs Vasiliki Pagana, Dr Christodoulos Mettas, Dr Evagoras Evagorou, Dr Charalampos (Haris) Kontoes, Professor Diofantos Hadjimitsis
Affiliations: ERATOSTHENES Centre of Excellence, National Observatory of Athens (NOA), Institute for Astronomy, Astrophysics, Space Applications and Remote Sensing (IAASARS), Operational Unit “BEYOND Centre of Earth Observation Research and Satellite Remote Sensing”
Floods are the most frequent and most costly natural hazards at the European scale. Therefore, policymakers and water planners urgently need reliable information to design and implement effective flood management plans that cover the four major components of disaster risk reduction: preparedness, response, recovery and mitigation. The proper integration of these components into management plans is especially important in river networks that intersect urban units, since these networks are highly prone to flash floods. As part of the collaborative activities between the ERATOSTHENES Centre of Excellence (ECoE) and BEYOND/IAASARS/NOA, an innovative multicriteria approach is proposed to assess the spatiotemporal evolution of flood risk levels in the Garyllis River basin, located in the southern part of the island of Cyprus. Data have been collected from multiple sources, including satellite missions, governmental portals, in situ measurements, and historical records, at different resolutions. For example, a digital elevation model (DEM) with a 5 m resolution was provided by the Department of Land and Surveys of Cyprus, the land use/land cover map of the study area was extracted from the Copernicus Land Monitoring Service, and daily precipitation data were obtained from nearby ground-based rainfall stations. The collected data have been calibrated via on-site visits and discussions with relevant actors, harmonized in terms of spatial and temporal resolution, and used as inputs to estimate the evolution of surface run-off (HEC-HMS), together with hydraulic simulations (HEC-RAS 2D) to estimate the flow depth for different return periods. The vulnerability levels of the study area are quantified via the weighted linear combination of relevant factors, particularly population age, population density and building properties, according to the latest official governmental reports. In addition, the exposure levels are quantified in terms of land value. 
For each flood component, all factors are assigned equal weighting coefficients. Consequently, flood risk levels are evaluated at each location as the product of hazard, vulnerability and exposure levels. The validity of the proposed methodology is evaluated by comparing the critical points identified during the field visits with the estimated flood risk levels. On this basis, escape routes and refuge regions were recommended for the worst-case scenario. Overall, this study is expected to help water authorities further align with the EU Floods Directive 2007/60/EC, support social awareness regarding the actions that need to be taken, and recommend appropriate mitigation measures.
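The risk formulation described above (an equal-weight linear combination of vulnerability factors, with risk as the product of hazard, vulnerability and exposure) can be sketched in a few lines, assuming all inputs are normalized to [0, 1]:

```python
def vulnerability(factors):
    """Equal-weight linear combination of normalized vulnerability factors
    (e.g. population age, population density, building properties)."""
    return sum(factors) / len(factors)

def flood_risk(hazard, vuln_factors, exposure):
    """Per-location flood risk as the product of hazard, vulnerability and
    exposure levels, each assumed normalized to [0, 1]."""
    return hazard * vulnerability(vuln_factors) * exposure
```

Because the risk is a product, a location scores high only when all three components are elevated; a highly hazardous but uninhabited floodplain (low exposure) is correctly ranked below a moderately hazardous urban block.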
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: The FLOWS Project – Improving Flood Crisis Management Through Earth Observation Solutions

Authors: Benjamin Palmaerts, Andrés Camero Unzueta, Sébastien Dujardin, Rink W. Kruk, Dr Lisa Landuyt, Sam Leroux, Sandro Martinis, Pieter Simoens, Eric Hallot
Affiliations: Remote Sensing and Geodata Unit, Scientific Institute of Public Service (ISSeP), Earth Observation Center, German Aerospace Center (DLR), Department of Geography, University of Namur, Belgium National Geographic Institute (NGI-IGN), Remote Sensing, Flemish Institute for Technological Research (VITO), IDLab, Department of Information Technology, Ghent University
Europe is increasingly experiencing devastating floods, highlighting the severe impacts of climate change and exposing vulnerabilities in land use and population safety. Earth Observation (EO) technologies offer significant potential for flood crisis management, but the catastrophic floods in Belgium and Germany in July 2021 revealed gaps in the effective use of EO data, largely due to a lack of methods adapted to the needs of crisis managers and insufficient awareness of available tools. The FLOWS project addresses these challenges by determining how and when EO data and derived products can optimally support flood crisis management across three phases: crisis response, aftermath, and reconstruction. The project builds on the experiences of crisis managers during the 2021 floods and a comprehensive analysis of EO data acquired during the event. Using a problem-tree approach, the project identifies geospatial challenges faced by first responders and stakeholders, including crisis centers, emergency services, authorities, municipalities, and water managers. These stakeholders are actively involved in the process, providing input to guide development and validation through Agile prototyping. Key innovations include methodologies to enhance situational awareness, such as leveraging multi-sensor EO data through optimized algorithms for UAVs, commercial SAR, and Sentinel-1/2 data. Real-time computer vision pipelines enable adaptive UAV flight adjustments and efficient onboard processing of RGB imagery, ensuring rapid detection of flooded areas and victims, and providing first responders with essential data. Social media and mobile phone data are integrated into GIS-based solutions to map population dynamics at fine spatial and temporal scales. Additionally, deep learning methods are applied to assess flood-induced damage across diverse environments, including urban and rural areas. 
Using very high-resolution (VHR) and Sentinel data, the project develops automated tools to classify and map impacts on buildings, transportation networks, vegetation, and riverbanks, supporting both immediate response and long-term recovery planning. Finally, heterogeneous data sources are integrated into disaster hot spot maps using probabilistic fusion techniques. These maps provide an up-to-date situational overview of the most affected areas, enabling crisis managers to prioritize response efforts and allocate resources efficiently. By integrating EO methodologies and engaging key stakeholders, FLOWS aims to advance flood preparedness and crisis management, ultimately supporting flood-affected populations and fostering resilient communities.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Human-Caused Wildfire Ignition Risk Modelling - a Comparison of Different Regions in Europe, Using Remote Sensing and Geodata

Authors: Julian Schöne, Prof. Dr. Christian Geiß, Dr. Michael Nolde, Moritz Rösch, Dr. David
Affiliations: DLR e.V., University of Bonn, United Nations University, Institute for Environment and Human Security
Wildfires are of increasing global concern due to their devastating effects and their role in climate change through positive feedback loops from greenhouse gas emissions. While 95-97 % of wildfires in Europe are triggered by human-related factors, cross-regional fire prediction research has to date focused mostly on weather data and fuel characteristics. The presented work addresses this gap by proposing a predictive spatial modelling approach for human-caused fire ignition risk and by analysing the contributing factors. Because they use openly available geospatial and remote sensing data, the built random forest models are transferable to any study area in Europe when trained with local data. In this study, the models were applied to and tested in four distinct study areas: northern Portugal, north-western Spain, the Athens metropolitan region and its surroundings (Greece), and Brandenburg, Germany. For each study area, eight models were trained and evaluated, incorporating different combinations of up to seven explanatory variables intended to capture drivers of human-caused fire ignition. These variables are mainly related to human presence and activities in wildland areas, primarily measured by the distance to different kinds of infrastructure. The best-performing model in each region was consistently the one including all seven variables, among them distance to forested and to agricultural areas. The results revealed substantial regional variation in performance, with exceptional performance in Brandenburg (F1-score: 0.97), high accuracy in Greece (F1-score: 0.86) and moderate performance in Spain and Portugal (F1-scores: 0.65 & 0.59). The predominant variables contributing to human-caused ignition risk in the Mediterranean regions are distance to railways and the wildland-urban interface. In Brandenburg, distance to footpaths was predicted to be the primary factor. 
Interestingly, military training areas showed a strong spatial correlation with fire ignitions, although they were not included as a variable in the analysis. When used in conjunction with dynamic live fuel and weather maps, the results can provide policymakers and stakeholders with valuable tools for implementing targeted localized fire risk reduction measures and optimizing resource allocation for fire management. The transferability of the methodology and the identification of region-specific risk factors can help to develop locally tailored fire prevention strategies across Europe.
Keywords: human-caused wildfire ignition – machine learning – random forest – ignition risk prediction – disaster prevention
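The modelling approach described above — a random forest classifier trained on distance-to-infrastructure predictors and evaluated with the F1-score — can be sketched as follows. This is a minimal illustration on synthetic data; the feature names are hypothetical stand-ins and the study's actual variables and training data are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000

# Hypothetical distance-based explanatory variables (metres), used here
# only as proxies for human presence in wildland areas.
X = np.column_stack([
    rng.exponential(2000, n),   # distance to railways
    rng.exponential(500, n),    # distance to footpaths
    rng.exponential(1000, n),   # distance to forested areas
    rng.exponential(800, n),    # distance to agricultural areas
])

# Synthetic labels: ignition is made more likely close to infrastructure.
risk = np.exp(-X[:, 0] / 1500) + np.exp(-X[:, 1] / 400)
y = (risk + rng.normal(0, 0.2, n) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
f1 = f1_score(y_te, model.predict(X_te))
print(f"F1-score: {f1:.2f}")
```

With local training data substituted for the synthetic arrays, the same pattern transfers between study areas, which is the property the abstract emphasises.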

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Assessment of Different Synthetic Aperture Radar (SAR) Systems for Mapping Floating Pumice Rafts After Submarine Volcanic Eruptions

Authors: Dr Simon Plank, Dr Melanie Brandmeier, Marco Lutz
Affiliations: Technische Hochschule Würzburg-Schweinfurt, German Remote Sensing Data Center, German Aerospace Center (DLR)
Floating pumice rafts generated by submarine volcanic eruptions pose significant risks to maritime activities, fisheries, tourism, and coastal populations. Tracking these rafts is crucial to mitigate their potentially wide-ranging impacts. While previous approaches have primarily relied on optical satellite data, their effectiveness is often limited by cloudy conditions. In this study, we investigated the potential of cloud-penetrating Synthetic Aperture Radar (SAR), a method not previously evaluated for this purpose. We processed and analyzed data from three SAR systems: TerraSAR-X operating in the X-band, Sentinel-1 in the C-band, and ALOS-2 in the L-band. For Sentinel-1 data, both amplitude information and results from a polarimetric decomposition based on the complex SAR data were examined. In contrast, for TerraSAR-X and ALOS-2, only amplitude data were evaluated. The results demonstrate that the polarimetric properties of the pumice rafts do not provide any advantage over amplitude data for mapping purposes. Consequently, polarimetric decomposition does not offer a sufficient basis for developing an automated approach to pumice raft tracking. Within the amplitude data, the co-polarized (co-pol) channel proved more suitable for mapping than the cross-polarized (cross-pol) channel. This is due to the higher contrast between the pumice rafts and the surrounding water and the reduced influence of noise in the co-pol channel across all investigated SAR bands. However, a limitation of SAR emerged in scenarios where pumice rafts were partially or fully submerged after a longer period following the eruption. This indicates that SAR data are not well-suited for the long-term tracking of pumice rafts. Instead, SAR is particularly valuable for identifying and manually tracking floating pumice during eruption events and under cloudy conditions, where visibility for optical sensors is significantly reduced.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Hybrid Deep Learning for Oil Spill Mapping: Leveraging Sentinel-2 and Foundation Models

Authors: Christos GE Anagnostopoulos, Konstantinos Vlachos, Anastasia Moumtzidou, Dr. Ilias Gialampoukidis, Dr. Stefanos Vrochidis, Dr. Ioannis Kompatsiaris
Affiliations: Centre for Research & Technology Hellas
Oil spills pose significant environmental threats, necessitating efficient and accurate detection methodologies. Optical satellite imagery from missions like Sentinel-2 can potentially provide essential data for monitoring these occurrences, on top of traditional approaches such as SAR; yet challenges arise from variations in water characteristics, atmospheric interference, the complex spectral signatures of different oil types, and sunglint effects, among others. Traditional detection approaches frequently depend on models trained on limited datasets, which may lack generalizability and robustness across diverse environmental conditions. This study explores the enhancement of oil spill mapping with Sentinel-2 data through the development of a hybrid deep learning model and acts as a proof-of-concept for the use of Foundation Models in water applications. The Marine Debris and Oil Spill (MADOS) dataset is employed for training and evaluation in this study. MADOS is a meticulously curated benchmark consisting of 174 Sentinel-2 scenes acquired from 2015 to 2022, encompassing various environmental conditions and covering approximately 1.5 million pixels across 15 thematic classes, including oil spills. Notably, it includes a total of 2,803 patches, of which 361 patches correspond to oil spill cases (234,568 pixels), serving as critical training samples. This reflects a significant class imbalance, as the oil spill class constitutes a small fraction of the dataset. The proposed hybrid deep learning framework combines a state-of-the-art marine debris and oil spill detection model (i.e., MariNeXt) with a Sentinel-2 Foundation Model specialized for water applications (i.e., HydroFoundation). MariNeXt builds on the SegNeXt architecture, is pretrained on the MADOS dataset, and has been shown to be proficient in capturing contextual characteristics pertinent to oil spills. 
The HydroFoundation model implements a Swin v2 Transformer encoder pretrained on a vast amount of Sentinel-2 data, adept at extracting comprehensive representations of water bodies and relevant features. In contrast to prior methodologies, the hybrid model leverages the strengths of both architectures by integrating the Swin v2 Transformer encoder with the MariNeXt decoder. This integration facilitates advanced feature extraction and effective segmentation specifically designed for oil spill detection. The model is adapted to accept the 11 spectral bands of the MADOS Sentinel-2 data by adjusting the patch embedding layer of the encoder. A progressive fine-tuning method is utilized in which the decoder is frozen to preserve its specialized segmentation capabilities while the encoder adapts to the new input configuration. Subsequently, decoder layers are gradually unfrozen, allowing for joint optimization and harmonious integration between the encoder and decoder. Training and evaluating the model using the MADOS dataset, which provides a diverse set of oil spill instances and environmental conditions, enhances detection accuracy. A comparative analysis is conducted between the original MariNeXt model and the proposed hybrid model on unseen data not used during training. Various metrics are employed to evaluate detection performance, including Overall Accuracy, Precision, Recall, F1-Score, and Intersection over Union. The results provide evidence that the hybrid model performs comparably to the MariNeXt model while exhibiting better generalization capabilities, leaving room for further improvement. The issue of label imbalance inherent in the dataset, due to the relatively rare occurrence of oil spill instances compared to non-oil pixels, is addressed through data augmentation techniques. 
The results are promising and underscore the importance and future prospects of Foundation Models in water applications, particularly for complex problems like oil spill detection. Furthermore, the approach in this study acts as a proof-of-concept and points towards the more widespread adoption of Foundation Models to support other water-related applications, such as parameter retrieval for water quality indicators.
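The class-imbalance mitigation mentioned above — augmenting the rare oil spill patches — can be sketched with simple geometric transforms. This is a generic, hypothetical example using NumPy flips and rotations; the authors' actual augmentation pipeline is not described in the abstract and may differ.

```python
import numpy as np

def augment_minority(patches, labels, minority_class, factor=4, seed=0):
    """Oversample minority-class patches with random flips and rotations.

    patches: array of shape (N, H, W, C); labels: array of shape (N,).
    Returns the augmented (patches, labels). Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    idx = np.where(labels == minority_class)[0]
    new_patches, new_labels = [], []
    for _ in range(factor):
        for i in idx:
            p = patches[i]
            if rng.random() < 0.5:
                p = np.flip(p, axis=0)               # vertical flip
            if rng.random() < 0.5:
                p = np.flip(p, axis=1)               # horizontal flip
            p = np.rot90(p, k=rng.integers(4), axes=(0, 1))
            new_patches.append(p)
            new_labels.append(minority_class)
    return (np.concatenate([patches, np.stack(new_patches)]),
            np.concatenate([labels, np.array(new_labels)]))

# Toy example: 100 background patches, 5 "oil spill" patches (class 1),
# with 11 channels standing in for the MADOS Sentinel-2 bands.
X = np.random.rand(105, 32, 32, 11).astype(np.float32)
y = np.array([0] * 100 + [1] * 5)
X_aug, y_aug = augment_minority(X, y, minority_class=1)
print((y_aug == 1).sum())  # 5 original + 4 x 5 augmented = 25
```

Flips and 90° rotations are label-preserving for nadir-looking satellite patches, which is why they are a common default for this kind of oversampling.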

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: From Satellite Data to Resilient Farming Systems: Enhancing Drought Monitoring in Mozambique

Authors: Samuel Massart, Mariette Vreugdenhil, Rogerio Borguete Alves Rafael, Martin Schobben, Pavan Muguda Sanjeevamurthy, Carina Villegas-Lituma, Wagner Wolfgang
Affiliations: Technische Universität Wien
With the majority of the rural population of Mozambique relying on rain-fed agriculture, the country is vulnerable to drought events and shifts in rainfall seasonality. Water shortages have strong negative impacts on the productivity of smallholder farms and, subsequently, on food security and household income. In Africa, drought monitoring systems are commonly based on precipitation and temperature indicators. These datasets are low resolution and predominantly based on in-situ information. Hence, accurate monitoring of soil water content is crucial to mitigate the effects of droughts and delayed rainy seasons on rural communities and vegetation ecosystems. In this context, microwave remote sensing provides accurate estimation of soil moisture in the first few centimetres of the soil, thus constituting an essential tool to support drought monitoring for early warning systems. Surface soil moisture and drought indicators support decision-makers, from politicians to small-scale farmers, in making data-driven decisions on agricultural planning and drought mitigation strategies, ultimately increasing the resilience of farming systems. First, a change detection model is applied to Sentinel-1 backscatter to model surface soil moisture over Mozambique between 2015 and 2023 at 500 m sampling. The modelled soil moisture is compared with state-of-the-art products, including a land surface model (ERA5-Land), Earth observation datasets (SMAP, ASCAT) and a hybrid product (WaPOR). Moreover, the Sentinel-1 dataset is validated against in-situ stations located in five regions of Southern Mozambique. The results show that Sentinel-1 backscatter is highly sensitive to soil moisture and is a valuable tool for developing drought indices at kilometre-scale resolution. The resulting Sentinel-1 SSM product is then used as a basis for developing agricultural drought indicators and a start-of-season product. 
Two drought indicators are developed based on (1) combined Sentinel-1 and ASCAT climatology and (2) soil physics using auxiliary datasets from SoilGrids. The estimation of the "start of rainy season" product is derived from a break-point detection approach applied to the stand-alone Sentinel-1 surface soil moisture product. The drought indicators are compared with precipitation (Z-score based on CHIRPS, the Climate Hazards Group InfraRed Precipitation with Station data) and vegetation anomaly (Z-score from NDVI available on the Copernicus Land Monitoring Service). The comparison underlines the complementarity of climate, vegetation, and soil-based indicators for effectively monitoring agricultural drought development. These findings and methodologies will be detailed in a forthcoming publication (Massart et al., in preparation). Finally, we highlight the limitations and challenges associated with bridging the gap between the development of Earth observation products and the needs of small-scale farming systems in Mozambique. We present a case study focusing on the development of a custom data viewer designed to monitor, share, and promote the dissemination of satellite products. Observed limitations, including accessibility and technical capacity, are described, and alternative approaches are proposed to improve the adoption of new technology within traditional farming systems.
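The Z-score anomalies used above for both precipitation and NDVI follow a common pattern: standardize each observation against the climatology of its position in the seasonal cycle. A minimal sketch, assuming a monthly climatology on synthetic data (operational products typically use longer baselines and dekadal climatologies):

```python
import numpy as np

def zscore_anomaly(values, months):
    """Standardized anomaly (Z-score) of each value relative to the
    climatology of its calendar month. values and months are 1-D arrays
    of equal length; months are integers in 1..12."""
    values = np.asarray(values, dtype=float)
    z = np.empty_like(values)
    for m in range(1, 13):
        sel = months == m
        mu, sigma = values[sel].mean(), values[sel].std()
        z[sel] = (values[sel] - mu) / sigma if sigma > 0 else 0.0
    return z

# Toy example: 10 years of monthly rainfall with an imposed dry final year.
rng = np.random.default_rng(1)
months = np.tile(np.arange(1, 13), 10)
rain = rng.gamma(4.0, 20.0, months.size)
rain[-12:] *= 0.3                      # drought in the last year
z = zscore_anomaly(rain, months)
print(z[-12:].mean() < 0)              # drought year shows negative anomalies
```

The same function applies unchanged to an NDVI time series, which is what makes climate- and vegetation-based Z-scores directly comparable as drought indicators.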

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Assessment of the contribution of EO data to support national firefighters in activities of urgent technical rescue during disaster response, fire prevention and surveillance

Authors: Dr. Valentina Nocente, Dr. Stefano Frittelli, Dr. Rossella Simione, Dr. Deodato Tapete, Maria Virelli
Affiliations: Agenzia Spaziale Italiana (ASI), Corpo Nazionale dei Vigili del Fuoco (CNVVF)
At national level, firefighters are the public body in charge of first response, trained in firefighting primarily to control and extinguish fires that threaten life and property, as well as to rescue persons from confinement or dangerous situations. In Italy, the activities of “urgent technical rescue” and, more generally, of public rescue, together with those of fire prevention and fire surveillance, are guaranteed by the Ministry of the Interior – Department of Fire Brigade, Public Rescue and Civil Defense, through the operational structures of the National Fire Corps (Corpo Nazionale dei Vigili del Fuoco – CNVVF) located throughout the national territory, active 7 days a week, 24 hours a day. This organizational structure is an Italian specificity and represents a true unicum in the international panorama of fire brigades. In other countries, in fact, fire brigades are mainly organized on a local basis (at municipal or, sometimes, regional level). The Italian Fire Brigade, on the other hand, constitutes a National Corps (CNVVF), which finds its institutional place within Italy’s State Administrations and, for this reason, has a unitary organization but, at the same time, a widespread presence throughout the territory, through the Regional Directorates, which coordinate the peripheral operational network of the Provincial Commands and the related detachments. The CNVVF is called, first and foremost, to ensure the fundamental mission of “urgent technical rescue”. These interventions are characterized by the urgency and immediacy of the rescue service and, as such, require highly specialized technical professionalism and suitable instrumental resources. In the case of civil protection events, the CNVVF operates as a fundamental component of the National Civil Protection Service ensuring, within the scope of its technical skills, the direction of first aid interventions. 
Fire prevention is the other function of eminent public interest entrusted to the CNVVF and includes study, experimentation, standardization and control activities aimed at reducing the probability of a fire or limiting its consequences. Among the various data sources and assets that the CNVVF exploits to address activities of urgent technical rescue and fire prevention, Earth Observation (EO) is increasingly being used. Over the past years, specialist expertise has been developed in processing satellite imagery and generating products, thematic maps and elaborations that can be used to inform in situ activities during emergencies. Satellite data and derived products are nowadays among the information layers of the cloud-based cartographic portal (GeoportaleVVF) that the CNVVF cartographic office (the TAS Central Service) has developed to share geographic data relating to rescue, analyse intervention scenarios, define the operational strategy and quantify and direct resources on a geographic basis. In this wider context, since 2018 the Italian Space Agency (ASI) and the CNVVF have signed and cooperated under a bilateral agreement (n. 2018-10-Q.0) to promote, at national level, the use of existing national and international EO satellite assets and related data in support of CNVVF activities – with specific regard to urgent technical rescue – and to identify possible new applications for fire hazard prevention. Under this agreement, in the event of medium and large-scale emergencies, ASI makes radar-type satellite products available to the CNVVF. In particular, Synthetic Aperture Radar (SAR) data are provided by exploiting the X-band COSMO-SkyMed constellation. 
The mission currently consists of three operational satellites from the First Generation and two from the Second Generation, and allows for image collection at high to very high spatial resolution (up to 3 m and less than 1 m in the case of StripMap and Spotlight modes, respectively), both according to a regular observation scenario and through on-demand data take opportunities. COSMO-SkyMed data are provided by ASI free of charge, given that usage for CNVVF purposes falls within institutional support and cooperation, and are delivered within very short timeframes (as short as a few hours) during emergencies. COSMO-SkyMed data are helpful for generating change detection products that can serve for delineation and rapid mapping, e.g. for identifying collapsed buildings due to earthquakes or other instability processes, the extent of flooded areas, and the zoning of areas affected by wildfires. Furthermore, during the cooperation with ASI, the CNVVF has gained some experience with processing hyperspectral data from ASI’s PRecursore IperSpettrale della Missione Applicativa (PRISMA) mission. The satellite was launched in March 2019, is based on a single small-class spacecraft flying on a frozen Sun-Synchronous Low Earth Orbit at 615 km altitude, and is equipped with electro-optical devices collecting imagery in 239 spectral bands (total VNIR-SWIR range: 400–2500 nm) at 30 m Ground Sampling Distance (GSD) over a standard image size of 30 km × 30 km, coupled with a 5-m resolution panchromatic image. These data are useful, for example, to generate thematic products allowing the classification of areas susceptible to fires and the assessment of affected areas. 
With both SAR and hyperspectral EO data, the scope is to provide an evidence base over either local or wide areas that, during emergencies, could inform decision-making in a very timely way, suiting the extreme rapidity required by rescue operations, and, during ordinary times, could improve methods for fire hazard assessment and preparedness. To this end, it is worth highlighting that the CNVVF combines satellite-based products with other sources of information, for example in situ inspections and drone surveys. Furthermore, while data analysis and interpretation are performed by expert operators, increasing effort is being put into experimenting with more automated routines and algorithmic solutions. The present paper aims to showcase experiences and lessons learnt on the role that EO plays in supporting the CNVVF in its activities, both during crises / emergencies – i.e. urgent technical rescue, disaster response and fire confinement – and in non-crisis time – i.e. fire prevention and surveillance. With regard to crises / emergencies, different hazard types and temporal and spatial scales are considered. Among the many events during which ASI and the CNVVF cooperated, we discuss the support that COSMO-SkyMed products enabled during the following events: • Wide-area catastrophic events such as the floods that hit the Emilia Romagna and Tuscany regions in 2023. Especially in the first case, floods spread across huge territories and lasted long, leaving urban environments and countryside flooded for several weeks; • Site-specific events where rapidity in the disaster response, especially when searching for people to rescue, is paramount. This is the case of earthquakes and hydro-meteorological hazards such as the mudflow and debris flow that occurred at Casamicciola, on Ischia Island, in November 2022, and the seismic event (Mw 6.5) that hit Norcia on 30 October 2016. 
In particular, the latter event is also discussed to showcase the benefits of undertaking pre-operational tests of possible new satellite-based applications. The Norcia earthquake was indeed selected as a real scenario to demonstrate how COSMO-SkyMed images could serve the purpose of detecting and mapping damaged buildings, to complement and facilitate in situ surveys. Given the strong link between timely rescue operations and the percentage of surviving victims of natural disasters, the CNVVF needs access to accurate information in order to assess site accessibility and prioritize sectors that require inspections and people search. Taking advantage of the availability of pre-event 1-m spatial resolution COSMO-SkyMed Spotlight images covering the site of Norcia and regular observations at 3 m spatial resolution from the Map Italy project, an experiment was undertaken by comparing the results achievable from satellite data with the map that was produced by the CNVVF over Norcia and Castelluccio during the emergency. The main outcome is the clear demonstration that SAR-based single-building damage mapping after earthquakes is feasible and leads to accurate results if SAR data with properties such as those offered by COSMO-SkyMed products are accessible (whereas global medium spatial resolution missions such as Sentinel-1 would have failed to provide the needed temporal revisit and spatial detail). Finally, with regard to fire prevention, examples will be presented from: • experimental use of PRISMA imagery on Italian case studies; • operational exploitation of optical multispectral data during recent activations in which the CNVVF participated to contribute to the wildfire crises of 2024 in Portugal and Greece, in the context of the European Civil Protection Mechanism.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A comparative assessment of a meteorological drought indicator and soil moisture over Austria

Authors: Stefan Schlaffer, Matthias Schlögl, Klaus Haslinger, Stefan Schneider, Raphael Quast
Affiliations: GeoSphere Austria
Drought represents one of the most impactful types of hydrometeorological extreme event, with adverse consequences for human and natural systems. The frequency, intensity, and duration of droughts are expected to increase due to rising temperatures and altered precipitation patterns, thus posing significant risks to water resources, agriculture, ecosystems, and human livelihoods. The intensity, duration and impact of droughts can be monitored and characterised by deriving statistical indicators from meteorological and biophysical variables, such as precipitation, evapotranspiration, soil moisture and indicators of vegetation health. These variables, however, correspond to different characteristics and intensities of drought, such as meteorological, hydrological and agricultural drought. While soil moisture, a key indicator of water availability, provides a more direct measure of drought severity, it is typically more challenging to measure than meteorological variables. As a result, modelled time series are often used instead. Knowledge about the relationship between relatively simple indicators of meteorological drought, which can be derived from meteorological measurements, and soil water availability could help to better inform decisions, especially in complex terrain like the Austrian Alps. To this end, we compared the Standardised Precipitation-Evapotranspiration Index (SPEI) with two gridded soil moisture datasets, namely (1) the ERA5-Land reanalysis volumetric soil water content and (2) the EUMETSAT H-SAF Metop ASCAT surface soil moisture product. Furthermore, in-situ soil moisture measurements from the International Soil Moisture Network (ISMN) were used. The WINFORE SPEI product is based on interpolated daily fields of precipitation and air temperature for the territory of Austria. Reference evapotranspiration is computed using the Hargreaves formula, which is based on daily air temperature time series. 
Three different integration times (30, 90 and 365 days) were used for computing the SPEI. All three datasets were aggregated or resampled to the 0.1° grid of the ERA5-Land reanalysis dataset. In general, correlation coefficients showed a clear spatial pattern, with higher values in Eastern and Southern Austria, whereas correlation was low in the mountainous regions of Central and Western Austria. As expected, correlation coefficients were higher for soil moisture anomalies than for absolute values. Between SPEI and ERA5-Land soil moisture, Pearson correlation coefficients of 0.7 were attained in the northeastern parts of Austria. Correlation strongly decreased when comparing short-period SPEI (30 and 90 days) with the soil water content of deeper ERA5-Land soil layers, whereas SPEI computed over a longer time window (365 days) showed the highest correlation with water content in the deepest soil layer (100 to 289 cm). Between SPEI and ASCAT surface soil moisture, correlation was consistently lower, especially over mountainous and densely forested regions. Masking of observations made under frozen conditions significantly improved the achieved correlations. The results demonstrate the potential of drought indicators, such as the SPEI, to serve as a proxy for soil moisture anomalies at the intermediate spatial scale of ca. 10 km.
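The comparison described above can be sketched numerically: accumulate the climatic water balance (P − PET) over each integration window, standardize it, and correlate the result with a soil moisture series. This is a deliberate simplification on synthetic data — the operational SPEI fits a probability distribution (e.g. log-logistic) to the accumulated balance before transforming to z-values, and the study uses gridded rather than single-pixel series.

```python
import numpy as np

def simple_spei(p_minus_pet, window):
    """Standardize the rolling sum of the daily climatic water balance.
    Simplified: the real SPEI fits a log-logistic distribution first."""
    s = np.convolve(p_minus_pet, np.ones(window), mode="valid")
    return (s - s.mean()) / s.std()

rng = np.random.default_rng(7)
n_days = 3 * 365
balance = rng.normal(0.0, 2.0, n_days)   # synthetic daily P - PET (mm)

# Synthetic soil moisture that integrates the water balance with memory,
# mimicking the lag between meteorological forcing and soil water content.
sm = np.empty(n_days)
sm[0] = 0.0
for t in range(1, n_days):
    sm[t] = 0.98 * sm[t - 1] + 0.02 * balance[t]

for window in (30, 90, 365):
    spei = simple_spei(balance, window)
    r = np.corrcoef(spei, sm[window - 1:])[0, 1]
    print(f"SPEI-{window} vs soil moisture: r = {r:.2f}")
```

Varying the memory parameter (here 0.98, i.e. roughly a 50-day e-folding time) shifts which integration window correlates best, which mirrors the paper's finding that deeper, slower-responding soil layers match longer SPEI windows.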

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Mapping Wildfire Exposure for a Transboundary Region of Central Europe

Authors: Evripidis Avouris, Christopher Marrs, Kristina Beetz, Dr. Marketa Poděbradská, Dr. Emil Cienciala, Lucie Kudláčková, Prof. Dr. Miroslav Trnka, Matthias Forkel
Affiliations: TUD Dresden University of Technology, Junior Professorship in Environmental Remote Sensing, TUD Dresden University of Technology, Junior Professorship in Environmental Remote Sensing (currently employed at ICEYE, Finland), CzechGlobe – Global Change Research Institute of the Czech Academy of Sciences
Central Europe, an area that has historically been untouched by catastrophic wildfires, has recently experienced an increase in the number of major wildfire events. Equally alarming, some of these disasters are occurring in transboundary or wildland-urban interface (WUI) areas, where different administrative systems mix with natural vegetation, posing unique challenges to firefighting approaches. One such catastrophic event occurred in 2022, when a wildfire burned an unprecedented 1173 ha in the Saxon Switzerland and Bohemian Switzerland national parks on the border between the Czech Republic and Germany. This served as a warning to the scientific community and local stakeholders, demonstrating the need to adapt to this new reality. Such events underline the need to inform the public in Central Europe of the potential risk that they and their property could face from forest fires. In addition, stakeholders responsible for dealing with wildfires in Central Europe, such as firefighters, national park administrations and relevant government agencies, should be aware of the potential danger that areas under their jurisdiction are increasingly exposed to. Here we develop a methodology for creating a wildfire exposure map in a transboundary area of Central Europe to quantify and demonstrate the exposure of settlements. We created nine different wildfire scenarios based on three fire durations (1-3 days) and three levels of fire weather conditions. Fuel types for the study area were derived from the methodology proposed by Beetz et al. (2024), which involved the employment of the European fuel type classification of Aragoneses et al. (2022) as the starting point. The study area has suffered from a severe bark-beetle infestation in recent years, which has resulted in a large amount of flammable deadwood and natural regrowth, making such areas more prone to wildfires. 
Landsat 8 imagery was used to map those bark beetle-infested parts of the study area for which no data could be found. This was achieved through a supervised classification algorithm, whereby a map of bark beetle infestations from ground and airborne surveys by the forest administration was used as training data. Fuel models were finally derived after a crosswalk of the fuel types to the Scott and Burgan Fire Behaviour Fuel Models (2005), which were further enhanced by in-situ fieldwork. We used the FlamMap model to calculate flame length and burn probability for each scenario. These two metrics were then combined into a bivariate raster, one for each wildfire scenario. The final map used settlements in the area as the exposed assets in focus and was further enhanced with support capability indicators (transportation network and fire station locations). The final map was visualised as an interactive web application. The map allows the user to alternate between the scenarios and permits the evaluation of exposure down to building level for the settlements in focus. We then performed three evaluation analyses. Firstly, we tested the overall ability of the FlamMap model to accurately model the first three days of the 2022 wildfire by considering the specific fire weather conditions during that event. The fire perimeters of these three modelling runs were compared to active fire observations from the VIIRS sensors. There was good overlap between the two, especially for the second and third days of the fire. Secondly, in order to test the ability of the nine fire modelling scenarios to represent typical wildfire behaviour, we compared the predicted fire perimeters derived from single-ignition fires to the fire perimeter of the 2022 wildfire. It was found that in all nine cases the predicted burned area lies almost entirely within the reference fire perimeter, albeit with low coverage. 
Thirdly, 50 users were asked to complete a usability study using the interactive web map. According to the answers, the design of the interactive map is intuitive, and the resulting product, though it presents complex information, does so in an understandable way. Furthermore, the views expressed by relevant stakeholders in the questionnaire on the map's usefulness revealed the need to include local stakeholders and experts early on, and throughout such a research process. These results demonstrate that the modelling scenarios can indeed be used to predict wildfire behaviour in the area, albeit with limited confidence, as more validation data, such as historical fire perimeters, are needed. Scarce validation data create a degree of uncertainty when it comes to accurately evaluating the modelling outputs. This lack of data, though, is not an issue of the fire modelling software per se but rather highlights the importance of state authorities maintaining information on the characteristics of historical wildfires. Moreover, a general recommendation for future wildfire research in this or any other study area is to establish good communication with the local expert and stakeholder community. Expert knowledge is invaluable in developing accurate fuel model maps, while stakeholders should be consulted in various parts of the wildfire research process, not only towards the end. This research extensively exploits remote sensing data, such as Landsat 8 images, to identify particularly flammable areas. It should be noted that higher-resolution Sentinel-2 imagery from the period immediately before the 2022 wildfire would have been preferable; however, this was not possible because of cloud cover. Moreover, the use of VIIRS played an essential role in the evaluation process. Finally, making an exposure map for a transboundary region has highlighted the importance of data interoperability between different countries. 
References
Aragoneses, E., García, M., Salis, M., Ribeiro, L. M., & Chuvieco, E. (2022). Classification and mapping of European fuels using a hierarchical-multipurpose fuel classification system [Preprint]. ESSD – Land/Land Cover and Land Use. https://doi.org/10.5194/essd-2022-184
Beetz, K., Marrs, C., Busse, A., Poděbradská, M., Kinalczyk, D., Kranz, J., & Forkel, M. (2024). Effects of bark beetle disturbance and fuel types on fire radiative power and burn severity in the Bohemian-Saxon Switzerland. Forestry: An International Journal of Forest Research, cpae024. https://doi.org/10.1093/forestry/cpae024
Scott, J. H., & Burgan, R. E. (2005). Standard fire behavior fuel models: A comprehensive set for use with Rothermel’s surface fire spread model (Gen. Tech. Rep. RMRS-GTR-153). U.S. Department of Agriculture, Forest Service, Rocky Mountain Research Station. https://doi.org/10.2737/RMRS-GTR-153
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: SGAM - Smart Geotechnical Asset Management

Authors: Emanuela Valerio, Alessandro Brunetti, Maria Elena Di Renzo, Michele Gaeta, Prof. Paolo Mazzanti
Affiliations: NHAZCA S.r.l., Sapienza University of Rome
Smart Geotechnical Asset Management (SGAM) is an innovative framework integrating external systems via a cloud-based Software as a Service (SaaS) platform or API. It leverages advanced data-fusion algorithms and satellite Earth Observation (EO) technologies, such as A-DInSAR and PhotoMonitoring™, to enable a semi-automatic decision-making process for asset management and predictive maintenance. This approach significantly enhances the financial resilience and operational efficiency of structures and infrastructures by optimizing maintenance investments through sophisticated, data-driven insights. SGAM focuses on identifying, analyzing, and mitigating risks to assets by examining their interactions with local geological and environmental settings. It systematically evaluates both direct and potential interferences with geohazards, including landslides, floods, subsidence, and earthquake-induced effects, which could compromise asset integrity. By integrating vast quantities of archived and newly acquired EO data, SGAM provides Decision Makers with detailed and actionable insights, enabling them to define, prioritize, and schedule maintenance operations more effectively based on comprehensive asset vulnerability and loss scenario analyses. The EO data is further enriched and validated through field surveys as well as Geotechnical/Geomorphological Monitoring technologies sourced from extensive regional and global geodatabases. A core feature of SGAM is its adaptability and forward-looking design, which allows seamless integration of satellite data from different space missions, ensuring its long-term relevance, scalability, and technological advancement. AI-driven Process Automation solutions enhance its capabilities by performing first-level risk assessments, facilitating cost-effective, optimized prioritization of maintenance activities, and enabling decision-making underpinned by redundancy and precision. 
By seamlessly combining advanced satellite EO technologies, AI algorithms, and ground-based monitoring data, SGAM empowers organizations to proactively address structural and geotechnical risks. It not only reduces the likelihood of asset failure but also ensures sustainable, informed, and timely decision-making. Through prioritization of maintenance operations founded on comprehensive risk evaluations, SGAM is instrumental in enhancing infrastructure resilience, safety, and long-term sustainability amidst both current and future geohazards.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Applying Copernicus Satellite Data for Geo-Hazard Monitoring and Warning Services in Norway

Authors: Solveig Havstad Winsvold, Stefan Blumentrath, Aron Widforss, Kjetil Melvold, Karsten Müller, Liss Marie Andreassen, Sjur Kolberg, Rune Engeset, Nils Kristian Orthe
Affiliations: Norwegian Water Resources and Energy Directorate
The European Union's Earth Observation Program, Copernicus, provides free and openly accessible satellite data and services. These have become essential for hydro-meteorological and geo-hazard monitoring conducted by the Norwegian Water Resources and Energy Directorate (NVE). Copernicus satellite data provides efficient and comprehensive observation of snow avalanches, snow cover, lake ice, glaciers, and more, across large regions. Its applications are becoming increasingly important for risk assessment, natural hazard management, emergency preparedness, and warning services. Products from the Copernicus project at NVE support decision-making for Varsom, NVE’s warning services for snow avalanches, landslides, lake ice, and floods (www.varsom.no). At NVE, satellite products are combined with other data sources, such as crowd-sourced in situ observations through the Varsom app and additional remote sensing data, forming a multi-modal approach. In addition to supporting decision-making, Copernicus satellite data and products enhance process understanding, improve NVE's basemaps, and facilitate analyses of climate change impacts. NVE's Copernicus Services project is managed in-house and co-funded by the Norwegian Space Agency. The project serves as a pioneer for IT infrastructure development at NVE, establishing production lines that streamline the process from satellite image acquisition and algorithm application to the distribution of resulting products directly into users' familiar working environments and applications. This presentation demonstrates how NVE utilizes automated satellite products to support warning services for geo-hazards such as snow avalanches. Furthermore, it will highlight how NVE validates flood forecasting models using snow cover products. Snow avalanches and floods pose risks to Norway’s environment and infrastructure, and through its warning services, NVE helps prevent accidents and mitigate potential impacts.
Additionally, NVE observes glacier lake outburst flood (GLOF) activity. The presentation also provides an outlook on new products planned for the coming years, such as landslide detection, which will enhance warning and preparedness for extreme weather events that are becoming increasingly frequent in Norway.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Detecting Changes in War-Damaged Urban Areas Using the IR-MAD Method and Sentinel-2 Satellite Data

Authors: Jáchym Černík
Affiliations: Charles University
This study presents a method for detecting urban changes resulting from the October 2023 military conflict in Gaza City and its surrounding areas. Python scripts on the Google Earth Engine (GEE) platform were employed to analyze the spectral signatures of Sentinel-2 multispectral data over time. The Iteratively Reweighted Multivariate Alteration Detection (IR-MAD) technique was utilized to identify differences between pre- and post-conflict images. IR-MAD is a multivariate statistical method that enhances change detection by iteratively reweighting spectral bands through Canonical Correlation Analysis, aligning two images to maximize similarity before subtraction. This approach increases sensitivity to subtle changes while minimizing the detection of insignificant alterations by improving the correlation of unchanged pixels. Consequently, the method effectively identified changes such as debris, destroyed buildings, vegetation loss, and craters with high precision by comparing Sentinel-2 images. Change detection results were validated using high-resolution PlanetScope data, achieving an accuracy of 74%. Custom Python scripts further enhanced the IR-MAD analysis by incorporating functions for masking with the Dynamic World dataset and automating image export and thresholding. This streamlined processing enabled efficient handling of large datasets, making the approach scalable to similar conflict-affected regions such as Ukraine. The IR-MAD analysis revealed a 52% change between September 27 and November 26, 2023. Additionally, applying a chi-square distribution-based threshold together with an iterative threshold optimizer improved the consistency and accuracy of binary change maps, which could be valuable for damage assessment and resource allocation. While a web-based mapping application was developed to visualize the conflict's impact, the primary focus remains on the analytical framework.
In conclusion, the study successfully applied and validated the IR-MAD algorithm with Sentinel-2 data to detect changes in war-affected urban areas and developed specialized Python scripts to enhance the analysis. This methodology provides a reliable and straightforward framework for monitoring urban changes in conflict zones. In an era of declining public trust in media, this study offers a methodological foundation for an independent approach to scientific reporting in war-torn areas using publicly sourced data.
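The chi-square thresholding step described above can be sketched in Python. This is a minimal illustration of the principle only (variance-normalised MAD variates summed per pixel and compared to a chi-square quantile), not the study's GEE implementation, and the 99% quantile is an assumed example value:

```python
import numpy as np
from scipy.stats import chi2

def chi2_change_mask(mad, p=0.99):
    """Binary change mask from MAD variates.

    mad: array of shape (bands, n_pixels) holding the MAD components.
    For no-change pixels, the sum of squared variance-normalised MAD
    variates follows approximately a chi-square distribution with
    `bands` degrees of freedom; pixels above the p-quantile are change."""
    z = mad / mad.std(axis=1, keepdims=True)  # normalise each variate
    stat = (z ** 2).sum(axis=0)               # per-pixel chi-square statistic
    thresh = chi2.ppf(p, df=mad.shape[0])
    return stat > thresh

# Synthetic "no-change" MAD variates: roughly 1% of pixels are flagged
rng = np.random.default_rng(0)
mad = rng.normal(size=(4, 20000))
mask = chi2_change_mask(mad, p=0.99)
```

On real imagery, the flagged fraction rises well above p's complement wherever genuine change (debris, destroyed buildings) inflates the statistic.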
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Supporting Flood Disaster Response Using Multi-Sensor Earth Observation Data

Authors: Sandro Martinis, Marc Wieland, Sandro Groth, Hannes Taubenböck
Affiliations: DLR
Remote sensing data has become an essential part of today's crisis management activities. In recent years, the German Aerospace Center (DLR) has developed various components to support flood disaster response using multi-sensor Earth Observation (EO) data. On a global level, a multi-sensor system for automatic and large-scale surface water extraction was implemented. The system consists of several cloud-based modular processing chains based on convolutional neural networks (CNN) to extract the surface water extent from systematically acquired high-resolution radar (Sentinel-1) and multi-spectral (Sentinel-2 and Landsat) satellite data. A globally applicable high-resolution seasonal reference water product at 10-20 m spatial resolution, based on fused Sentinel-1/2 time-series data over a reference period of two years, is computed and used to distinguish permanent water from temporarily flooded areas. The system can also provide information about the duration of flood coverage at the pixel level by combining single temporal flood masks over time. Further, a mechanism has been installed to identify whether the water extent outlined in a satellite scene is abnormally large or small in comparison to a reference period. The anomaly detection criterion is based on the interquartile range (IQR). In case of observed anomalies, end users are alerted by email notifications. To enhance situational awareness, early-stage estimations of impacted regions derived from heterogeneous geospatial indicators can help prioritize crisis management activities and support data collection initiatives of very high-resolution (VHR) EO imagery (satellite, aerial, UAV).
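The interquartile-range criterion for flagging anomalous water extent can be illustrated with a minimal sketch; the reference values and the conventional k = 1.5 multiplier are assumptions for illustration only:

```python
import numpy as np

def water_extent_anomaly(current_km2, reference_km2, k=1.5):
    """Flag abnormally large or small water extent with the IQR rule.
    Returns 'high', 'low', or 'normal' relative to the reference period."""
    q1, q3 = np.percentile(reference_km2, [25, 75])
    iqr = q3 - q1
    if current_km2 > q3 + k * iqr:
        return "high"   # candidate flood situation -> alert end users
    if current_km2 < q1 - k * iqr:
        return "low"    # abnormally small extent (e.g. drought)
    return "normal"

# Assumed seasonal reference water extents (km²) for one region
reference = [110, 120, 115, 130, 125, 118, 122]
```

A call such as `water_extent_anomaly(150, reference)` would trigger the "high" branch and, in the operational system, an email alert.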
In this context, a log-linear pooling method coupled with an unsupervised hyperparameter optimization routine is developed to fuse information on flood hazard extracted from high-resolution satellite imagery with disaster-related data from geo-social media and freely available supplementary geospatial data on exposed assets (e.g. building distribution, population density, hazard zones). The identification of disaster hot spots is carried out on the basis of the H3 global grid system. Very high-resolution EO data, tasked on demand in the frame of a crisis-mechanism activation and supported by rapidly generated disaster hot-spot maps, are analyzed using deep learning-based approaches within the multi-sensor EO system. The spatial resolution of these sensors enables the identification of relevant local crisis information, e.g. small-scale flood extent in heterogeneous landscapes as well as damaged buildings and infrastructure, to provide a detailed and reliable picture of flood-affected areas in order to optimize disaster management activities.
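Log-linear pooling of the heterogeneous indicators can be sketched as a weighted geometric mean of per-source probabilities per grid cell. The probabilities and weights below are invented for illustration; in the system the weights come from the unsupervised hyperparameter optimization routine:

```python
import numpy as np

def log_linear_pool(probs, weights):
    """Log-linear (geometric) pooling of per-source impact probabilities.

    probs:   array (n_sources, n_cells), each row one indicator's probability.
    weights: length n_sources; pooled p is proportional to prod_i p_i ** w_i.
    The result is normalised over cells so it can be used for ranking hot spots."""
    logp = np.log(np.clip(probs, 1e-12, 1.0))       # avoid log(0)
    pooled = np.exp(np.average(logp, axis=0, weights=weights))
    return pooled / pooled.sum()

# Three grid cells, two assumed sources: satellite flood mask, geo-social media
probs = np.array([[0.9, 0.1, 0.5],
                  [0.8, 0.2, 0.4]])
pooled = log_linear_pool(probs, weights=[2.0, 1.0])
```

The highest-scoring cells would then be mapped onto the H3 grid as disaster hot spots.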
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Holistic approach to flood risk assessment: innovative multi-parameter methodology validated in urban river basin affected by fatal flash flood

Authors: Alexia Tsouni, Stavroula Sigourou, Vasiliki Pagana, Panayiotis Dimitriadis, Theano Iliopoulou, G.-Fivos Sargentis, Romanos Ioannidis, Efthymios Chardavellas, Dimitra Dimitrakopoulou, Marcos Julien Alexopoulos, Nikos Mamasis, Demetris Koutsoyiannis, Charalampos (Haris) Kontoes
Affiliations: National Observatory of Athens (NOA), Institute for Astronomy, Astrophysics, Space Applications and Remote Sensing (IAASARS), Operational Unit “BEYOND Centre of Earth Observation Research and Satellite Remote Sensing”, National Technical University of Athens (NTUA), School of Civil Engineering, Department of Water Resources and Environmental Engineering, Research Group “ITIA”
Decision makers and civil protection authorities need reliable flood risk assessment for efficient flood risk management, covering all the phases of the disaster risk reduction framework: preparedness, response, recovery and mitigation. This is even more crucial in highly dense urban river basins which are prone to flash floods. In the framework of a Programming Agreement with the Prefecture of Attica, Greece, BEYOND/IAASARS/NOA in cooperation with ITIA/NTUA developed a holistic multi-parameter methodology which was implemented in five flood-stricken river basins at high spatial resolution (2m-50m). The research teams first collected all available data, such as spatial data and data from technical studies from the relevant authorities. They conducted detailed field visits, and modified the terrain accordingly. Spatial parameters obtained following processing of Earth Observation data, such as DEM and land cover, were used as input for the HEC-HMS rainfall-runoff model, as well as for the hydraulic model. Flood hazard was assessed by hydraulic modelling using the open-source software HEC-RAS 2D for different scenarios. Vulnerability was considered as a weighted estimation of population density, population age, and building characteristics, taking into consideration the relevant findings of the latest available national Population-Housing Census. Exposure was based on the land value. Flood risk was eventually assessed based on the combination of flood hazard, vulnerability, and exposure. Moreover, critical points, which were identified from the field visits, were also cross-checked with the flood inundation maps. Finally, refuge areas and escape routes were proposed for the worst-case flood scenario. This innovative methodology was applied, amongst others, in the Mandra river basin, and was validated with the results of the fatal flash flood which took place in November 2017.
This flash flood event affected the urban and suburban area of Mandra causing 24 recorded fatalities and extensive million-euro damages to properties and infrastructure, rendering it the deadliest flood in Greece in the last 40 years. BEYOND developed a user-friendly web GIS platform, where all the collected and produced data, including the flood risk maps, the critical points, the refuge areas and the escape routes are made available. This work supports the relevant authorities in improving disaster resilience in many aspects: raising awareness, designing civil protection exercises, implementing flood risk mitigation measures, prioritising short-term and long-term flood protection interventions, and making rapid response more effective during the flood event. This approach is in line with the requirements for the implementation of the EU Floods Directive 2007/60/EC, the Sendai Framework for Disaster Risk Reduction, the UN SDGs, as well as the UN Early Warnings for All initiative. Last but not least, this flood risk assessment methodology was applied, following the necessary adaptations, in the Garyllis river basin in Cyprus, in the framework of the EXCELSIOR project, by the ERATOSTHENES Excellence Research Centre for Earth Surveillance and Space-Based Monitoring of the Environment.
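The combination of hazard, vulnerability and exposure into a risk map can be sketched as below; the indicator weights and the class breaks are illustrative assumptions, not those of the published methodology:

```python
import numpy as np

def weighted_vulnerability(pop_density, pop_age, buildings, w=(0.4, 0.3, 0.3)):
    """Vulnerability as a weighted sum of normalised [0, 1] indicators
    (population density, population age, building characteristics).
    The weights are assumed for illustration."""
    return w[0] * pop_density + w[1] * pop_age + w[2] * buildings

def flood_risk_class(hazard, vulnerability, exposure):
    """Risk = hazard x vulnerability x exposure (all in [0, 1]), binned
    into four classes (0 = low .. 3 = very high) with assumed breaks."""
    risk = hazard * vulnerability * exposure
    return np.digitize(risk, [0.1, 0.3, 0.6])

# One raster cell: deep inundation, dense ageing population, high land value
v = weighted_vulnerability(0.9, 0.8, 0.7)
cls = flood_risk_class(np.array([0.9]), np.array([v]), np.array([0.95]))
```

In the actual workflow the three inputs are rasters (from HEC-RAS 2D, the census, and land value data), so the same expression is evaluated cell by cell over the whole basin.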
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: An integrated system for multi-hazard response based on multi-source EO and non-EO data: the contribution of IRIDE Service Segment

Authors: Giorgo Pasquali, Annalaura Di Federico, Chiara Francalanci, Paolo Ravanelli, Lucia Luzietti
Affiliations: e-GEOS S.p.A., Cherrydata Srl
Natural disasters such as earthquakes, hurricanes, floods, and wildfires pose significant challenges to societies worldwide. These phenomena not only result in devastating human and environmental losses but also place immense pressure on emergency response systems. In this context, satellite monitoring has emerged as a powerful tool for disaster management, offering comprehensive insights that enhance situational awareness. By providing high-resolution imagery, precise geolocation, and continuous updates, satellite technology enables more effective planning, rapid response, and resource allocation, ultimately mitigating the impact of such events. Here, the fundamental contribution of the European flagship program Copernicus Emergency Management Service (CEMS) - Rapid Mapping is indisputable. Nevertheless, to further enhance the benefit of this type of service for the Italian territory, the IRIDE Service Segment is developing a dedicated emergency service focused on Italy. IRIDE Service Segment S7 Emergency will stand out for its high level of automation and its provision of a comprehensive system to Italian institutions for emergency response. This system will leverage not only the currently available commercial constellations but also the IRIDE constellations, and will build upon state-of-the-art algorithms and methods, including AI, to reach an unprecedented level of automation and accuracy. The system will be cloud-based, ensuring easy scalability and high resilience. It will include everything necessary to efficiently meet the needs of Italian institutions. This starts with the Service Manager, which allows users to input essential information to activate the service quickly and easily.
It also features a specific Service Value Chain (SVC), responsible for acquiring and processing satellite data tailored to the type of event (e.g., earthquake, flood, landslide). Finally, the Exploitation Tool provides a platform for visualizing and utilizing the generated products, with the option to download them as needed. This tool will also enable the direct integration of products into end-user systems, allowing seamless visualization and incorporation of the outputs into the user’s operational workflow. Specifically, IRIDE S7 Emergency services will provide capabilities that exceed the current state of the art, not only in terms of processing algorithms and automation, as previously mentioned, but also in terms of service performance and the introduction of innovative products. Among these, the extremely short response times stand out: delineation products for areas affected by an event will be delivered within just 4 hours of satellite data availability, and damage assessment products within 9 hours. These rapid response times have a significant impact on emergency management, where the ability to act quickly makes a critical difference. By reducing these times, relevant authorities can act more effectively. IRIDE S7 Emergency innovation spans several thematic domains. For flood detection, for instance, a new methodology has been designed by CIMA to perform continuous flood monitoring in near real-time using on-demand SAR data. The most groundbreaking advancement in response times, however, comes with the introduction of the FIP (First Information Product), developed in cooperation with Cherrydata. While traditional product generation times have improved, the main bottleneck remains the waiting period for satellite data availability, which can often average up to 24 hours.
To address this challenge, IRIDE S7 Emergency introduces the FIP product to deliver preliminary information about the impact of the event even before satellite data becomes available. The FIP provides an estimation of the affected area within just 3 hours of activation, significantly improving response times and enabling earlier decision-making. The FIP is generated by collecting information about the event from social media and online news sources. Using Natural Language Processing (NLP), the system geolocates the information, correlating it to the specific event. This approach enables the creation of a geolocated map of information about the event, offering initial insights into the areas reported as most affected and the overall extent of the event. The FIP includes at least two deliveries: the first is made 3 hours after activation, and the second is delivered 6 hours after activation. The second delivery integrates additional information gathered in the interim, still before satellite data becomes available, to keep the end user informed about the evolving situation using the latest available sources. The aim of the FIP is to ensure that users remain continuously updated on the situation until satellite data can be utilized, bridging the gap in information during the critical early hours of an emergency. In conclusion, IRIDE S7 Emergency will provide Italian institutions with a system capable of generating products for emergency response in an effective and automated manner. This system will be tailored to meet the specific needs of Italian authorities, integrated into their systems, and enhanced by advanced algorithms. It will deliver performance and products that go beyond the current state of the art, setting a new standard for emergency management solutions.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: VALUESAFE project - Vulnerability of Assets and Losses in Multirisk Evaluations: Satellite Data for Financial Estimation. Combining Engineering Risk Analysis, Satellite Observations, and Artificial Intelligence

Authors: Alberto Ciavattone, Neri Banti, Emanuela Valerio, Adriano Nobile, Claudia Masciulli, Antonio Cosentino, Emanuele Del Monte, Paolo Mazzanti
Affiliations: S2R S.r.l., Viale Giovanni Amendola 24, 50121, NHAZCA S.r.l., Start-up of Sapienza University of Rome, Via Vittorio Bachelet 12, 00185, IntelligEarth S.r.l., Start-up of Sapienza University of Rome, Via Vittorio Bachelet 12, 00185
Vulnerability of Assets and Losses in mUltirisk Evaluations: SAtellite data for Financial Estimation (VALUESAFE) represents a groundbreaking advancement in real estate risk assessment, offering an integrated, multi-hazard evaluation platform powered by advanced satellite Earth Observation (SatEO) technology and cutting-edge image processing solutions. Designed to assess vulnerabilities and risks associated with seismic, geological, and flooding hazards, VALUESAFE addresses the growing demand for comprehensive, standardized, and actionable risk assessments across public and private sectors. The service is developed through the collaboration of three leading Italian companies, each contributing unique expertise, and it is supported by the ESA InCubed program. VALUESAFE addresses a significant gap in the market, where existing risk assessment methods are time-intensive, resource-heavy, and often lack standardization. For instance, Italy's annual expenditure on hydrogeological damage mitigation exceeds €3.3 billion, while earthquake recovery costs have reached €120 billion in recent decades. These figures underscore the urgent need for efficient and reliable tools to safeguard vulnerable assets, particularly in historical urban areas. The VALUESAFE platform operates on a multi-layered framework, beginning with territorial hazard evaluations, advancing to building-specific vulnerability assessments, and culminating in detailed economic impact analyses. By integrating remote sensing data, engineering insights, and financial metrics, the platform delivers certified reports tailored to stakeholder needs. These reports, validated by qualified professionals, provide a credible and practical decision-support tool. This comprehensive methodology not only enhances assessment reliability but also significantly reduces time and resource requirements. Key features of VALUESAFE include its ability to cater to diverse operational scales.
Public administrators can use the platform for territorial risk management, while private stakeholders such as insurers and property managers can assess risks for specific assets. The platform's flexibility ensures consistent evaluations across different building types and urban contexts, including historically significant structures. Furthermore, the incorporation of economic depreciation forecasts linked to disaster scenarios offers invaluable insights for resource allocation and investment planning. VALUESAFE leverages advanced technological solutions to streamline its processes. InSAR data enhances ground motion and structural stability assessments, while AI-driven image processing delivers precise evaluations of building conditions. These innovations enable the system to address seismic, ground instability, and flooding risks with tailored methodologies. By standardizing vulnerability assessments and integrating multi-source data, VALUESAFE achieves consistent results while saving time and resources. A user-friendly online platform enhances accessibility, allowing stakeholders to customize analyses and download certified reports efficiently. VALUESAFE aligns with global sustainability goals by emphasizing proactive risk management and the preservation of cultural heritage. Its comprehensive approach reduces reliance on post-disaster recovery efforts, minimizing economic and environmental impacts. Moreover, the platform's certified reports meet the needs of urban managers and insurers, providing reliable, actionable insights for long-term planning. VALUESAFE aims to establish new standards in real estate risk assessment by delivering scientifically robust, economically relevant, and operationally efficient solutions. Through its innovative methodologies and user-centric design, VALUESAFE addresses critical market needs, offering a transformative tool for safeguarding assets and enhancing urban resilience in the face of natural hazards.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Population Displacement and Response During Flood Events: Towards A Global Perspective

Authors: Ekta Aggarwal, Zhifeng Cheng, Shengjie Lai, Laurence Hawker, Andrea Gasparotto, Andrew J Tatem, Steve Darby
Affiliations: School of Geography and Environmental Science, University of Southampton, WorldPop, School of Geography and Environmental Science, University of Southampton, School of Geographical Sciences, University of Bristol, UK
Flooding, already the world’s most significant natural hazard, is expected to increase in frequency and intensity because of social and environmental change. Flood events can induce human mobility, both as an immediate adaptation to individual flood events and in terms of permanent mobility away from at-risk areas. However, accurately quantifying both short- and long-term mobility patterns across large areas remains challenging. Traditional approaches, such as the use of census data and household and travel surveys, have provided critical insights into migration induced by environmental stress but are limited in terms of their spatial and temporal resolutions and geographic scope. One potential way to help address the gaps in measuring population displacement and response during flood events is through the use of high-resolution human mobility data, for example as derived from Meta’s Data for Good database, and geospatial data. The gridded user count data from Facebook users, generated by the Data for Good programme at Meta, offers a rich source for tracking migration and displacement during crises such as disease outbreaks, flooding, and tropical cyclones across the globe, particularly in low- and middle-income countries where alternative mobility data are sparse. Leveraging anonymised mobile phone and internet location history data, this research investigates human mobility during extreme weather events, focusing on floods in socio-economically vulnerable regions. A high-resolution global flood database is employed alongside satellite-based nightlight data and Meta mobility data to provide near real-time insights into population movements and behaviours before, during, and after floods, providing greater insight into the impacts of flooding. Our findings may therefore be useful to civil defence and humanitarian agencies, enhancing their preparedness and response efforts in regions where flood infrastructure and resources are often limited.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: EO-enhanced Hydrology: How ESA EO R&D activities could enable an Early Warning System for smarter Drought Management – A case study of the 2022 French Droughts

Authors: Greg Sadlier, Luca Niccolai, Sara Cucaro, Alyssa Frayling
Affiliations: know.space
Note: Subject to ESA agreement to publicly share findings from our unpublished but non-confidential ‘Climate Crisis: Droughts - EO R&D activities for water resources management’ pilot case study for ESA (ref: Eleni Paliouras; Vanessa Keuck). The increasing frequency and severity of climate-related events, such as droughts, highlight the urgent need for innovative tools to support public sector decision-making. Earth Observation (EO) technologies offer transformative potential by providing high-resolution data, further enhanced by advanced analytical tools, including Artificial Intelligence (AI) and Machine Learning, to improve governance and enhance resilience. This presentation outlines a case study on the 2022 droughts in France, applying an analytical framework to evaluate the impact of five EO R&D activities on governance and their contributions to mitigating the social, economic, and environmental effects of droughts. The analytical framework is structured around the four pillars of climate change impacts: governance, social, economic, and environmental. It assesses the effects of extreme weather events by defining specific indicators, applying valuation methods (where relevant), and identifying appropriate data sources. Governance indicators capture improvements in decision-making and operational response efficiency, while economic indicators quantify cost savings or avoided losses. Social and environmental indicators measure reduced impacts on vulnerable communities and ecosystems. Designed to be adaptable, this framework provides a scalable tool for evaluating the impacts of other extreme weather events, offering actionable insights for policymakers and practitioners. The case study examines five EO R&D activities - Next Generation Gravity Mission (NGGM), DT-Hydrology, Soil Moisture, 4DMED-Hydrology, and AI4DROUGHT.
The latter, in particular, leverages AI to enhance drought monitoring and forecasting, providing advanced tools for analysing water cycle dynamics. Collectively, these activities, at varying stages of development and operation, improve data availability and decision-making tools, equipping practitioners with the means to anticipate and manage changes effectively. These advancements support more efficient operational responses, reducing the impacts of droughts on communities, industries, and ecosystems. Central to the analysis is the governance pillar of sustainable development, focusing on early warning systems and operational preparedness. The study quantifies socio-economic benefits, including potential cost savings achieved through enhanced early warning systems and response measures, while also exploring broader impacts across social, economic, and environmental dimensions. The findings demonstrate the critical role of sustained EO R&D investment in strengthening public sector governance, improving decision-making, and building resilience against extreme weather events.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: The Use of Satellite Technologies in Mapping Flood Extent and Analysis of Its Impact on the Availability of Ambulances in Flood Areas

Authors: Jakub Niedźwiedź, Adrian Bobowski, PhD Michał Lupa, Jakub Staszel
Affiliations: AGH University, Faculty of Geology, Geophysics and Environmental Protection, Space Technology Centre AGH
The use of satellite data has revolutionized crisis management. In the face of increasingly frequent and severe natural disasters, predicting the extent and locations of events such as floods has become crucial. Issues like unregulated riverbeds, urbanization through concreting, and excessive deforestation have exacerbated the problem of natural disasters. However, floods are not just about direct material losses or infrastructure damage. To address these challenges, we conducted a project analyzing the extent of a flood that struck southeastern Poland in September 2024 and its impact on ambulance routes and response times. Using satellite imagery, including both SAR (Synthetic Aperture Radar) and optical instruments, we delineated the flood extent in the most affected areas within the Lower Silesian and Opole Voivodeships. By integrating GPS data from ambulances, we superimposed a grid of points on road networks. Then we adjusted the road lengths in flooded areas to determine the fastest routes to emergency calls. After analyzing the changes in ambulance routes caused by inundated transport infrastructure, we created an ambulance access map highlighting areas cut off from emergency services during the flood. Our analysis revealed the necessity of considering indirect effects. Beyond impacting ambulance response times, the flood significantly reduced the availability of essential medical and logistical resources, complicating rescue coordination efforts. The findings from this project have broad potential applications in future crisis management. The identified challenges can help optimize planning for alternative routes and prioritize investments in disaster-resilient infrastructure. Future ambulance stations and algorithms for alternative route searches should account for flood-related infrastructure losses, ultimately improving the safety of residents who were previously at greater risk due to limited ambulance accessibility during natural disasters. 
The project's methodology can also be adapted to analyze larger areas. Furthermore, the flood extent map we developed can already be used to safeguard existing structures, enhancing the safety of people living in flood-prone areas. Potential stakeholders for such solutions include crisis management teams, governmental and local institutions, investors planning future developments in flood-risk areas, and residents seeking to assess the risk of ambulance inaccessibility to their homes.
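The routing analysis described above can be sketched as a standard shortest-path search in which road segments touching inundated locations are either removed or penalised. This is a minimal illustration, not the project's actual GIS workflow: the graph, travel times, and the `penalty` detour factor below are hypothetical.

```python
import heapq

def shortest_time(graph, start, goal, flooded=frozenset(), penalty=None):
    """Dijkstra over a road graph given as {node: [(neighbor, travel_time), ...]}.

    Edges with an endpoint in the flooded set are dropped when penalty is None
    (segment impassable), otherwise their travel time is multiplied by penalty
    (segment slowed by a detour). Returns the minimal travel time to the goal,
    or None if it is unreachable.
    """
    dist = {start: 0.0}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neigh, t in graph.get(node, []):
            if node in flooded or neigh in flooded:
                if penalty is None:
                    continue        # inundated: impassable
                t *= penalty        # inundated: slowed
            nd = d + t
            if nd < dist.get(neigh, float("inf")):
                dist[neigh] = nd
                heapq.heappush(queue, (nd, neigh))
    return None

# Hypothetical network: ambulance station "A", emergency call at "D".
roads = {"A": [("B", 4), ("C", 2)], "B": [("D", 5)], "C": [("D", 10)], "D": []}
```

Re-running the search with and without the flooded set reproduces the kind of before/after response-time comparison described in the abstract; nodes that return `None` under flooding correspond to areas cut off from emergency services.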
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Detection Of The Green Attack Stage Of Bark Beetle Infestation Using Sentinel-1 Time Series

Authors: M. Eng. Christine Hechtl, Andreas Schmitt, M. Sc. Sarah Hauser, Dr. Anna Wendleder, Dr. Marco Heurich
Affiliations: Hochschule München University Of Applied Sciences, Bavarian Forest National Park, Institute for Applications of Machine Learning and Intelligent Systems (IAMLIS), German Aerospace Center (DLR)
Innovative remote sensing approaches open up new dimensions in forest monitoring and, thanks to exhaustive and almost continuous surveying, support the protection and long-term strengthening of this complex ecosystem. Especially in times of climate change, the application of such methods is essential in order to actively meet the challenges. On the one hand, global warming, extreme precipitation events, long periods of drought and the simultaneous increase in biotic disturbance factors are threatening forest areas and changing their dynamics. On the other hand, the changing environmental conditions favour the spread of invasive species such as the bark beetle. The spruce trees, weakened by drought among other things, can no longer defend themselves sufficiently against the pests and consequently die. The sharp increase in forest mortality is a mammoth task for forest workers, making it more and more difficult to stop calamities such as bark beetle infestation. Reliable information on the current condition of the forest and its changes is therefore required as a basis for decision-making and the timely initiation of countermeasures. Due to the large spatial extent, terrestrial data collection is no longer feasible, which emphasizes the need for a remote sensing-based approach. Until now, bark beetle infestation has mainly been analysed using optical data. However, according to the current state of research, there is no practical method that can precisely and promptly detect bark beetle infestation in the “Green Attack Stage”, before the trees die. This is because the major challenge in detecting an infestation is that the initial signs are very subtle, and by the time the tree crown discolours it is already too late to combat it. In addition, optical data do not allow for continuous bark beetle monitoring due to cloud cover, especially over forested areas in low mountain ranges.
In this context, the question arises whether a Sentinel-1 time series can be used to depict the vitality development of spruce trees and thus their vulnerability to bark beetle infestation. The radar system emits microwaves that penetrate clouds and thus ensure continuous data collection. As a result, up to four images per month are continuously available for the analyses. The study area is located in the south-east of Germany in the Bavarian Forest National Park on the border with the Czech Republic. Together with the Šumava National Park, it is the largest contiguous protected area in Central Europe. Over the years, a unique biodiversity has developed on the national park's almost 25,000 hectares, as human intervention is only permitted under international guidelines, meaning that natural processes shape the ecosystem. One consequence is that bark beetle infestation is not combated and the deadwood remains in the forest, allowing the development of the forest from healthy to infested to deadwood to be observed. Since 1988, the deadwood has been digitized by the Bavarian Forest National Park Administration and provided in a data pool [1]. This database was used to train and validate the implemented machine learning models. Furthermore, additional data were taken into account. In view of the strong impact of drought on the forest ecosystem described in the literature, hydrological data, namely the Topographic Wetness Index (TWI) and the predominant soil type, are also included in the analysis. The TWI represents the relief-related soil moisture and is therefore a good indicator for soil hydrology, especially for the hilly terrain in the Bavarian Forest. Based on the digital terrain model and information on the water catchment area, the run-off behaviour can be determined, which is decisive for the available moisture in the soil [2].
Due to the different water storage capacities of the various soil types, the soil map at a scale of 1:25,000 is also used as part of the data basis [3]. From this, it is possible to deduce exactly which soil lies under the tree population and thus how high its water storage capacity is. In this study, Sentinel-1 data of the European Copernicus program from April to October in the years 2020 to 2023 was used. This period also corresponds to the swarming flight of the bark beetles. In addition, not only the bark beetle infestation is analysed, but the vitality development of the spruce trees up to three years before the infestation is also included in the analysis. The radar data was pre-processed at the German Aerospace Center (DLR) by the Multi-SAR system. The most important steps are as follows: 1) decomposition of the Sentinel-1 data into Kennaugh elements; 2) multi-looking to reduce noise; 3) orthorectification using the Copernicus digital elevation model; 4) radiometric calibration using the flattening-gamma approach. Using an orthogonal transform on hyper-complex bases, the Sentinel-1 data is decomposed into the individual Kennaugh elements K0, K1, K5 and K8 [4]. The total intensity K0, as the sum of VV and VH, is sensitive to the density and moisture of the vegetation. K1 represents the difference in intensity between VV and VH, which can be used to determine whether there is an increase in volume scatterers. These capabilities allow the standing forest structure to be captured very well. This type of processing differs from previously investigated and developed methods for bark beetle detection, as the polarizations VV and VH are considered together in fused and normalized Kennaugh elements and the data are sufficiently calibrated thanks to the flattening-gamma approach [5].
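For the dual-pol case described above, K0 is the sum and K1 the difference of the VV and VH intensities. The sketch below assumes linear-scale intensities and normalises K1 by K0, which is one plausible reading of the "fused and normalized" elements; it illustrates the decomposition only and is not the DLR Multi-SAR processor.

```python
def kennaugh_k0_k1(vv, vh):
    """Per-pixel Kennaugh elements from dual-pol backscatter intensities
    (linear scale): K0 = VV + VH (total intensity) and K1 = VV - VH.

    Returns (K0, K1/K0); normalising the difference by the total intensity
    yields a bounded [-1, 1] quantity that is robust to overall brightness
    changes between acquisitions.
    """
    k0 = [a + b for a, b in zip(vv, vh)]
    k1_norm = [(a - b) / (a + b) if (a + b) > 0 else 0.0
               for a, b in zip(vv, vh)]
    return k0, k1_norm
```

Applied per epoch of the time series, such elements form the per-pixel feature vectors that a model like the random forest described below can consume.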
As bark beetle infestation is a complex process in the forest ecosystem and is strongly linked to drought, additional environmental data is also used. These include the monthly precipitation sum, the topographical wetness index, the soil moisture and the prevailing soil classes. Based on this data, various random forest models are created, each of which predicts the vitality level of the conifers per epoch of the time series. The combination of the Kennaugh elements, the topographic wetness index and the soil classes leads to the best results for the models that have been trained with different features. In addition to visual validation, the high quality of the results of the random forest regression is also confirmed by the R² values of 83% and 89% and an RMSE of between 5 and 9 months. The latter indicates that, on average, the model forecasts deviate by half a year. In contrast, the inclusion of precipitation sum, soil moisture and water retention capacity does not lead to any improvement. This illustrates that a targeted selection of features is more important than the number of different features. If one also considers the influence of each feature on the decision in the Random Forest model, complex processes in the ecosystem can be understood. For example, the root structure of the spruce can be traced. Spruce trees are shallow-rooted and therefore spread their roots close to the surface. In drier areas that are further away from groundwater, they also form so-called sinker roots that grow vertically downwards. This relationship is reflected in the impact of the features on the prediction. The results show for the first time that the vitality development of coniferous trees from a healthy or already stressed state to bark beetle-induced deadwood can be derived using a Sentinel-1 time series.
By considering the intensities of VV and VH together as normalized Kennaugh elements in each image, the structure of the forest can be characterized in more detail and unique features regarding the water and chlorophyll content in the spruce needles can be derived. In this way, measures can be taken promptly even during the “Green Attack Stage” if necessary. This valuable insight should be incorporated and further developed in future research. In particular, the transferability of the models to other forest areas should also be included in future studies. By taking into account other environmental data, such as evapotranspiration, further complex interactions in the forest ecosystem could also be deciphered.
References:
[1] H. Latifi et al., “A laboratory for conceiving Essential Biodiversity Variables (EBVs)—The ‘Data pool initiative for the Bohemian Forest Ecosystem’”, Methods Ecol. Evol., vol. 12, no. 11, pp. 2073–2083, Nov. 2021, doi: 10.1111/2041-210X.13695.
[2] Julius Kühn-Institut, “Topographischer Feuchteindex”. Accessed February 10, 2024. https://wms.flf.julius-kuehn.de/cgi-bin/twi/qgis_mapserv.fcgi
[3] Bayerisches Landesamt für Umwelt, “Übersichtsbodenkarte 1:25.000”. Accessed October 19, 2024. https://www.lfu.bayern.de/boden/karten_daten/uebk25/index.htm
[4] A. Schmitt, A. Wendleder, and S. Hinz, “The Kennaugh element framework for multi-scale, multi-polarized, multi-temporal and multi-frequency SAR image preparation”, ISPRS J. Photogramm. Remote Sens., vol. 102, pp. 122–139, Apr. 2015, doi: 10.1016/j.isprsjprs.2015.01.007.
[5] D. Small, “Flattening Gamma: Radiometric Terrain Correction for SAR Imagery”, IEEE Trans. Geosci. Remote Sens., vol. 49, pp. 3081–3093, Sep. 2011, doi: 10.1109/TGRS.2011.2120616.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: C.05.04 - POSTER - Landsat Program and Science Applications

Landsat satellites have been providing continuous monitoring of the Earth’s surface since 1972. The free and open data policy of the Landsat program enables the global land imaging user community to explore the entire 52-year data record to advance our scientific knowledge and explore innovative uses of remote sensing data to support a variety of science applications. This session will focus on Landsat mission collaboration, on data and science applications of Landsat products that provide societal benefits, and on efforts by European and U.S. agencies to maximize their benefits alongside comparable European land imaging missions such as Copernicus Sentinel-2.

A diverse set of multi-modal science applications has been enabled with Landsat and Sentinel-2 harmonization and fusion with SAR, LiDAR, high-resolution commercial imagery, and hyperspectral imagery among others. Rapid progress has been achieved using the entire Landsat archive with access to high-end cloud computing resources. Landsat data and applications have revealed impacts from humans and climate change across the globe in land-cover, land-use, agriculture, forestry, aquatic and cryosphere systems.

Building on the 52+ year legacy and informed by broad user community needs, Landsat Next’s enhanced temporal (6-day revisit), spatial (10 – 60 m), and superspectral (21 visible to shortwave infrared and 5 thermal bands) resolution will provide new avenues for scientific discovery. This session will provide updates on Landsat missions and products, and collaboration activities with international partners on mission planning, data access, and science and applications development.

We invite presentations that demonstrate international collaboration and science advancements on the above topics. We also invite presentations on innovative uses of Landsat data alone or in combination with other Earth observation data modalities that meet societal needs today and in coming decades.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Global Evaluation of Temporal Consistency and Uncertainty in Vegetation Indices Derived from NASA's Harmonized Landsat and Sentinel-2 (HLS) Surface Reflectance Product

Authors: Qiang Zhou, Margaret Wooten, Christopher Neigh, Junchang Ju, Zhe Zhu, Petya Campbell, Madhu Sridhar, Brad Baker
Affiliations: Science Systems and Applications, Inc (SSAI), contractor to NASA GSFC, NASA Goddard Space Flight Center, University of Maryland, Department of Natural Resources and the Environment, University of Connecticut, Joint Center for Earth Systems Technology (JCET), University of Maryland, NASA Marshall Space Flight Center, University of Alabama in Huntsville
NASA's Harmonized Landsat and Sentinel-2 (HLS) project recently released a suite of Vegetation Index (VI) products derived from HLS Landsat 30 m (L30) and Sentinel-2 30 m (S30) surface reflectance data. HLS data provide observations every 1.6 days on a global average, regardless of cloud cover, and every 2.2 days in the most data-scarce tropical regions when data from all four satellites are available. VIs are useful for monitoring vegetation dynamics, such as forest loss, crop growth, and fire disturbance severity and recovery, among many other applications. To ensure reliable data for scientific applications, the temporal consistency of HLS VIs is important. Previous evaluations of other VI products have often relied on field data or other existing products, which can be costly or make it hard to disentangle discrepancies caused by differing production algorithms. The HLS dataset provides a unique opportunity for consistency assessment, as same-day L30 and S30 images of the same geographic areas, acquired approximately 30 minutes apart, are available worldwide. In this study, we evaluated 21 VIs derived from 545 same-day L30 and S30 image pairs, encompassing diverse land cover types globally. We randomly selected over 136 million cloud-free pixels from these image pairs. We calculated the normalized Root Mean Square Deviation (RMSDIQR) and R² for each VI, and found high consistency (R² > 0.94) for most VIs, except for the Chlorophyll Vegetation Index (CVI; R² = 0.5). VIs with lower consistency were typically designed for specific applications and land covers (e.g., crop chlorophyll). Therefore, we stratified the pixel pairs by Moderate Resolution Imaging Spectroradiometer (MODIS) Land Cover Types (MCD12Q1 Version 6.1). The RMSDIQR and R² were reported for each combination of vegetation type and VI. We also investigated factors contributing to discrepancies. Large View Azimuth Angle Differences (VAD) (> 125°) and high Solar Zenith Angles (SZ) (> 60°) increased discrepancies in most of the VIs.
Large VAD indicates forward/backward scattering of the pixel pairs, and high SZ occurs in the high- or mid-latitude regions during the winter season. Additionally, we analyzed discrepancies across different levels of aerosol optical thickness as indicated by the HLS quality assessment layer, where a cloud-free pixel can have a low, moderate, or high aerosol optical thickness level. We used the VIs derived from low aerosol level cloud-free pixels as the reference to evaluate the discrepancy associated with moderate or high aerosol levels. Low-low or low-moderate (moderate-low) aerosol level pixel pairs showed the best agreement, while low-high or high-low aerosol level pairs exhibited substantial discrepancies, indicating higher uncertainty in VIs derived from high aerosol level observations. Even in low-low aerosol level pairs, some VIs showed increased discrepancies for extreme VI values. This behavior may be attributed to soil background influence or to noise in areas with very low surface reflectance. We report specific VI value ranges with lower uncertainty, providing valuable guidance for scientific applications. The observed discrepancies and uncertainties associated with VAD, SZ, and aerosol levels highlight limitations in current atmospheric and BRDF correction algorithms. Our analysis offers essential insights for HLS data users and future algorithm development.
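The pairwise consistency metrics used above can be sketched as follows. The exact RMSDIQR definition used by the HLS team may differ; the sketch assumes the common convention of normalising the RMSD by the interquartile range of the reference VI, and treats one sensor's retrieval as a prediction of the other's for R².

```python
def consistency_metrics(x, y):
    """R^2 and IQR-normalised RMSD between paired VI values, e.g. same-day
    L30 (x) and S30 (y) retrievals of one index at matched pixels.
    Assumes y has non-zero spread (more than one distinct value).
    """
    n = len(x)
    mean_y = sum(y) / n
    ss_res = sum((b - a) ** 2 for a, b in zip(x, y))  # x taken as prediction of y
    ss_tot = sum((b - mean_y) ** 2 for b in y)
    r2 = 1.0 - ss_res / ss_tot
    rmsd = (ss_res / n) ** 0.5
    s = sorted(y)
    iqr = s[(3 * n) // 4] - s[n // 4]  # crude quartiles; fine for large n
    return r2, rmsd / iqr
```

For identical pairs the function returns (1.0, 0.0); growing discrepancy between the two sensors drives R² down and the normalised RMSD up.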
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Forest Disturbances and Vulnerability mapping, preliminary results

Authors: Dr Giovanni D'Amico, Saverio Francini, Ruben Valbuena, Dr Gherardo Chirici
Affiliations: Department of Agriculture, Food, Environment and Forest Science and Technology (DAGRI), University of Florence, Department of Science and Technology of Agriculture and Environment (DISTAL), University of Bologna, Department of Forest Resource Management, Swedish Universityof Agricultural Sciences (SLU), Fondazione per il Futuro delle Città
Climate change and environmental stressors negatively affect forest ecosystems and biodiversity. Climate-smart forestry and restoration are acknowledged as global solutions in the European Forestry Strategy, which prioritizes sustainable management for biodiversity and climate resilience, in addition to promoting forests' multifunctionality. Consequently, understanding the effects of forest management and how forests adapt to climate change is crucial. However, a lack of data hinders these investigations; standardized monitoring programs across Europe are therefore essential for efficient planning and mitigation. In this context, the European project FORWARDS aims to bridge the current separation between ground and satellite forest information and to develop the ForestWard Observatory, a European observatory of forest climate change impacts. Specifically, based on the Google Earth Engine cloud computing capabilities, we processed approximately two hundred thousand Landsat images to provide four decades (1984-2023) of Europe-wide disturbance mapping and characterization. To characterize each detected forest change, several parameters were predicted, including the severity of the disturbance, its persistence, and the number of years the forest needed to recover. Next, this detailed disturbance information was used to estimate the per-pixel vulnerability of the forest to that disturbance, yielding comprehensive information on European forest disturbances. To facilitate this challenging procedure, we developed a Google Earth Engine application that enables visualization, filtering, and downloading of each detected forest disturbance parameter.
Within this European forest harmonization framework, we are developing a multi-temporal forest disturbance truth map by integrating historical multispectral Landsat data with the most recent and accurate Sentinel-2 data. On the one hand, this harmonized dataset will support the implementation of our application, in which the user can visualize all the multi-temporal disturbance parameters for a deeper understanding of European disturbances. On the other hand, these data are crucial as multifunctionality-related variables and will constitute the basis for wall-to-wall mapping of European forests' vulnerability and resilience.
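A per-pixel disturbance characterisation of the kind described (timing, severity, years to recover) can be sketched on an annual spectral-index series. The drop-detection rule, the recovery fraction, and the assumption of consecutive annual values below are all illustrative, not the project's algorithm.

```python
def disturbance_parameters(index_by_year, recovery_frac=0.9):
    """Characterise a disturbance from an annual spectral-index series.

    index_by_year: {year: index value}, assumed to contain consecutive years.
    The disturbance year is taken as the largest year-on-year drop; severity
    is that drop; recovery is the number of years until the index regains
    recovery_frac of its pre-disturbance level (None if it never does).
    Returns (disturbance_year, severity, years_to_recover).
    """
    years = sorted(index_by_year)
    drops = {y2: index_by_year[y1] - index_by_year[y2]
             for y1, y2 in zip(years, years[1:])}
    dist_year = max(drops, key=drops.get)
    pre_level = index_by_year[dist_year - 1]   # requires consecutive years
    severity = drops[dist_year]
    for y in years:
        if y > dist_year and index_by_year[y] >= recovery_frac * pre_level:
            return dist_year, severity, y - dist_year
    return dist_year, severity, None
```

Running such a function over every pixel's time series yields exactly the kind of per-pixel parameter layers (severity, persistence, recovery time) that the abstract describes feeding into the vulnerability estimation.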
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Using Landsat Evapotranspiration and Climate Data for Estimating High-Resolution Gridded and Field-scale Irrigation Water Use and Groundwater Withdrawals in the Western U.S.

Authors: Dr. Sayantan Majumdar, Rahel Pommerenke, Mr. Thomas J. Ott, Dr. Justin L. Huntington, Dr. Ryan Smith, Mr. Peter ReVelle, Mr. Matt Bromley, Mr Md Fahim Hasan, Mr. Christopher Pearson, Mr. Blake Minor, Mr. Charles G.
Affiliations: Desert Research Institute, Colorado State University
In the Western United States (U.S.), the combination of ongoing and projected droughts, rising irrigation water demands, and population growth is expected to intensify groundwater consumption. Despite the pressing need to address these challenges, most irrigation systems in this region are not equipped with the flowmeters required to monitor groundwater withdrawals, which is crucial to implementing sustainable water management practices. However, metering is not a trivial solution, as meters can often be faulty or inadequately calibrated, resulting in discrepancies in the recorded readings. Therefore, developing reliable and efficient solutions for monitoring groundwater withdrawals is paramount in addressing the urgent water management concerns in the Western U.S. The existing methods for estimating withdrawals either entail significant costs and time (e.g., process-based models) or are not suited to support local-scale water management. Building on our prior research, here we rely on Landsat actual evapotranspiration (ET) from OpenET, Landsat-derived irrigation masks (IrrMapper), irrigation data (field boundaries, water source type), and climate datasets (gridMET, CONUS404, Daymet) to estimate annual groundwater withdrawals, irrigation water use (i.e., consumptive use), and irrigation efficiencies in Nevada, Oregon, and Arizona. We use statistical (linear regression and bootstrapping) and machine learning (Random Forests, XGBoost, LightGBM) approaches and compare our groundwater withdrawal estimates with in-situ meter data at multiple spatial scales: field (30-100 m), local (2 km), and individual groundwater basins. We also evaluate these regression models based on temporal holdouts (leaving out multiple years from the model training) and spatial holdouts (leaving out multiple groundwater basins).
Our models explain 50%-80% of the variance in withdrawal depths and 90% of the variance in withdrawal volumes across these spatial scales and evaluation strategies. The estimated irrigation efficiencies (80%-90%) also align with known irrigation system efficiencies in the study areas (Nevada, Oregon, and Arizona). While these groundwater withdrawal estimates can be further improved, we consider our approach more accurate than simply relying on common water right duties, potential crop ET-based estimates, or assumed values. Ultimately, we aim to empower water resource communities by improving water budget information and facilitating the implementation of groundwater management plans throughout this region.
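The relationship between consumptive use, irrigation efficiency, and gross withdrawal that underlies these estimates can be sketched with a simplified field-scale water balance. The function and its inputs are illustrative stand-ins for the statistical and machine-learning models actually used, and effective precipitation is treated as a given input.

```python
def estimate_withdrawal(et_mm, eff_precip_mm, efficiency, area_ha):
    """Simplified water balance: the net irrigation requirement (consumptive
    use) is actual ET minus effective precipitation; dividing by irrigation
    efficiency converts it to the gross groundwater withdrawal.

    Returns (withdrawal depth in mm, withdrawal volume in megalitres).
    """
    consumptive_use = max(et_mm - eff_precip_mm, 0.0)  # mm of irrigation water consumed
    depth = consumptive_use / efficiency               # mm actually pumped (gross)
    volume_ml = depth * area_ha / 100.0                # 1 mm over 1 ha = 10 m^3 = 0.01 ML
    return depth, volume_ml
```

Inverting the same relation (efficiency = consumptive use / metered withdrawal) is how field-scale irrigation efficiencies like the 80%-90% values above can be back-calculated where meter data exist.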
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: The ESA Landsat 1-5 MSS Analysis Ready Data Products, an initiative to extend multispectral surface reflectance time series back to the 1970s

Authors: SEBASTIEN SAUNIER, Fay Done, Samantha Lavander, Sabrina Pinori, Roberto Biasutti, Philippe Goryl
Affiliations: Telespazio France, Telespazio Vega UK, Serco, ESA/ESRIN
The land monitoring community expects consistent and harmonised datasets spanning a significant period of time in order to derive Essential Climate Variables (ECVs). Within this context, the ESA Landsat L1 data archive, which covers the entire duration of the NASA/USGS Landsat Programme (initiated with the launch of Landsat 1 in 1972), provides an outstanding source of data. The ESA Data Services Initiative (DSI) and Systematic Landsat Processor (SLAP) projects (2010-2020) provided a good opportunity to reach some major milestones with regard to Landsat Level 0 (L0) data consolidation, Level 1 (L1) data processing and, finally, Landsat product data quality (Saunier, 2017). Many IDEAS/QA4EO (ESA contracts) experiments showed that, after the DSI bulk reprocessing, the ESA archive is compatible with NASA Collection 1 products (USGS website) and can be used to produce consistent long time series (proceedings of the Multi Temporal conference, 2017; Saunier, 2017). In order to optimize dataset consistency and interoperability with other sources, the Committee on Earth Observation Satellites (CEOS) proposed the concept of Analysis Ready Data (ARD) (CEOS ARD website). Extending the multispectral record back to the 1970s in a CEOS ARD-compatible way is challenging, but definitely crucial for global change science and applications. In this presentation, we introduce the ARD self-assessment framework and its translation to the context of the ESA MSS Level 1C products. Then, starting from the product family specification items and their associated threshold levels, different options for technical improvements, captured from the results of algorithm and processing experiments, are detailed. It is shown that, besides metadata and image quality improvements (missing data), improvements in cloud shadow masking, geometric correction and atmospheric correction would make ESA MSS data CEOS ARD compatible.
The presentation demonstrates that technical solutions exist and are feasible, largely thanks to major achievements of recent decades, notably in the fields of artificial intelligence, computer vision, processing performance and climatological data reanalysis.
References:
S. Saunier, F. Done, S. Lavender, R. Biasutti, P. Goryl, “On the use of Radial Basis Functions to improve geometric accuracy of the ESA Landsat MSS historical archive”, VH-RODA 2024, ESRIN, December 2024 (poster).
S. Saunier, “Bulk processing of the Landsat MSS/TM/ETM+ archive of the European Space Agency: an insight into the level 1 MSS processing”, in Image and Signal Processing for Remote Sensing XXIII, J. A. Benediktsson, Ed., Warsaw, Poland: SPIE, Oct. 2017, p. 1. doi: 10.1117/12.2278633.
ESA Landsat MSS Catalog: https://landsatdiss.eo.esa.int/socat/LandsatMSS/
USGS Website, Landsat Collection 1: https://www.usgs.gov/landsat-missions/landsat-collection-1
S. Saunier et al., “European Space Agency (ESA) Landsat MSS/TM/ETM+/OLI archive: 42 years of our history”, Brugge, Belgium, June 2017. http://ieeexplore.ieee.org/document/8035252/
CEOS ARD Website: https://ceos.org/ard/
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Aboveground biomass prediction in tropical forests with a multi-modal approach and temporal features from HLS data

Authors: Rodrigo Leite, Dr. Qiang Zhou, Margaret Wooten, William Wagner, Dr. Christopher Neigh
Affiliations: NASA Postdoctoral Program Fellow, Goddard Space Flight Center, Biospheric Sciences Laboratory, NASA Goddard Space Flight Center, Science Systems and Applications, Inc (SSAI)
Quantifying and monitoring aboveground biomass (AGB) in tropical forests is essential for supporting conservation and restoration initiatives in these ecosystems. NASA’s Global Ecosystem Dynamics Investigation (GEDI) lidar data integrated with multisource remote sensing imagery has been used to provide AGB predictions at large scales. Tropical forests, however, often present high growth rates and dense canopy cover that can limit the ability of this approach to fully capture the AGB variability. Understanding these limitations across forest age and AGB ranges is essential for enhancing AGB predictions for forest monitoring over time and informing remote sensing-based growth models. The high temporal coverage of products such as the Harmonized Landsat and Sentinel-2 (HLS) offers a valuable opportunity that has not been fully exploited. In this study, we explore a multi-modal data fusion approach leveraging HLS to predict AGB in tropical forests. The initial experiments focus on assessing forest patches located in Minas Gerais, Brazil, within the Atlantic Forest domain. The two main vegetation types in the region include Dense Rainforest and Seasonally Dry Semi-deciduous Forest. We calculated vegetation indices from HLS annual mosaics to use as predictors in a Random Forest (RF) model, with GEDI L4A AGB serving as the reference dataset. The upscaling approach consists of extracting values from the layer-stack of vegetation indices intersecting the GEDI footprints, training the model to predict AGB, and applying the model to the entire image stack. A subset of 11,452 footprints was used in this analysis, with 70% of the data used for training and 30% to validate the model. The results show a model with an R² of 0.69 and a relative RMSE of 33.5%, and an observed underestimation of AGB above 200 Mg/ha. This suggests the need to incorporate additional metrics to capture the full range of AGB, which can exceed 300-400 Mg/ha in the study domain.
Ongoing analysis will incorporate and evaluate phenology-specific temporal features derived from HLS time series and Sentinel-1 vegetation indices to enhance predictions and explore seasonal and annual phenological cycles. We also use a Landsat-based historical land cover classification dataset to explore the influence of vegetation age on AGB variability. These efforts aim to take advantage of HLS time series and multi-modal imagery data fusion with GEDI to enhance monitoring and management of tropical forest ecosystems.
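The upscaling workflow described (extract VI values at GEDI footprints, train a regression model, apply it to every pixel of the VI stack) can be sketched as below. To keep the example self-contained, a trivial binned-mean regressor stands in for the Random Forest; the data and bin count are hypothetical.

```python
def train_binned_regressor(vi_samples, agb_samples, n_bins=10):
    """Stand-in for the RF in the footprint-based upscaling workflow: learn
    the mean reference AGB per vegetation-index bin from footprint samples,
    and return a predict(vi) function applicable to every pixel."""
    lo, hi = min(vi_samples), max(vi_samples)
    width = (hi - lo) / n_bins or 1.0          # guard against zero spread
    sums, counts = [0.0] * n_bins, [0] * n_bins
    for v, a in zip(vi_samples, agb_samples):
        i = min(int((v - lo) / width), n_bins - 1)
        sums[i] += a
        counts[i] += 1
    overall_mean = sum(agb_samples) / len(agb_samples)
    means = [s / c if c else overall_mean for s, c in zip(sums, counts)]

    def predict(v):
        i = min(max(int((v - lo) / width), 0), n_bins - 1)
        return means[i]

    return predict
```

In practice the 70/30 footprint split mentioned in the abstract would be applied before training, with the 30% holdout used to compute R² and RMSE; the returned `predict` is then mapped over the wall-to-wall VI mosaic.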
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Leveraging the temporal benefits of Harmonized Landsat and Sentinel-2 (HLS) data for modeling fine-scale land cover and land use change in complex landscapes

Authors: Margaret Wooten, Jordan Caraballo-Vega, Molly Brown, Konrad Wessels, Mark Carroll, Minh Tri Li, Aziz Diouf, Modou Mbaye, Christopher Neigh
Affiliations: NASA GSFC, University of Maryland College Park, George Mason University, Centre de Suivi Ecologique, Senegalese Agricultural Research Institute
In sub-Saharan West Africa, accelerating population growth and worsening effects of climate change are further straining natural resources and threatening smallholder agricultural productivity. As such, understanding the spatial and temporal dynamics of land cover and land use (LCLU) changes is vital for the majority of people who rely heavily on rainfed subsistence agriculture to support their livelihoods. However, this region is characterized by a mosaic of small, irregularly defined agricultural fields and grasslands, interspersed with sparse pockets of savannah woodlands and individual tree stands, making it notoriously difficult to monitor with traditional remote sensing approaches and moderate to coarse resolution satellite data. Moreover, extreme variations in phenology, significant burnt area during the dry season, and a scarcity of cloud-free data during the rainy season present additional obstacles. These challenges complicate LCLU modeling in this region, especially for land use classes that are difficult to differentiate from one another without consistent cloud-free observations during the growing season (e.g. cultivated crop or fallow field). To address these challenges, we developed a near-autonomous spatiotemporal data fusion framework that combines objects derived in an unsupervised segmentation from commercial very-high resolution (VHR) multispectral satellite data with temporal patterns obtained from coarser spatial resolution data and their derived vegetation indices (VIs). Our workflow offers flexibility for the specification of the underlying time series data, provided this data is represented at the necessary temporal interval (e.g. monthly) and at an adequate spatial resolution. VIs derived from optical satellite imagery have long been used for LCLU mapping, but the significant presence of clouds and the spatial and temporal resolution trade-offs inherent in existing global multispectral satellite constellations (e.g. 
MODIS, Landsat, Sentinel-2) have traditionally hindered our ability to obtain reliably cloud-free observations at the spatial and temporal scales necessary for skillfully predicting land cover and land use classes within our study domain. In response, we have typically relied on Sentinel-1’s cloud-penetrating Synthetic Aperture Radar (SAR) satellite data to provide the predictive temporal data for our model. But thanks to a joint initiative between the National Aeronautics and Space Administration (NASA) and the United States Geological Survey (USGS) to produce a seamless surface reflectance product from NASA’s Landsat and the European Space Agency’s Sentinel-2 satellites (Harmonized Landsat and Sentinel-2; HLS), we are now able to derive VIs at a sufficiently high temporal resolution (every 1.5 to 3 days on average) without sacrificing the spatial detail necessary for resolving these fine-scale LCLU classes and their changes. Here we present the results of this workflow applied in Senegal, where we identify LCLU classes such as agroforestry, cultivated and fallow agriculture, urban area, and dense or degraded forests using a multivariate One-Dimensional Convolutional Neural Network (1DCNN) model, fueled by a combination of single-date VHR multispectral imagery from Maxar’s WorldView constellation and time-series data from HLS and SAR. Independent validation of the initial model results, substantiated by in-situ observations collected on a recent field campaign to Senegal, reveals an overall classification accuracy greater than 75%. The science output from this model includes a spatiotemporal land use database that can be used for LCLU change detection and subsequent efforts to guide informed policy and land management decisions. Our approach highlights the usefulness of multi-modal data fusion strategies and the inter-mission data integration efforts of the HLS project for addressing important societal challenges.
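The temporal predictors fed to a 1D-CNN of the kind described above can be sketched as monthly vegetation-index curves per image object. The sketch below uses synthetic reflectance values; the band names, array shapes, and (samples, timesteps, channels) layout are assumptions for illustration only.

```python
# Sketch (with synthetic reflectance) of deriving a monthly NDVI feature
# vector per image object, the kind of temporal predictor a 1D-CNN consumes.
import numpy as np

rng = np.random.default_rng(0)
n_objects, n_months = 100, 12

# Synthetic monthly mean RED and NIR surface reflectance per object
red = rng.uniform(0.02, 0.30, size=(n_objects, n_months))
nir = rng.uniform(0.20, 0.50, size=(n_objects, n_months))

# One 12-step NDVI curve per object
ndvi = (nir - red) / (nir + red)

# Stack per-object temporal features for a 1D-CNN: (samples, timesteps, channels)
features = ndvi[:, :, np.newaxis]
```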

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Continuous Change Detection and Classification using NASA’s Harmonized Landsat and Sentinel-2 (HLS) Data in Google Earth Engine

Authors: Thuy Trang Vo, Junchang Ju, Qiang Zhou, Bradley Baker, Brian Freitag, Pontus Olofsson, Christopher Neigh, Madhu Sridhar
Affiliations: University of Alabama in Huntsville, Earth System Science Interdisciplinary Center, University of Maryland, Science Systems and Applications, Inc (SSAI), University of Alabama in Huntsville, NASA Marshall Space Flight Center, NASA Marshall Space Flight Center, Earth System Science Interdisciplinary Center, University of Maryland
NASA’s Harmonized Landsat and Sentinel-2 (HLS) global surface reflectance products are generated by combining input data from the OLI and MSI sensors aboard NASA/USGS’s Landsat 8/9 and ESA’s Sentinel-2A/B satellites, respectively. The analysis-ready HLS dataset is produced at a medium spatial resolution of 30 m with near-global coverage, enabling land observation every 2-3 days. The production of harmonized surface reflectance on a common MGRS grid involves several processing steps, including atmospheric correction of Top of Atmosphere (TOA) data, cloud masking, normalization of bi-directional view angle effects, and bandpass adjustment to account for sensor-level differences, with OLI as the reference. The dataset has undergone rigorous validation and consistency evaluation. The data harmonization is found to be effective, and the data products are therefore suitable for quantitative analyses. Compared to the revisit times of the individual constituent satellites, the HLS virtual constellation offers significantly higher observational frequency. The HLS data archive exceeds 4 PB (~30M products) and extends nearly a decade (HLS Landsat component L30: April 2013 onwards; HLS Sentinel-2 component S30: Nov. 2015 onwards). This rich dataset is useful for many applications, such as disaster response and vegetation monitoring. In particular, the availability of frequent surface reflectance observations greatly benefits time-series analysis in uncovering seasonality and long-term trends. For geospatial analysis with large datasets and at global scales, Google Earth Engine (GEE) has emerged as a powerful platform that removes barriers for users by offering convenient tools and computing resources. HLS L30 data products are available on GEE and, as of December 2024, HLS S30 data is being actively ingested.
The goal of this study is to demonstrate the benefits of the HLS data series compared to Landsat-only or Sentinel-2-only data stacks by using the Continuous Change Detection and Classification (CCDC) algorithm available on GEE. The study will focus on a few key applications and highlight the ease of use at different scales by providing examples of pixel-based time series and spatial visualizations. These analyses can be further extended to other land cover applications to derive useful insights by leveraging the benefits of the HLS dataset and GEE.
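The core idea behind CCDC-style analysis can be illustrated in a few lines: fit a harmonic model to a stable history period of a per-pixel reflectance time series, then flag a change when new observations depart from the model by several times the model RMSE. This is a simplified toy sketch with synthetic data, not the GEE implementation.

```python
# Simplified per-pixel illustration of the CCDC idea: harmonic fit over a
# stable history, then flag observations whose residuals exceed 4 * RMSE.
# Synthetic reflectance data with an abrupt disturbance injected in year 4.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 6, 1 / 24)                          # ~6 years, ~15-day cadence
series = 0.3 + 0.1 * np.sin(2 * np.pi * t) + rng.normal(0, 0.01, t.size)
series[t >= 4.0] -= 0.15                             # abrupt disturbance in year 4

train = t < 3.0                                      # stable history period
A = np.column_stack([np.ones_like(t),
                     np.sin(2 * np.pi * t),
                     np.cos(2 * np.pi * t)])         # annual harmonic design matrix
coef, *_ = np.linalg.lstsq(A[train], series[train], rcond=None)

resid = series - A @ coef
rmse = np.sqrt(np.mean(resid[train] ** 2))
breaks = np.abs(resid) > 4 * rmse                    # candidate change observations

first_break_time = t[breaks][0]                      # first flagged observation
```

The operational algorithm adds per-band models, iterative model updates, and consecutive-observation rules before declaring a break, but the residual test above is the essential mechanism.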

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A.06.01 - POSTER - Geospace dynamics: modelling, coupling and Space Weather

This session aims to capture novel scientific research outcomes in the field of Geospace dynamics, encompassing atmosphere, ionosphere, thermosphere, and magnetosphere modelling and coupling. A significant contribution is expected from Space Weather science using, but not limited to, data from ESA Earth Observation missions such as Swarm (in particular FAST data) and SMOS. The objective of the session is to collect recent findings that improve the knowledge and understanding of the dynamics and coupling mechanisms of the middle and upper atmosphere and their link with the outer regions that are mainly driven by the Sun and the solar cycle, with a focus on data validation and on Space Weather events. We also solicit results from simulations, ground-based observatories and other heliophysics missions, in particular those demonstrating synergistic combinations of these elements.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Swarm – SMOS synergies for Space Weather events monitoring

Authors: Roberta Forte, Raffaele Crapolicchio, Enkelejda Qamili, Vincenzo Panebianco, Dr. Lorenzo Trenchi, Federica Guarnaccia, Veronica Gonzalez Gambau, Dr. Nuria Duffo
Affiliations: Serco For Esa, CSIC Institute of Marine Science, Universitat Politecnica de Catalunya
ESA Earth Explorer missions pioneer new space technology and observe our planet to help answer key science questions about Earth’s systems; in some cases they can go beyond their original scientific purpose and benefit other fields of science. Moreover, they enable synergies that open up further applications in different fields. An example of a fruitful synergy between two completely different missions, fostering new objectives beyond their original ones, is Swarm and SMOS. Both of these Earth Explorer missions can be advantageous for Space Weather applications: their distinctive characteristic is the possibility to observe Space Weather phenomena from different points of view. The SMOS mission is dedicated to soil moisture and ocean salinity measurements, but within these measurements the on-board Microwave Imaging Radiometer with Aperture Synthesis (MIRAS) captures a signal from the Sun, from which the Solar Flux in L-band, with its polarization component, can be derived. Swarm's original purpose is to characterize Earth’s geomagnetic, ionospheric and electric fields and their temporal variation, through measurements of Earth’s magnetic field and plasma parameters with a peculiar constellation configuration of 3 satellites. With its new “Fast” processing chain, Swarm is able to provide data with minimal delay with respect to acquisition time, making the mission suitable for Space Weather applications. Moreover, both Swarm and SMOS provide measurements of Vertical Total Electron Content (VTEC), very useful for evaluating the impact of Space Weather phenomena on the ionosphere. This poster aims to demonstrate how these two missions can enhance their contributions to Space Weather by combining their distinct observations, revealing new possible applications.
Examples of this collaboration will be presented through analyses of the same events observed by SMOS and Swarm, with a focus on high-impact events of Solar Cycle 25, involving several parameters: the Solar Flux in L-band and its circular polarization component measured by SMOS; the detection of solar radio bursts from SMOS solar flux variations, compared with GNSS effects and radio blackouts on the ground; variations in the geomagnetic field, plasma density, plasma temperature and Field-Aligned Currents measured by Swarm; and VTEC measurements from both missions.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Escape of ions from Earth under different magnetospheric conditions

Authors: Kristina Kislyakova, Yury Sasunov, Yanina Metodieva, Colin
Affiliations: University Of Vienna
Atmospheric loss processes, together with sources and sinks at the surface, govern the evolution of atmospheric composition. At present-day Earth, the dominant escape process is the polar wind, which predominantly removes ionized oxygen atoms from the polar regions. Although a multitude of observations covering atmospheric escape for different solar activity conditions exist, theoretical and numerical aspects of the polar outflow are still not entirely understood. In this work, we investigate the role different magnetospheric conditions play in governing polar wind escape rates from the Earth. We use the Space Weather Modeling Framework (SWMF) and the BATS-R-US code to determine the magnetospheric structure in the polar areas of the Earth for quiet and storm conditions. The code output includes the configuration of the magnetic field in the vicinity of the planet (using the Solar Corona and Inner Heliosphere modules) for a given solar magnetic field, and plasma parameters in the vicinity of the planet. The code offers significant flexibility and allows us to study a wide range of quiet and storm conditions. Using the magnetic and electric field distributions calculated with the SWMF, we apply the test particle approach to track individual ions along the magnetic field lines and collect statistics on atmospheric ions that are lost. Depending on their energy, cold ions can end up in different regions of the magnetosphere, such as the magnetopause, the distant tail, and the ring currents, or fall back into the atmosphere. The idea of the test particle approach is to numerically calculate the trajectories of independent, non-interacting charged or uncharged particles in well-known external force fields. For applications of the test particle approach to planetary magnetospheres, it is common to use the magnetic and electric fields from global models such as the SWMF.
To obtain a general picture of the percentage of particles that escape, we will study multiple test particles with different parameters such as initial energies, locations, and pitch angles (which can be inferred from the DSMC model) to accumulate statistics. As a result, we will obtain the distribution of locations, speeds and final destinations of ions in the magnetosphere and ionosphere. One of the main advantages of the test particle approach is that it avoids very expensive calculations (in terms of computational time and computer resources) while still reproducing the main features of the studied phenomena. We show that magnetospheric parameters, together with the current solar conditions, play an important role in atmospheric escape. We discuss the influence of atmospheric loss processes on the Earth’s atmosphere over its history, and the importance of such modeling for upcoming missions such as SMILE (Solar wind Magnetosphere Ionosphere Link Explorer).
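The basic building block of such test-particle tracing is integrating a single ion's equation of motion in given E and B fields. A minimal sketch follows, using the standard Boris push in uniform fields; in the real study the fields would come from the SWMF output, and the field values and step count here are our own illustrative choices.

```python
# Minimal test-particle sketch: Boris push of one proton in uniform E and B
# fields (E = 0 here, so the motion is pure gyration about B).
import numpy as np

q, m = 1.602e-19, 1.673e-27          # proton charge [C] and mass [kg]
B = np.array([0.0, 0.0, 5e-5])       # field of roughly Earth-surface strength [T]
E = np.array([0.0, 0.0, 0.0])        # no electric field
dt = 1e-4                            # time step [s], well below the ~1.3 ms gyroperiod

x = np.zeros(3)
v = np.array([1e5, 0.0, 0.0])        # 100 km/s, perpendicular to B
speed0 = np.linalg.norm(v)

for _ in range(5000):
    # Boris scheme: half electric kick, magnetic rotation, half electric kick
    v_minus = v + (q * E / m) * (dt / 2)
    t_vec = (q * B / m) * (dt / 2)
    s_vec = 2 * t_vec / (1 + t_vec @ t_vec)
    v_prime = v_minus + np.cross(v_minus, t_vec)
    v_plus = v_minus + np.cross(v_prime, s_vec)
    v = v_plus + (q * E / m) * (dt / 2)
    x = x + v * dt

# With E = 0 the Boris rotation conserves kinetic energy to rounding error
energy_drift = abs(np.linalg.norm(v) - speed0) / speed0
```

The same loop, driven by interpolated model fields and repeated over many particles with varied initial energies and pitch angles, yields the escape statistics described above.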

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: AGATA (Antarctic Geospace and ATmosphere reseArch): the new SCAR Scientific Research Programme and its mentoring activities

Authors: Jaroslav Urbar, Lucilla Alfonsi, Wojciech Jacek Miloch, Nicolas Bergeot, Eduardo Perez Macho, Yamila Melendi, Trinidad Duran, Carlos Castillo-Rivera, Marayén Renata Canales Riquelme, Reetambhara Dutta, Satyajit Singh Saini, Simon Bouriat, Anoruo Chukwuma
Affiliations: Institute of Atmospheric Physics CAS, Istituto Nazionale di Geofisica e Vulcanologia, University of Oslo, Royal observatory of Belgium, Mackenzie Center for Radio Astronomy and Astrophysics, Departamento de Física - UNS, Universidad de Concepcion, Indian Institute of Technology, IPAG - Institut de Planétologie et d'Astrophysique de Grenoble, University of Nigeria
AGATA is a new Scientific Research Programme (SRP) endorsed by SCAR, starting its activities in January 2025. During its 8-year lifetime, AGATA aims to significantly advance the current knowledge of the Antarctic atmosphere and geospace in a bipolar, interhemispheric context. AGATA contributes to answering the outstanding scientific questions related to whole-atmosphere interactions, including coupling between atmospheric layers and between the neutral and ionized parts of the atmosphere, space weather and magnetospheric influences, and the whole atmosphere’s role in climate variations. These questions are addressed with a multi-disciplinary, multi-instrument approach, and by bringing together communities which study the polar atmosphere and geospace. Scientists who need atmospheric corrections for their measurements are also involved. The AGATA SRP takes advantage of existing and planned instrumentation in Antarctica, and aims for coordinated research efforts and data exchange. To understand the global context, the AGATA SRP is also set in the interhemispheric perspective. While the understanding of the physics of the neutral and ionized atmosphere has been significantly improved using both ground-based and space-based measurements, the questions that remain open need to be addressed with a synergistic approach. This requires active involvement of various research groups in the field. AGATA contributes to answering the outstanding scientific questions within atmospheric physics and aeronomy in the Antarctic, namely: 1. How are different atmospheric layers coupled in the Antarctic? 2. How does the Antarctic upper polar atmosphere respond to increased geomagnetic activity, including energy transfer from space? 3. How does the whole polar atmosphere impact short- and long-term climate variations?
Answering these open questions not only has implications for the understanding of processes in the Antarctic atmosphere, but also greatly improves our understanding of atmospheric dynamics in the polar regions and globally, thus contributing to the development of large-scale whole-atmosphere and climate models. AGATA is an inclusive and interdisciplinary programme, with strong participation of early career researchers (ECRs) and an emphasis on inclusiveness and gender balance. AGATA encourages the interdisciplinary approach and seeks to:
● Foster collaboration among experts of different disciplines, such as astrophysics, planetary science, neutral atmosphere physics and chemistry, and heliophysics, to share the competencies necessary to understand the role of different drivers of atmospheric and ionospheric dynamics from above and below;
● Strengthen the collaboration between atmospheric scientists and the space physics community to improve our knowledge of space weather forecasting and space weather impacts;
● Facilitate sharing of data, algorithms and models to harmonize the exploitation of information (adoption of standards, agreement on metrics, use of shared communication tools, use of interoperable tools, etc.);
● Develop and strengthen the collaboration between the research communities that manage and exploit ground-based and in-situ observations, to optimize and maximize their efforts given an increasing number of multi-instrument sites on the ground and multi-sensor payloads in space.
AGATA is already engaged in identifying the priorities in polar atmospheric and space weather research that should be achieved during the 5th IPY (2032-2033). AGATA is thus gathering contributions and expertise from a significant part of the scientific community dealing with the physics of the lower to upper atmosphere and geospace. In this framework the next generation of scientists is engaged in the roadmap of the next IPY activities.
Already long before its official endorsement, AGATA started a mentoring programme for ECRs and supported them in attending the SCAR Open Science Conference 2024. These teams have been working on ECR-led manuscripts dealing with the understanding of atmospheric couplings, through pathfinding approaches in studies of long-term trends as well as truly multi-instrumental studies of the May 2024 Mother's Day superstorm over Antarctica.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: SPACE IT UP Project (Spoke 6): Aeronomic Parameters Retrieved at Middle Latitudes With the THERION Method for Space Weather Studies

Authors: Dario Sabbagh, Loredana Perrone, Dr. Alessandro Ippolito, Carlo Scotto, Luca Spogli
Affiliations: Istituto Nazionale Di Geofisica E Vulcanologia
SPACE IT UP is a programme aiming to enhance Italy's space technology for space exploration and exploitation, for the benefit of planet Earth and all humankind. Our study belongs to Spoke 6, whose main objective is to protect critical infrastructures from Space Weather (SWE) events by fostering research and tools that can potentially be translated into future operational services. Specifically, in this task we will study the thermosphere-ionosphere system in response to adverse SWE conditions at regional scale. For this purpose, an original method, THERION (THERmospheric parameters from IONosonde observations), has been used to retrieve a consistent set of aeronomic parameters under disturbed geomagnetic conditions. The method is based on observed bottom-side Ne(h) profiles in the F region and, when available, on satellite (Swarm, GRACE) neutral gas density observations, and is applicable at noon-time hours at middle latitudes under any level of solar and geomagnetic activity. The retrieved aeronomic parameters will be compared with empirical thermospheric models such as MSISE00, showing the increased ability of THERION to reproduce thermospheric variability under such conditions. This study is carried out within the Space It Up project funded by the Italian Space Agency, ASI, and the Ministry of University and Research, MUR, under contract n. 2024-5-E.0 - CUP n. I53D24000060005.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Short-term (1-24 hour) foF2 and MUF(3000) prediction and the state of the thermosphere over Europe during the great geomagnetic storm in May 2024

Authors: Loredana Perrone, Andrey Mikhailov, Paolo Bagiacchi, Dario
Affiliations: ISTITUTO NAZIONALE DI GEOFISICA E VULCANOLOGIA
MUF(3000) predicted 1-24 hours ahead is one of the operational space weather products included in PECASUS, one of the three global Space Weather Centers for aviation space weather user services designated by the International Civil Aviation Organization (ICAO), and in the SWESNET project (ESA, Space Weather Awareness - https://swe.ssa.esa.int/). MUF(3000) depends on two ionospheric parameters, foF2 and M(3000): for foF2 the forecasting model EUROMAP is used, and for M(3000) the IRI model. The method has been applied to Europe, where there are ionospheric stations with long historical records (spanning several solar cycles) and current real-time foF2 observations. The method includes two types of prediction models: regression models based on analyses of historical observations, and training models based on current foF2 observations. A mapping procedure applied to the European stations provides short-term MUF(3000) prediction over the whole area. The application of these methods and the comparison with the IRI-storm model for the storm event of May 10, 2024, with Kp up to 9 and Dst down to -403 nT, are discussed. Thermospheric parameters retrieved from ground-based ionosonde and Swarm neutral density observations obtained for the storm period are compared to modern empirical thermospheric models. This study is carried out within the Space It Up project funded by the Italian Space Agency, ASI, and the Ministry of University and Research, MUR, under contract n. 2024-5-E.0 - CUP n. I53D24000060005.
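The dependence on the two ionospheric parameters is the standard relation MUF(3000)F2 = foF2 × M(3000)F2, which can be written out directly. The numerical values below are illustrative only.

```python
# Standard relation between the maximum usable frequency for a 3000 km hop,
# the F2-layer critical frequency, and the M(3000)F2 propagation factor.
def muf3000(fof2_mhz: float, m3000_factor: float) -> float:
    """MUF(3000)F2 in MHz, from foF2 (forecast) and M(3000)F2 (e.g. from IRI)."""
    return fof2_mhz * m3000_factor

muf = muf3000(6.0, 3.2)   # e.g. foF2 = 6 MHz, M(3000)F2 = 3.2 -> 19.2 MHz
```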

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Unexpected Field-Aligned Structure in Equatorial Plasma Bubbles

Authors: David Knudsen, Bizuayehu Addisie Beyene
Affiliations: University Of Calgary
Equatorial plasma bubbles (EPBs) are deep density depletions that tend to be elongated in the meridional direction, i.e. along the geomagnetic field in the equatorial ionosphere. This study compares the distribution of bubble dimensions across B, as seen by the C/NOFS satellite, and approximately along B, as seen by Swarm. Whereas one would expect the Swarm distribution to reflect much longer bubble lengths than C/NOFS, the observations do not bear this out: Swarm observes more "short" bubbles than expected. This surprising finding suggests that EPBs, which can in fact be seen with ground-based cameras to be elongated along B, may be composed of smaller segments, indicating a previously unknown field-aligned density structuring mechanism.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Towards a physically constrained empirical model of climatological variations of ionospheric F-region magnetic field and electric currents

Authors: Martin Fillion, Gauthier Hulot, Patrick Alken
Affiliations: Cooperative Institute for Research in Environmental Sciences, University of Colorado, Boulder, CO, USA, NOAA National Centers for Environmental Information, Boulder, CO, USA, Université Paris Cité, Institut de physique du globe de Paris, CNRS, F-75005 Paris, France
The Earth’s ionosphere hosts a complex electric current system that generates a magnetic field, referred to as the ionospheric field. The study of ionospheric electric currents and fields provides crucial insights into the ionosphere-thermosphere system and into ionospheric plasma distribution and dynamics. A particularly valuable dataset for studying these currents and fields comes from magnetic measurements acquired by magnetometers onboard low Earth orbit (LEO) satellites, such as those of the ESA Earth Explorer Swarm constellation. These satellites orbit within the ionospheric F region and provide highly valuable in situ measurements. These data are already widely used to recover and study the signals produced by the Earth’s outer core, the lithosphere, the oceans, the magnetosphere and the currents induced by the time-varying ionospheric and magnetospheric fields. This requires sophisticated empirical models. Building data-based models of the highly dynamic and spatially complex F-region ionospheric field and associated electric currents, however, is a challenge of its own. The complex spatio-temporal nature of the signals makes the parameterization of the problem difficult to handle, with the data not providing enough information to uniquely constrain the model. This issue is usually addressed by introducing simplifying assumptions on the space-time variations, and by restricting the model to describe the field and currents within the regions sampled by the satellites (Fillion et al., 2023). Recent research has nevertheless demonstrated that additional progress can be made by relying on spatial basis functions optimized using numerical simulations from realistic physics-based models, such as the Thermosphere-Ionosphere-Electrodynamics General Circulation Model (Alken et al., 2017; Egbert et al., 2021).
Such an approach has many advantages, not least the possibility of building a model describing the field and electrical currents beyond the regions directly sampled by the data. In this presentation, we will describe our ongoing efforts toward using such an approach to build a data-based model of climatological variations of ionospheric F-region magnetic fields and electric currents. Preliminary results will be presented and possible avenues for future improvements discussed.
References:
Alken, P., Maute, A., Richmond, A. D., Vanhamäki, H., & Egbert, G. D. (2017). An application of principal component analysis to the interpretation of ionospheric current systems: TIEGCM modeling, PCA, and data fitting. Journal of Geophysical Research: Space Physics, 122(5), 5687–5708. https://doi.org/10.1002/2017JA024051
Egbert, G. D., Alken, P., Maute, A., & Zhang, H. (2021). Modelling diurnal variation magnetic fields due to ionospheric currents. Geophysical Journal International, 225(2), 1086–1109. https://doi.org/10.1093/gji/ggaa533
Fillion, M., Hulot, G., Alken, P., & Chulliat, A. (2023). Modeling the climatology of low- and mid-latitude F-region ionospheric currents using the Swarm constellation. Journal of Geophysical Research: Space Physics, 128(5), e2023JA031344. https://doi.org/10.1029/2023JA031344
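The idea of deriving optimized spatial basis functions from physics-based simulations can be illustrated with a toy principal component analysis: given many simulated field snapshots, an SVD extracts the dominant spatial modes, which then serve as the fitting basis. The snapshots below are synthetic stand-ins, not TIEGCM output.

```python
# Toy illustration of simulation-derived basis functions: PCA (via SVD) of
# synthetic field snapshots; the leading modes become the fitting basis.
import numpy as np

rng = np.random.default_rng(7)
n_grid, n_snapshots = 500, 200

# Two synthetic spatial patterns modulated randomly in time, plus small noise
grid = np.linspace(0, np.pi, n_grid)
modes = np.stack([np.sin(grid), np.sin(2 * grid)])          # (2, n_grid)
amps = rng.normal(size=(n_snapshots, 2))
snapshots = amps @ modes + 0.01 * rng.normal(size=(n_snapshots, n_grid))

# PCA: remove the temporal mean, then SVD of the snapshot matrix
snapshots -= snapshots.mean(axis=0)
_, s, vt = np.linalg.svd(snapshots, full_matrices=False)
explained = (s ** 2) / np.sum(s ** 2)                       # variance fractions
basis = vt[:2]                                              # leading 2 spatial modes
```

In the published approach the same decomposition is applied to realistic simulated current systems, so that only a handful of mode amplitudes need to be estimated from the satellite data.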

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Ionospheric Occurrence of Pc1/EMIC Waves relative to the Ionospheric Footprint of the Plasmapause

Authors: Tamás Bozóki, Balázs Heilig
Affiliations: HUN-REN Institute of Earth Physics and Space Science, ELTE, Institute of Geography and Earth Sciences, Department of Geophysics and Space Science, Space Research Group
Pc1 pulsations cover the 0.2–5 Hz frequency range, with electromagnetic ion cyclotron (EMIC) waves of magnetospheric origin generally accepted as their most important source. In the ionosphere, the initially transverse EMIC waves can couple to the compressional mode and propagate long distances in the ionospheric waveguide. By studying the Pc1 frequency range in the topside ionosphere, we can obtain information on the spatial distribution of both the transverse (incident EMIC) and the compressional waves. We made use of our new Swarm L2 product developed for characterising Pc1 waves to explore the spatial distribution of these waves relative to the midlatitude ionospheric trough (MIT), which corresponds to the ionospheric footprint of the plasmapause (PP) at night. It is shown that the vast majority of Pc1 events are located inside the plasmasphere and that the spatial distributions clearly follow changes in the MIT/PP position at all levels of geomagnetic activity. The number of transverse Pc1 (incident EMIC) waves rapidly decreases outside the PP, while their occurrence peak is located considerably equatorward of the PP footprint, i.e. inside the plasmasphere. On the other hand, the compressional Pc1 waves can propagate in the ionosphere poleward of the PP, while in the equatorial direction there is a secondary maximum in their spatial distribution at low magnetic latitudes. Our results suggest that mode conversion taking place at the PP plays a crucial role in the formation of the presented spatial distributions.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Investigating Mid-Latitude Ionospheric Disturbances at the Ionospheric Observatory of Rome During Solar Minima

Authors: Dario Sabbagh, Loredana
Affiliations: Istituto Nazionale Di Geofisica E Vulcanologia
This study examines mid-latitude ionospheric disturbances over the Ionospheric Observatory of Rome (41.82° N, 12.51° E) during the last two solar minima. The goal is to improve our understanding of their relationship with different sources, including geomagnetic storms, as strong manifestations of Space Weather. Ionospheric F2-layer disturbances are analyzed by studying strong positive and negative deviations of its critical frequency foF2, which corresponds to the maximum electron density in the vertical profile. Short-lived anomalies (2-3 hours) and long-lasting ones (≥4 hours) are identified using hourly observations against a background defined by a 27-day running median for each hour, and binned according to the geomagnetic activity, hour and season of their occurrence. Hourly Total Electron Content (TEC) data from a GNSS receiver co-located with the ionosonde are similarly processed after calibration and conversion to vertical TEC (vTEC). The Interquartile Range method is applied to detect anomalous values with the same running windows, enabling a direct comparison between simultaneous measurements at the ionospheric peak altitude given by the ionosonde and vertically integrated ones from the co-located GNSS receiver. The results reveal that significantly fewer anomalies are detected in vTEC compared to foF2, although the total number of each type is similar across the two solar minima. Positive anomalies dominate each year and are most prevalent where the distributions according to geomagnetic activity are more pronounced. A particularly small number of negative anomalies is also confirmed for foF2 during daytime, while those occurring at night were more frequent in summer. Seasonal and hourly patterns show more pronounced differences for positive anomalies, particularly those with long persistence.
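The screening described above can be sketched numerically: for one fixed hour of day, each daily foF2 value is compared against a centred 27-day running median, with the Interquartile Range setting the anomaly threshold. This is a simplified illustration with synthetic data; the window handling, threshold factor, and series values are our assumptions, not the authors' exact procedure.

```python
# Simplified anomaly screening for one fixed hour of day: flag daily foF2
# values outside median ± 1.5 * IQR within a centred 27-day running window.
# Synthetic seasonal series with one strong positive anomaly injected.
import numpy as np

rng = np.random.default_rng(3)
days = 365
fof2 = (6.0 + 1.5 * np.sin(2 * np.pi * np.arange(days) / 365)
        + rng.normal(0, 0.1, days))          # MHz, synthetic daily values
fof2[200] += 3.0                             # inject one strong positive anomaly

half = 13                                    # 27-day centred window
flags = np.zeros(days, dtype=bool)
for d in range(half, days - half):
    window = fof2[d - half:d + half + 1]
    med = np.median(window)
    q1, q3 = np.percentile(window, [25, 75])
    iqr = q3 - q1
    flags[d] = abs(fof2[d] - med) > 1.5 * iqr
```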

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Conjugate Processes in the Magnetosphere and the Subauroral Ionosphere

Authors: Máté Tomasik, Balázs Heilig
Affiliations: HUN-REN Institute of Earth Physics and Space Science, HUN-REN – ELTE Space Research Group, Eötvös Loránd University, Institute of Geography and Earth Sciences, Department of Geophysics and Space Science, Space Research Group
The Plasma Boundary Layer (PBL) is a rich repository of dynamic processes. The PBL is the boundary separating the relatively dense cold plasma of the plasmasphere, co-rotating with the Earth, from the tenuous plasma trough. While the Ring Current overlaps with the plasmasphere, energetic particle precipitation takes place outside the PBL. The PBL thus separates diverse plasma populations and different plasma wave modes, provides a reflection boundary for compressional ULF waves, and is dominated by electric fields of different origins. Various cold plasma structures are formed by the interplay of these electric fields. The sharp storm-time nightside plasmapause is shaped under the joint effect of the corotation electric field, the global convection and the sub-auroral polarisation electric field active during geomagnetic substorms. The corresponding structure in the ionosphere is the mid-latitude ionospheric trough. This paper investigates the relationship between these conjugate structures in detail, utilising RBSP/Arase and Swarm observations.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: On the synergies between ground-based VLF/LF measurements and SWARM data: application to the study of seismic precursors

Authors: Olimpia Masci, Mohammed Y. Boudjada, Hans Ulrich Eichelberger, Aleksandra Nina, Pier Francesco Biagi, Patrick H.M. Galopeau, Mohammad Azem Khan, Maria Solovieva, Michael Contadakis, Helmut Lammer, Wolfgang Voller, Manfred Stachel, Bruno P. Besser, Iren-Adelina Moldovan, Konstantinos Katzis
Affiliations: Institute for Applied Mathematics (IAC), National Research Council of Italy (CNR), Space Research Institute, Austrian Academy of Sciences, Institute of Physics Belgrade, University of Belgrade, Department of Physics, University of Bari, Laboratoire Atmosphère, Milieux, Observations Spatiales – Centre National de la Recherche Scientifique, UVSQ Université Paris-Saclay, EOG GmbH, DIAN Srl, Institute of the Earth Physics, Russian Academy of Sciences, Department of Geodesy and Surveying, Aristotle University of Thessaloniki, National Institute for Earth Physics (NIEP), Faculty of Computer Science and Engineering, European University
Large earthquakes trigger pre-seismic electromagnetic (EM) waves that propagate from the lithosphere to the ionosphere above the epicenter region. Those waves exhibit modulations generated by acoustic waves (AWs), atmospheric gravity waves (AGWs) and planetary waves (PWs) with periods ranging from 1 minute to days. Such waves disturb a huge ionospheric area, considered to be equal to the earthquake (EQ) preparation zone derived from Dobrovolsky's relationship, with a radius Rdb = 10^(0.43·M), where Rdb is expressed in km and M is the magnitude of the EQ. In this analysis, we report on the study of electromagnetic precursors based on the use of LF (30–300 kHz) and VLF (3–30 kHz) radio transmitter signals detected by ground-based receivers. Those observations are combined with space measurements on board LEO satellites, such as ESA's Swarm mission and the China Seismo-Electromagnetic Satellite (CSES). Ground-based VLF/LF observations are monitored daily by two different and complementary networks. The first one is the International Network for Frontier Research on Earthquake Precursors (INFREP), established in 2009 with eight sensors located in Austria, Cyprus, Greece, Italy, Romania and Serbia. The radio receivers measure the intensity (electric field strength) of radio signals radiated by existing VLF-LF broadcasting stations in the bands VLF (20–80 kHz) and LF (150–300 kHz), with a 1-minute sampling rate. Since the INFREP cooperation started in 2009, a huge database of electric field amplitude measurements has been investigated and used to study perturbations in the ionosphere due to external activity (i.e., solar and geomagnetic activity), and to detect EM precursors of EQs with magnitude Mw > 6.0 [1]. More recently, the deployment of a new VLF/LF network has started; it currently consists of four reception stations in Graz (Austria), Guyancourt (France), Réunion (France) and Moratuwa (Sri Lanka) [2].
Satellite missions such as CSES and ESA's Swarm can provide magnetic field measurements at low altitude, allowing the detection of seismic precursors (e.g. [3] and [4]). The aim of this work is to analyse EQ events with magnitude Mw > 6.0 which occurred in the southern part of Europe (Greece, Italy and Turkey). Hence, Swarm satellite magnetic field and electron density measurements are combined with VLF/LF electric field ground observations. We focus on the analysis of the EQ that occurred in Antakya (Turkey) on 6 February 2023, Mw = 7.8 [5]. Time series of the Dst, AE, Kp, and ap geomagnetic indices and GOES satellite observations are also considered to separate lithospheric precursors from external effects, such as solar and geomagnetic activity. The main issue is to make evident the lithospheric-induced disturbances in the ionosphere and to confirm, or not, a clear correlation between the ground electric field observations and the satellite magnetic field measurements. References: [1] P.F. Biagi, R. Colella, L. Schiavulli, A. Ermini, M. Boudjada, H. Eichelberger, K. Schwingenschuh, K. Katzis, M. Contadakis, C. Skeberis, I.A. Moldovan, M. Bezzeghoud, "The INFREP Network: Present Situation and Recent Results", Open Journal of Earthquake Research, 8, 101–115, 2019. [2] P.H.M. Galopeau, A.S. Maxworth, M.Y. Boudjada, H.U. Eichelberger, M. Meftah, P.F. Biagi, K. Schwingenschuh, "A VLF/LF facility network for preseismic electromagnetic investigations", Geosci. Instrum. Method. Data Syst., 12, 231–237, 2023. [3] A. De Santis, D. Marchetti, L. Spogli, G. Cianchini, F.J. Pavón-Carrasco, G. De Franceschi, R. Di Giovambattista, L. Perrone, E. Qamili, C. Cesaroni, A. De Santis, A. Ippolito, A. Piscini, S.A. Campuzano, D. Sabbagh, L. Amoruso, M. Carbone, F. Santoro, C. Abbattista, D. Drimaco, "Magnetic Field and Electron Density Data Analysis from Swarm Satellites Searching for Ionospheric Effects by Great Earthquakes: 12 Case Studies from 2014 to 2016", Atmosphere, 10, 371, 2019. [4] M. Akhoondzadeh, A. De Santis, D. Marchetti, X. Shen, "Swarm-TEC Satellite Measurements as a Potential Earthquake Precursor Together with Other Swarm and CSES Data: The Case of Mw7.6 2019 Papua New Guinea Seismic Event", Frontiers in Earth Science, 10, 820189, 2022. [5] M.Y. Boudjada, P.F. Biagi, H.U. Eichelberger, G. Nico, K. Schwingenschuh, P.H.M. Galopeau, M. Solovieva, M. Contadakis, V. Denisenko, H. Lammer, W. Voller, F. Giner, "Unusual Sunrise and Sunset Terminator Variations in the Behavior of Sub-Ionospheric VLF Phase and Amplitude Signals Prior to the Mw7.8 Turkey–Syria Earthquake of 6 February 2023", Remote Sensing, 16, 23, 4448, 2024.
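The Dobrovolsky preparation-zone radius quoted above is a one-line formula; a quick check for the Mw 7.8 Antakya event (the function name is ours):

```python
def dobrovolsky_radius_km(magnitude):
    """Earthquake preparation-zone radius, Rdb = 10**(0.43 * M), in km."""
    return 10 ** (0.43 * magnitude)

# For the 6 February 2023 Antakya event (Mw = 7.8), the preparation
# zone radius comes out at roughly 2.26e3 km.
r = dobrovolsky_radius_km(7.8)
```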
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Cosmic ray measurements and solar modulation with HEPD-01 on board CSES-01

Authors: Matteo Sorbara, Matteo Martucci
Affiliations: Università degli Studi di Roma Tor Vergata, Via della Ricerca Scientifica 1, 00133, INFN Sezione Roma Tor Vergata, Via della Ricerca Scientifica 1, 00133
The China Seismo-Electromagnetic Satellite (CSES-01) is a mission developed by the Chinese National Space Administration (CNSA) together with the Italian Space Agency (ASI) to investigate the near-Earth electromagnetic, plasma and particle environment. One of the main payloads on board the CSES-01 satellite is the High-Energy Particle Detector (HEPD-01), a light and compact detector designed and built by the Italian Limadou collaboration. This instrument measures electron, proton and light-nuclei fluxes in the energy range from 3 to 100 MeV for electrons and from 30 to 200 MeV for protons and light nuclei. The detector is made of a plastic scintillator trigger, a tower of 16 plastic scintillator planes and a matrix of LYSO crystals arranged in a 3-by-3 pattern, read by photomultiplier tubes with custom DAQ electronics. The hardware provides good energy resolution and a wide angular acceptance (about 60 degrees), resulting in a high particle identification and separation capability. Furthermore, its high stability over time makes HEPD-01 well suited to detecting variations of particle fluxes (even over long periods of time) related to a plethora of phenomena taking place on the Sun and in the inner Heliosphere. After six years of data-taking since the satellite's launch in February 2018, HEPD-01 has shown an impressive ability to measure various particle populations all over its orbit, such as galactic cosmic rays. Moreover, a new CSES mission, carrying the new HEPD-02 detector, improved with respect to the one currently orbiting, will be launched in 2025. This instrument will serve as a very reliable and accurate tool to continue the study of particle fluxes in near-Earth space going towards the period of maximum activity of the solar cycle. In this work, an overview of cosmic proton and helium nuclei measurements with HEPD-01 will be given, focusing on their energy spectra and their time variations, i.e. solar modulation and other small-scale periodicities.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: New release of the forecasting service SODA

Authors: Sandro Krauss, Mag. Dr.rer.nat. Manuela Temmer, Dipl.-Ing. BSc Andreas Strasser, Dr.rer.nat. MSc Florian Koller, Dipl.-Ing. BSc Ing. Barbara Süsser-Rechberger, BSc. MSc. Daniel Milosic
Affiliations: Graz University of Technology, Institute of Geodesy, University of Graz, Institute of Physics, Queen Mary University of London, Space and Astrophysical Plasma Physics
With the strong rise of the current solar cycle 25, the number of solar eruptions such as solar flares and coronal mass ejections (CMEs) is also increasing. The SODA (Satellite Orbit DecAy) forecasting tool is currently based on an interdisciplinary analysis of space-geodetic observations and in-situ solar wind measurements between 2002 and 2017. In this new release, we present an updated version of the service, which is part of ESA's Space Safety Programme (Ionospheric Weather I.161). We have analyzed an additional 7 years of data, up to 2024, and incorporated the results into the forecast. This means that major storms, such as the Gannon storm in May 2024, are now included in the forecast base. This geomagnetic storm, which occurred on 10 May 2024, was one of the most severe in decades. The storm was triggered by six CMEs hurled towards Earth by the giant sunspot AR3664. Due to the complexity of the event and an insufficient database, the forecast with the previous release of SODA had its weaknesses. Other new features include the prediction of storm-induced orbital decay for two new altitude layers (400 km and 450 km) and the addition of new input parameters, so that the focus is no longer solely on the interplanetary magnetic field component Bz. Also included is a classification of the severity of the expected geomagnetic storm on the National Oceanic and Atmospheric Administration (NOAA) Space Weather G-Scale. In addition, we will use our new thermospheric mass density processing chain, which is being applied to a wide range of satellites (e.g., CHAMP, GRACE, GRACE-FO, Swarm, TerraSAR-X) using accelerometer measurements or kinematic orbit information. Finally, a comparison between the old and new versions of the forecast service is presented for a selection of geomagnetic storms that have occurred over the last three solar cycles.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Ionospheric Slab-Thickness modelling for Space Weather monitoring

Authors: M Mainul Hoque, Kateryna Lubyk, Marjolijn Adolfs, Norbert Jakowski
Affiliations: German Aerospace Center DLR
Space Weather refers to phenomena that arise from the connection between the Sun and Earth and can have adverse effects on the operation of technical systems and on human activities. With the Sun currently approaching solar maximum, technical systems are particularly vulnerable to rapid changes in the electron density distribution in the topside ionosphere and plasmasphere that can arise from space weather. Since ionospheric slab thickness is a measure of the shape of the electron density distribution, accurate modelling and monitoring of slab thickness can help predict space weather impacts. Indeed, the profile shape reflects the complexity of production, loss and transport of plasma in the Earth's ionosphere and plasmasphere. A proxy slab thickness for the topside ionosphere/plasmasphere is computed by dividing the topside vertical total electron content (TEC), derived from GNSS navigation measurements, by the in-situ electron density data from the Langmuir Probes (LP). The idea is very similar to the computation of equivalent slab thickness by dividing the ground TEC by the peak electron density from vertical sounding or radio occultation data (see Jakowski and Hoque 2021). Single-satellite measurements can be used; however, since the Swarm-A and -C satellites fly close together, data from both satellites can be combined for improved products. The derived quantity will accurately provide a proxy measure of the topside ionosphere/plasmasphere (profile) thickness. From the long-term database of proxy slab thickness, a slab-thickness model can be developed. Existing ionosphere models (e.g., IRI, NeQuick, NEDM2020) will benefit from topside slab-thickness information. The data as well as the model can be used to verify the properties of equivalent slab thickness and ionosphere/plasmasphere coupling found by Jakowski and Hoque (2021, 2018).
A bulge-like increase of slab thickness at around mid-latitudes (~40°), especially during night-time, is found, which was first reported by Jakowski and Hoque (2021) in ground data. Many unanswered questions regarding ionosphere/plasmasphere coupling processes may be solved by simultaneous analysis of ground slab-thickness and topside proxy slab-thickness data. During space weather events, the TEC and LP data change in a nonlinear way, and therefore the proxy slab-thickness value can be used as a monitor for space weather events. References: Jakowski N, Hoque MM. 2018. A new electron density model of the plasmasphere for operational applications and services. J. Space Weather Space Clim. 8: A16. Jakowski N, Hoque MM. 2021. Global equivalent slab thickness model of the Earth's ionosphere. J. Space Weather Space Clim. 11, 10. https://doi.org/10.1051/swsc/2020083
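The proxy quantity described above reduces to a single division once units are fixed; a minimal sketch, where the function name and unit conventions (TEC in TECU, Langmuir-probe density in el/m³) are our assumptions:

```python
def topside_slab_thickness_km(vtec_tecu, ne_m3):
    """Proxy topside slab thickness: topside VTEC divided by in-situ Ne.

    vtec_tecu : topside vertical TEC in TEC units (1 TECU = 1e16 el/m^2)
    ne_m3     : in-situ electron density from the Langmuir probe, in el/m^3
    Returns the proxy thickness in km.
    """
    return vtec_tecu * 1e16 / ne_m3 / 1e3
```

For example, a topside VTEC of 5 TECU over an in-situ density of 1e11 el/m³ gives a proxy thickness of 500 km.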
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: C.02.07 - POSTER - FORUM - ESA's 9th Earth Explorer

The FORUM mission will improve the understanding of our climate system by supplying, for the first time, most of the spectral features of the far-infrared contribution to the Earth’s outgoing longwave radiation, particularly focusing on water vapour, cirrus cloud properties, and ice/snow surface emissivity. FORUM’s main payload is a Fourier transform spectrometer designed to provide a benchmark top-of-atmosphere emission spectrum in the 100 to 1600 cm⁻¹ (i.e. 6.25 to 100 µm) spectral region, filling the observational gap in the far-infrared (100 to 667 cm⁻¹, i.e. from 15 to 100 µm), which has never been observed from space, spectrally resolved, and in its entirety. The focus of this session is on the scientific developments in the frame of this mission and the outlook into the future.
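The wavenumber bounds quoted above map to wavelength via λ(µm) = 10⁴ / ν(cm⁻¹); a one-line check (the function name is ours):

```python
def wavenumber_to_wavelength_um(nu_cm1):
    """Convert a wavenumber in cm^-1 to wavelength in micrometres."""
    return 1e4 / nu_cm1

# FORUM band edges: 100 cm^-1 -> 100 um, 1600 cm^-1 -> 6.25 um;
# the far-infrared boundary at 667 cm^-1 corresponds to ~15 um.
```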

Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Investigating water vapour using far infrared observations and simulations

Authors: Sophie Mosselmans, Helen Brindley, Dr Edward Gryspeerdt, Dr Caroline Cox, Dr Andreas Foth, Dr Tim Carlsen, Dr Robert David, Sanjeevani Panditharatne
Affiliations: Imperial College London, National Centre for Earth Observation, RAL Space, Leipzig University, University of Oslo
Accurately measuring the atmospheric state is crucial for climate change analysis and weather forecasting. In clear-sky conditions, water vapour is responsible for over half of the Earth’s greenhouse effect. To better quantify water vapour variability and its influence on radiative forcing, we need both satellite and ground-based measurements with improved accuracy and vertical resolution. The Far-infrared Outgoing Radiation Understanding and Monitoring (FORUM) mission will investigate variations in upper-tropospheric water vapour and its radiative signature in the far infrared. To prepare for this mission and gain an understanding of what it will deliver, Imperial College has developed a ground-based instrument, the Far-Infrared Spectrometer for Surface Emissivity (FINESSE), capable of measuring across the infrared spectrum (400 to 1600 cm⁻¹) with high temporal resolution and accuracy. In early 2023, FINESSE measured clear-sky downwelling radiation spectra during its first field campaign at the ALOMAR Observatory in Norway. Two aims of this campaign were to test the stability of FINESSE’s performance in the harsh operating conditions and to measure downwelling radiative spectra. In the cold and dry Arctic conditions, the far-infrared “dirty window” between 400 and 600 cm⁻¹ opens up, allowing the measurement of radiation emitted from higher in the atmosphere, which is sensitive to water vapour concentrations at these altitudes. Measurements of downwelling radiance extending into the far infrared in the Arctic are relatively rare. In principle, the observations may allow improved characterisation of the lower-to-mid-tropospheric water vapour profile. A first step is to analyse how well existing representations or measurements of the water vapour profile map to the radiance observations using radiative transfer modelling.
Here we perform this task using the Line By Line Radiative Transfer Model v12.13 (LBLRTM) in concert with temperature and water vapour profiles taken from three sources: 1. a local radiosonde launch; 2. a multichannel microwave radiometer (Humidity And Temperature Profiler - HATPRO); and 3. colocated data from the European Centre for Medium-Range Weather Forecasts Reanalysis v5 (ERA5). Our results show that none of the simulations using the different input sources matches the observed radiances within measurement uncertainty, with a significant underestimate seen within the dirty window. The closest match is obtained using the radiosonde profile as input. The radiosonde captures a humid layer which is not seen by either HATPRO or ERA5. To assess uncertainties in ERA5, the ensemble members are used. The members are generated by introducing perturbations to the model’s initial conditions and to how observations are incorporated and weighted. The standard deviation of the ensemble members' output radiance is used to approximate how the ERA5 reanalysis profile uncertainties propagate to its radiance. All of the measurement sources were in the same ERA5 grid box as FINESSE; however, the radiosonde did travel across several grid boxes. To characterise the impact of the radiosonde movement on the radiance, a composite profile from ERA5 grid boxes was constructed. Another source of uncertainty explored is possible variations in the strength of the water vapour continuum used in the simulations. However, for this spectral region, realistic perturbations of the continuum are not large enough to fully reconcile FINESSE with the simulations. The differences in radiance between the different simulations and observations translate to radiative flux differences which are significant in the context of the Arctic surface energy budget.
Our results imply that observations from instruments like FINESSE could provide additional information on water vapour vertical structure above and beyond what is currently available from reanalysis or commonly used microwave profilers, both in terms of vertical information and temporal sampling. We are currently working to infer the vertical temperature and water vapour profiles from FINESSE in collaboration with others on the FORUM team.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Modeling and Inversion of the Far-IR Spectral Radiances Measured by FIRMOS in Ground and Stratospheric Balloon Campaigns

Authors: Marco Ridolfi, Dr Marco Barucci, Dr Claudio Belotti, Dr. Giovanni Bianchini, Dr. Elisa Castelli, Dr Francesco D'Amato, Dr. Samuele Del Bianco, Dr Gianluca Di Natale, Dr. Bianca Maria Dinelli, Dr. Giuliano Liuzzi, Prof. Tiziano Maestri, Dr. Michele Martinazzo, Prof. Guido Masiello, Enzo Papandrea, Paolo Pettinari, Prof. Carmine Serio, Dr Silvia Viciani, Dr Luca Palchetti
Affiliations: CNR-INO, CNR-ISAC, CNR-IFAC, Dip di Ingegneria - Università della Basilicata, Dip. di Fisica e Astronomia "A. Righi" - Università di Bologna
FORUM (Far-infrared Outgoing Radiation Understanding and Monitoring) will be the 9th Earth Explorer mission of the European Space Agency (ESA). Starting from 2027, FORUM will measure, from a polar-orbiting satellite, the spectrum of the Earth’s Outgoing Longwave Radiation (OLR) in the interval from 100 to 1600 cm⁻¹ (that is, from 100 down to 6.25 μm in wavelength). Together with the Polar Radiant Energy in the Far-InfraRed Experiment (PREFIRE), the FORUM mission will supply the first global, spectrally resolved measurements covering the Far-InfraRed (FIR) range of the OLR spectrum. Measuring and monitoring the FIR region of the OLR is in fact crucial to understanding the climate forcing/feedback effects exerted by clouds and water vapor content in the Upper Troposphere / Lower Stratosphere. In preparation for the FORUM mission, both ESA and the Italian Space Agency (ASI) have started several projects to get the scientific community ready for the exploitation of the new measurements. In this context, at CNR-INO a Far-Infrared Radiation Mobile Observation System (FIRMOS) was designed and built with the support of ESA and ASI. FIRMOS is a Fourier Transform Spectrometer that can perform measurements both from the ground and from stratospheric balloons. The characteristics of FIRMOS are very similar to those required for FORUM in terms of spectral range, resolution and Noise Equivalent Spectral Radiance. For this reason, FIRMOS measurements represent a very good basis to test the accuracy of radiative transfer models and connected ancillary data in reproducing the FIR OLR spectrum. Several forward/inverse models have been developed or are in use by the Italian scientific community interested in atmospheric FIR spectral measurements.
Among these models, KLIMA (Kyoto protocoL Informed Management of Adaptation), SACR (Simultaneous Atmospheric and Cloud Retrieval), FARM (FAst Retrieval Model) and GBB-Nadir (Geofit Broad Band, Nadir version) are forward/retrieval algorithms, with different accuracy and speed characteristics, commonly used by our team to reproduce and analyze FIRMOS measurements. To date, with the support of ESA, FIRMOS has been deployed in several measurement campaigns. In 2019, a ground-based campaign was carried out at Zugspitze (2962 m asl). In August 2022, the instrument was operated from a stratospheric balloon launched during the Strato-Science 2022 campaign from Timmins (Canada). In June 2024, FIRMOS operated again from a stratospheric balloon launched from the SSC facility in Kiruna (Sweden) within the TRANSAT 2024 campaign. A further ground-based campaign is planned from Ottawa (CA) in early 2025. In this work, we present the results of the analysis of the measurements collected by FIRMOS in these campaigns. The main objective of this analysis is the characterization of the accuracy of our models and of the ancillary databases in reproducing the measured FIR spectra. One of our codes (FARM) can also handle the joint inversion of matching measurements. Thus, if a measurement (either from satellite or ground) matching the FIRMOS one exists, we also test the synergistic inversion approach. Indeed, this is one of the techniques that will be applied to the matching measurements expected from FORUM and from the Infrared Atmospheric Sounding Interferometer – New Generation (IASI-NG) on board the MetOp-SG-A satellite.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Determination of emissivity profiles using a Bayesian data-driven approach

Authors: Chiara Zugarini, Francesco Pio De Cosmo, Cristina Sgattoni, Luca Sgheri
Affiliations: University of Florence, Institute for Applied Mathematics (IAC) - National Research Council (CNR), Institute of BioEconomy (IBE) - National Research Council (CNR)
This study addresses the critical challenge of accurately identifying surface emissivity profiles that align with experimental observations for specific geolocations and times. Accurate emissivity estimation is fundamental during radiative transfer retrieval, where the inherent coupling between emissivity and surface temperature can introduce significant biases in the retrieval of both parameters. The work focuses on methods to derive emissivity profiles that are consistent with observational data, serving as reliable initial guesses or a priori inputs for retrieval algorithms. These efforts are particularly relevant for the Far-infrared Outgoing Radiation Understanding and Monitoring (FORUM) mission, which will pioneer the measurement of the Earth's far-infrared spectral emission. The study evaluates two methodologies for determining emissivity profiles. The first is an empirical method using Moderate Resolution Imaging Spectroradiometer (MODIS) and ancillary data. This approach integrates MODIS observations with ancillary datasets, including snow cover, surface temperature, and soil humidity, to infer plausible emissivity profiles without relying on predefined land cover classifications. The method generates a synthetic soil-type map by associating Huang's emissivity profiles with the observed conditions. The performance is assessed by minimizing the root mean square error (RMSE) against MODIS emissivity data, showing that appropriately selected Huang profiles effectively reduce discrepancies relative to a constant initial guess. The second is a Bayesian method, which leverages the Combined Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and MODIS Emissivity for Land (CAMEL) database and land cover data. This approach uses the CAMEL database and high-resolution land cover maps to derive emissivity profiles as convex combinations of Huang profiles.
This method employs a Bayesian framework to incorporate information from the CAMEL database and the MODIS/Terra+Aqua Yearly Land Cover Type dataset, ensuring a statistically accurate selection of emissivity profiles. Key findings indicate that the Bayesian approach delivers superior performance compared to linear spline interpolation of CAMEL data when tested against experimental emissivity spectra retrieved from the Infrared Atmospheric Sounding Interferometer (IASI). Moreover, the second method performs even better than the full database from Huang. The results underscore the potential of this method to enhance the accuracy of surface parameter retrievals by providing accurate and computationally efficient initial estimates of emissivity profiles, thereby mitigating biases and improving the reliability of radiative transfer models. This development holds significant promise for the upcoming FORUM mission and broader Earth observation and climate modeling applications.
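The RMSE-minimising selection step of the first (empirical) method can be sketched in a few lines; the function name, array shapes and sample values are hypothetical, not the authors' implementation:

```python
import numpy as np

def best_huang_profile(candidates, observed):
    """Select the candidate emissivity profile with the lowest RMSE.

    candidates : array (n_profiles, n_bands) of Huang emissivity spectra
    observed   : array (n_bands,) of MODIS-derived emissivities
    Returns (index of best profile, its RMSE).
    """
    rmse = np.sqrt(np.mean((candidates - observed) ** 2, axis=1))
    i = int(np.argmin(rmse))
    return i, float(rmse[i])
```

The Bayesian method goes further, weighting several such profiles into a convex combination rather than picking a single winner.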
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Development of the MetOp-SG Module (MSGM) for the ESA FORUM End-to-End Simulator

Authors: Giuliano Liuzzi, Prof. Tiziano Maestri, Prof. Guido Masiello, Dr. Michele Martinazzo, Dr. Luca Sgheri, Prof. Carmine Serio, Dr. Hilke Oetjen, Dr. Dulce Lajas
Affiliations: Department Of Engineering, University Of Basilicata, Department of Physics and Astronomy "Augusto Righi", University of Bologna, CNR-IAC, National Council of Research, ESA-ESTEC, European Space Agency
In this work we present the fundamental elements of the MetOp-SG Module (MSGM) of the FORUM End-to-End Simulator (FEES), developed for the European Space Agency in preparation for FORUM, ESA’s 9th Earth Explorer (launch 2027), which will fly in formation with MetOp-SG. This work also constitutes a basis for further, future applications to other sensors. The goal of the MSGM is to simulate IASI-NG (Infrared Atmospheric Sounding Interferometer – New Generation) L1C data, following the format specified by EUMETSAT. This module slots into the existing Phase A/B1 FORUM end-to-end simulator (FEES A/B1), and for this reason its structure is coherent with the interfaces already defined in FEES A/B1. The MSGM provides a set of functionalities which aim at calculating IASI-NG radiances corresponding to observations taken in coincidence with FORUM. The synergy between the two instruments is in fact of fundamental importance to obtain full coverage of the Earth's outgoing longwave spectrum in the whole infrared range, including the Far Infrared, which will be observed by FORUM for the very first time from satellite remote sensing. To achieve this, the MSGM is composed of three submodules: 1) the MetOp-SG Matching Module (MSGM-MM), which is responsible for collocating the IASI-NG fields of view with the FORUM observations; 2) the MetOp-SG Scene Generator (MSGM-SG), which has the task of producing the high-spectral-resolution radiances that reach the IASI-NG sensor; 3) the MetOp-SG Observation System Simulator (MSGM-OSS), which ingests the high-resolution spectra and applies the simulation of the Level 1 processor to produce the synthetic L1C products. The software is developed in Matlab, C and Fortran 2003.
The current version of the software, which relies on LBLRTM and LBLDIS radiative transfer models, includes several ancillary databases of optical properties of clouds and aerosols as well as a full emissivity database for the Mediterranean and Northern European region which is built upon the Huang emissivity global database. Special emphasis was placed on harmonising these databases to apply them across the full spectral range of FORUM+IASI-NG, in order to make them easily adaptable for further applications. In this work we present the full scheme of the software and its functionalities, showing how each submodule works and showcasing some sample results.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Evaluating the potential impact of future FORUM radiances through ensemble simulations

Authors: Alberto Ortolani, Cristina Sgattoni, Samantha Melani, Luca Rovai, Luca Fibbi, Marco Ridolfi, Ugo Cortesi, PhD Stefano della Fera
Affiliations: CNR-IBE, CNR-INO, CNR-IFAC, Consorzio LaMMA
In modern operational meteorology, satellite observations play a crucial role, providing consistent and comprehensive measurements of the Earth's atmosphere and surface on a global scale. More specifically, most of the data providing information on key meteorological variables, such as temperature, surface temperature, water vapor, and clouds, come from measurements of spectrally resolved Outgoing Longwave Radiation (OLR), the infrared radiation emitted by Earth at the Top Of the Atmosphere (TOA). These observations are integrated into models through Data Assimilation (DA) techniques to produce analysis products. This process combines irregularly distributed atmospheric observations with short-range model forecasts, effectively performing an optimal space-time interpolation onto a regular grid. The resulting gridded atmospheric states serve as inputs for numerical weather prediction, and also as resources for diagnostic studies, supporting the evaluation of the atmospheric and climate system's behaviour over time. Ensemble forecasting is a powerful approach in numerical weather prediction for gaining insight into the range of possible future states of the atmosphere. Instead of producing a single, deterministic (most likely) forecast, an ensemble of forecasts is generated to account for uncertainties in the prediction system. These uncertainties stem from two primary sources: errors in the initial conditions, which are amplified by the chaotic and non-linear nature of atmospheric dynamics, and errors in the model itself. The latter include errors arising from approximations used in solving the governing equations, from the use of parameterization schemes to represent unresolved sub-grid physical processes, and from the governing equations themselves, which are ultimately simplified representations of more complex processes.
Ideally, the actual atmospheric state should fall within the predicted ensemble spread, with the spread magnitude reflecting the level of forecast uncertainty. At the initial forecast time, the ensemble spread should represent the uncertainty in the knowledge of the real atmospheric state even after its optimal reconstruction using the best available observational and modeling instruments. This process merges the widest possible set of global, heterogeneous observations, including spectrally resolved radiances, with the physics of state-of-the-art models through advanced data assimilation procedures. Under this assumption (acknowledging that it is not always fulfilled), it is useful to generate an ensemble of synthetic atmospheric observations, corresponding to the ensemble model members, and to evaluate the spread of these synthetic observations. In fact, if these synthetic observations mimic measurements from an instrument expected to become operational in the near future and have a known associated error, the ensemble of synthetic observations can be used to estimate the potential impact of including such data with a proper assimilation procedure. This impact is indicated by the ratio between the observational error and the spread of the synthetic ensemble observations. When this ratio is below about one, lower values indicate a greater potential for reducing the information uncertainty in the initial ensemble and, consequently, for narrowing the forecast spread at later times. The target observations in this study are radiances from ESA’s forthcoming 9th Earth Explorer (EE9) mission, FORUM (Far-infrared Outgoing Radiation Understanding and Monitoring). FORUM will deliver unprecedented spectrally resolved radiance measurements in the far- and mid-infrared spectral range (100–1600 cm⁻¹) with 0.5 cm⁻¹ un-apodised resolution.
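The spread-based impact indicator described above reduces to a simple per-channel computation. A minimal sketch, assuming synthetic radiances arranged member-by-channel; the function name and toy numbers are illustrative, not project code:

```python
import numpy as np

def impact_ratio(synthetic_obs, obs_error):
    """Ratio of observational error to ensemble spread, per channel.

    synthetic_obs : (n_members, n_channels) synthetic radiances
    obs_error     : (n_channels,) 1-sigma instrument error
    Ratios below ~1 flag channels where assimilation could reduce
    the initial-condition uncertainty.
    """
    spread = np.std(synthetic_obs, axis=0, ddof=1)  # ensemble spread
    return np.asarray(obs_error) / spread

# toy example: 50-member ensemble, 4 spectral channels with
# decreasing ensemble spread (assumed values)
rng = np.random.default_rng(0)
ens = rng.normal(0.0, [2.0, 1.0, 0.5, 0.1], size=(50, 4))
r = impact_ratio(ens, obs_error=np.full(4, 0.5))
informative = r < 1.0  # channels with potential assimilation impact
```

Here the first channel (spread well above the instrument error) is flagged as informative, while the last (spread below the error) is not.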
This spectral range, which encompasses the bulk of the planet’s outgoing longwave radiation (OLR), is particularly sensitive to key climate variables, forcings and feedbacks, including temperature, water vapor (especially in the upper troposphere) and cirrus clouds. The present work thus analyses the potential impact of FORUM measurements for an initial set of important atmospheric scenarios, based on ECMWF IFS ensemble atmospheric products, using σ-IASI/FORUM as the radiative transfer model to generate the corresponding synthetic FORUM radiances. This work is partially funded by the MC-FORUM project (Meteo and Climate exploitation of FORUM), a two-year initiative funded by the Italian Space Agency (ASI) that began in late January 2024 and aims to develop new tools and expertise to exploit FORUM data in operational meteorology and climate studies. The study also benefits from the developments in EMM (Earth-Moon-Mars), a three-year project launched in January 2023 as part of Italy's National Recovery and Resilience Plan, which provided key competencies for this work.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Improvement of PTB’s vacuum FIR calibration system in support of ESA’s Mission FORUM

Authors: Daniela Narezo Guzman, Julian Gieseler, Max Reiniger, Dirk Fehse, Robert Häfner, Jamy Schumacher, Albert Adibekyan, Christian Monte
Affiliations: PTB
ESA’s 9th Earth Explorer mission FORUM aims to perform, for the first time, spectrally resolved, traceable measurements of Earth’s outgoing FIR radiation over extended time periods, for wavelengths spanning from 6.25 µm to 100 µm. To date, only spectral measurements up to 17 µm have been realized. However, about half of Earth’s outgoing total energy is found at wavelengths beyond 15 µm, making the FIR region crucial for determining Earth’s energy budget and hence climate development. FORUM aims to fill this data gap and thereby provide valuable data for climate research, modeling, and prediction. PTB will support FORUM with a traceable pre-flight calibration of its on-board reference source under vacuum in the Reduced Background Calibration Facility 2 (RBCF2). The absolute radiometric uncertainty of 30 mK in radiation temperature required by FORUM demands an FIR laboratory reference source with an absolute uncertainty of 15 mK or less. This value is below that of currently available radiometric reference sources, not just in the FIR but in the MIR and NIR as well. Based on a sensitivity analysis, we have identified the critical components and limiting specifications for meeting this demanding uncertainty requirement: the temperature sensing of the blackbody cavity, which must be realized with an uncertainty of less than 10 mK, and the effective emissivity of the cavity, which must be above 0.999 and in turn requires a cavity temperature uniformity of around 10 mK. Additionally, the background radiation must be accounted for and actively controlled. To meet these requirements, PTB has developed a novel radiometric FIR calibration system consisting of an in-Vacuum Reference Blackbody (VRBB) combined with a precisely temperature-controlled and uniform thermal shroud called the Coldscreen (CS). The VRBB is a liquid-operated blackbody with a cylindrical cavity coated with Vantablack S-IR.
It utilizes a novel temperature sensing scheme with capsule SPRTs immersed in a non-conducting liquid: the so-called in-liquid mounting. The CS design is based on a structure optimized by the Finite Element Method to meet a uniformity requirement of better than 1 K over the temperature range from -60 °C to 60 °C. It is used to precisely realize different background radiation environments, helping to correct background signals or to determine the effective emissivity of sources under test. With this calibration system, in-laboratory radiometric uncertainties below 15 mK can be reached in the FIR region, and reference sources and detectors from the NIR to the FIR can be calibrated with unprecedentedly small uncertainty. Our contribution will present the sensitivity analysis, the hardware design, and the characterization results of the radiometric FIR calibration system built in the RBCF2. These results will include the thermal uniformity of the CS and VRBB, the VRBB effective emissivity derived from ray-tracing simulations, radiance temperature and spectral radiance measurements, and the uncertainty budget. Financial support of this work by the ESA project Novel Reference/Calibration System to Measure Spectral Radiance on the Range 4 μm to 100 μm is gratefully acknowledged.
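To illustrate how contributions like those above enter an uncertainty budget, a minimal sketch assuming independent contributions combined in quadrature; the individual values are assumptions for illustration, not PTB's actual budget:

```python
import numpy as np

def combined_uncertainty(components_mK):
    """Root-sum-square combination of independent uncertainty
    contributions, in mK of radiation temperature."""
    return float(np.sqrt(np.sum(np.square(components_mK))))

# illustrative (assumed) contributions: cavity temperature sensing,
# cavity temperature uniformity, emissivity correction, background
budget = combined_uncertainty([10.0, 10.0, 3.0, 2.0])
within_target = budget < 15.0  # the 15 mK reference-source target
```

With these assumed numbers the combined value stays just under the 15 mK target, showing how tight the individual 10 mK requirements are.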
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A Physics-Aware Data-Driven Surrogate Approach for Fast Atmospheric Radiative Transfer Inversion

Authors: Cristina Sgattoni, Luca Sgheri, Matthias Chung
Affiliations: Institute of BioEconomy, National Research Council - INdAM-GNCS Research group, Institute of Applied Mathematics, National Research Council, Department of Mathematics, Emory University
The Far-infrared Outgoing Radiation Understanding and Monitoring (FORUM) mission, selected in 2019 as the ninth Earth Explorer by the European Space Agency, aims to provide spectrally resolved measurements of the Earth's emitted longwave radiance. FORUM will also cover the far-infrared region of the spectrum, which represents approximately 50% of the Earth's outgoing longwave radiation and has remained largely unobserved from space until now. FORUM is scheduled for launch in 2027 and, utilizing a Fourier transform spectrometer, it will provide valuable insights into atmospheric parameters such as surface emissivity, water vapor distribution, and ice cloud properties. Once operational, FORUM is expected to generate more than 10,000 spectra per day, resulting in a substantial data volume that will require efficient processing and analysis. To handle this, accelerated radiative transfer and inversion techniques will be essential. This is particularly important for near-real-time applications, such as weather and climate modeling, which lie at the core of the National Recovery and Resilience Plan - Earth Moon Mars (NRRP-EMM) project. The analysis of FORUM data involves solving an ill-posed inverse problem to retrieve atmospheric properties from observed spectra, requiring stabilization of the solution through regularization techniques. This study introduces a novel, data-driven approach to tackle the inverse problem in clear-sky conditions, aiming to provide a computationally efficient and accurate solution. In the first phase, a preliminary approximation of the inverse mapping is generated using simulated FORUM data. In the second phase, climatological information is incorporated as prior knowledge, and neural networks are employed to dynamically estimate optimal regularization parameters during the retrieval process.
While this method may not match the precision of traditional full-physics retrieval techniques, its ability to deliver near-instantaneous results makes it ideal for real-time applications. Additionally, the proposed approach can serve as a preprocessor, supplying improved prior estimates to enhance the accuracy and efficiency of full-physics retrieval methods. Furthermore, an ongoing study is being conducted on the inverse problem under all-sky conditions using an innovative approach that combines two key components. The first is a data-driven solution, leveraging an autoencoder to manage the high dimensionality of the problem and serving as a prior to guide the solution. The second is the implementation of constraints on the solution space to prevent non-physical approximations.
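The stabilized inversion at the heart of such retrievals can be sketched as a linear Tikhonov step; in the scheme described above a neural network would supply the regularization parameter dynamically. A minimal sketch on a toy ill-posed problem, with hypothetical names (not the actual retrieval code):

```python
import numpy as np

def regularized_retrieval(K, y, x_prior, lam):
    """One linear retrieval step with Tikhonov regularization.

    Solves  min ||K x - y||^2 + lam * ||x - x_prior||^2,
    pulling the ill-determined part of the state toward the prior.
    """
    n = K.shape[1]
    A = K.T @ K + lam * np.eye(n)
    b = K.T @ y + lam * x_prior
    return np.linalg.solve(A, b)

# toy ill-posed problem: nearly collinear Jacobian columns
K = np.array([[1.0, 1.0], [1.0, 1.0001]])
x_true = np.array([1.0, 2.0])
y = K @ x_true
x_hat = regularized_retrieval(K, y, x_prior=np.array([1.5, 1.5]), lam=1e-3)
```

The solution fits the data well but resolves the near-null direction toward the prior rather than the truth, which is exactly the behaviour regularization trades for stability.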
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Foreseeing the benefit of FORUM observations to evaluate climate models

Authors: Félix Schmitt, Quentin Libois, Romain Roehrig
Affiliations: Centre National de Recherches Météorologiques (CNRM), Université de Toulouse, Météo-France, CNRS
The spectral details of the Earth’s top-of-atmosphere outgoing infrared (IR) radiation, as already observed by several satellite instruments, contain valuable information on essential climate variables, making them critical tools for the evaluation of climate models. For example, it has been shown that error compensations in the spectral domain can result in apparently correct broadband fluxes, hiding deficiencies of the models in reproducing the seasonal and spatial variabilities of temperature and relative humidity (Huang et al. 2007, Huang et al. 2008). More recently, Della Fera et al. (2023) investigated the interannual variability of IASI spectra in clear-sky conditions to point out systematic biases in the EC-Earth model. With the FORUM satellite mission, selected by the European Space Agency as the 9th Earth Explorer and planned to be launched in 2027, the Earth's top-of-atmosphere full IR emission spectrum will be measured for the first time at high spectral resolution, filling an observational gap in the far-infrared (FIR) (100 to 667 cm-1, i.e. from 15 to 100 μm). These measurements will provide a unique opportunity to further document and understand the Earth’s radiative budget, and will constitute a unique dataset for evaluating in more detail its representation in state-of-the-art climate models. Here we aim to estimate to what extent future FORUM observations could help discriminate between climate models in terms of their ability to correctly simulate the Earth’s IR emission. To this end, the fast radiative transfer solver RTTOV is used to emulate FORUM observations from the atmospheric profiles and surface properties simulated by a dozen climate models participating in the 6th phase of the Coupled Model Intercomparison Project (CMIP6). These simulations cover the full period corresponding to the historical amip simulation, namely 1979–2014.
We first focus on clear-sky scenes and compare the simulated spectra from each model, in terms of mean properties, but also in terms of spatial distribution, seasonal variability, and longer-term changes. The discrepancies between the selected climate models are highlighted and traced to differences in geophysical variables. While differences in the mid-IR can already be analyzed in the light of available hyperspectral IR observations, we point out that differences also appear in the FIR, suggesting that FORUM observations will put a strong constraint on climate model evaluation and will contribute to the improvement of climate models by highlighting processes that need to be refined. References: Huang, Y., V. Ramaswamy, X. Huang, Q. Fu, and C. Bardeen (2007), A strict test in climate modeling with spectrally resolved radiances: GCM simulation versus AIRS observations, Geophys. Res. Lett., 34, L24707. Huang, X., W. Yang, N. G. Loeb, and V. Ramaswamy (2008), Spectrally resolved fluxes derived from collocated AIRS and CERES measurements and their application in model evaluation: Clear sky over the tropical oceans, J. Geophys. Res., 113, D09110. Della Fera, S., F. Fabiano, P. Raspollini, M. Ridolfi, U. Cortesi, F. Barbara, and J. von Hardenberg (2023), On the use of infrared atmospheric sounding interferometer (IASI) spectrally resolved radiances to test the EC-Earth climate model (v3.3.3) in clear-sky conditions, Geosci. Model Dev., 16(4), 1379–94.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: SPectroscopy In The Far InfraREd: Reducing Uncertainties in Carbon Dioxide Spectroscopic Line Parameters for ESA’s FORUM Mission

Authors: Daniel Coxon, Jeremy Harrison, Ritika Shukla, Chris Benner, Malathy Devi, Brant Billinghurst, Jianbao Zhao
Affiliations: University of Leicester, National Centre for Earth Observation, College of William and Mary, Canadian Light Source, University of Saskatchewan
The upcoming ESA FORUM (Far-infrared Outgoing Radiation Understanding and Monitoring) mission will be the first to measure, at high resolution, the Earth's spectrally resolved outgoing longwave radiation (OLR) in the far-infrared (FIR). The FIR spectral region is crucially important because it is responsible for over half of the Earth’s emission to space, accounting for a large contribution to the Earth’s greenhouse effect. The aim of the FORUM mission is to evaluate the role of the FIR in shaping the current climate, thereby reducing the uncertainty in predictions of future climate change and enabling us to mitigate against its effects. The Earth’s OLR in the FIR region largely consists of absorptions from two gases, one of which is carbon dioxide (CO₂). The radiative forcing of the climate system associated with increasing CO₂ concentrations occurs primarily within the wavenumber region 500–850 cm-1. Almost half of this region (below 645 cm-1) has never been measured at the top-of-the-atmosphere (TOA) at high resolution, but will be measured by FORUM. The interpretation of measurements from FORUM is highly reliant on the ability to perform accurate radiative transfer calculations in the FIR region. Recent high-resolution measurements below 600 cm-1 have shown that there are significant deficiencies in the current Voigt line parameters in the High resolution TRANsmission (HITRAN) database. We report here a suite of measurements of high resolution (up to 0.00096 cm-1) spectra of both pure and air-broadened CO₂ taken at the Canadian Light Source that covers the entire 500–850 cm-1 region at once. The high spectral resolution has allowed for lines in heavily congested regions (such as the Q branches) to be well resolved for the lower pressure measurements. 
Utilising a synchrotron light source facility provides a more intense source of electromagnetic radiation than in a conventional laboratory, and provides access to the highest possible spectral resolution by a Fourier transform spectrometer. We analyse our spectra using the Labfit multispectrum fitting program, which has a long and extensive history in deriving non-Voigt line parameters for remote sensing. Through our analysis, we have begun to derive new CO₂ line parameters, including zero-pressure line position, line intensity, self- and air-broadened halfwidth, self- and air-pressure induced line shifts, speed dependence, Dicke narrowing, and line mixing. For position and intensity, we adopt quantum mechanical constraints to the global solution to reduce the correlations between parameters. We compare our results to those from the HITRAN database to demonstrate the improvements effected by our study.
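For reference, the Voigt profile whose parameters such fits refine can be evaluated from the real part of the Faddeeva function; the non-Voigt effects noted above (speed dependence, Dicke narrowing, line mixing) go beyond this simple form. A minimal sketch with illustrative, assumed widths:

```python
import numpy as np
from scipy.special import wofz

def voigt_profile(nu, nu0, gamma_D, gamma_L):
    """Area-normalized Voigt line shape at wavenumbers nu (cm^-1).

    nu0     : line position (cm^-1)
    gamma_D : Doppler half-width at 1/e height (cm^-1)
    gamma_L : Lorentzian (pressure-broadened) half-width (cm^-1)
    """
    z = ((nu - nu0) + 1j * gamma_L) / gamma_D
    return wofz(z).real / (gamma_D * np.sqrt(np.pi))

# illustrative widths (assumed, not fitted values) around the
# CO2 bending-mode region near 667 cm^-1
nu = np.linspace(667.0, 668.0, 2001)
phi = voigt_profile(nu, nu0=667.5, gamma_D=0.0006, gamma_L=0.005)
area = phi.sum() * (nu[1] - nu[0])  # close to 1 for a normalized line
```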
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: PREFIRE and IASI Radiances in All-Sky Conditions: Data Intercomparison and Analysis Using sigma-IASI/F2N

Authors: Tiziano Maestri, Michele Martinazzo, Fabrizio Masin, Guido Masiello, Giuliano Liuzzi, Carmine Serio, Brian Drouin, Brian Kahan, Nathaniel Miller, Aronne Merrelli, Kyle Mattingly, Tristan L'Ecuyer
Affiliations: University Of Bologna, Physics and Astronomy Department "Augusto Righi", University of Basilicata, Department of Engineering, NASA, Jet Propulsion Laboratory, Cooperative Institute for Meteorological Satellite Studies, Space Science and Engineering Center, Department of Climate and Space Sciences and Engineering, University of Michigan
The important role played by Far Infra-Red (FIR) radiation in shaping the Earth’s energy balance, and its sensitivity to essential climate variables such as temperature, water vapor, surface emissivity, and clouds, is now well recognized by the scientific community. In this regard, the European Space Agency (ESA) selected the Far-infrared Outgoing Radiation Understanding and Monitoring (FORUM) mission as its ninth Earth Explorer, scheduled to launch in 2027. FORUM will collect measurements of the outgoing longwave radiation in the spectral range from 100 to 1600 cm−1, with 0.5 cm−1 (un-apodized) spectral resolution. For its part, NASA launched in 2024 two CubeSats carrying the Polar Radiant Energy in the Far-InfraRed Experiment (PREFIRE), which are measuring the 0–54 𝜇m region at 0.84 𝜇m spectral resolution. The FORUM and PREFIRE missions have spurred efforts in the radiative transfer (rt) community to extend fast radiative routines to the whole IR region, which required assessing the ability of commonly used fast solutions to simulate radiance fields at FIR wavelengths. In this work, we focus on the performance of the main rt models based on physical solutions applied to all-sky conditions. Special attention is given to algorithms operating in the FIR and in the presence of scattering layers (such as clouds and aerosols), which are adopted in inversion processes for the definition of satellite Level 2 products, or simply for the analysis of spectrally resolved remotely sensed radiance fields. The work briefly discusses the limits and advantages of fast methodologies based on the Chou approximation [Chou et al., 1999; Martinazzo et al., 2021] and the Tang adjustment solution [Tang et al., 2018] when applied to radiance computations [Maestri et al., 2024], which are implemented in the new sigma-IASI/F2N rt model [Masiello et al., 2024].
To assess the validity of the approximate numerical calculation and the overall algorithm performance, results obtained using the fast solutions are compared with those derived with a discrete-ordinate based rt model (DISORT) for a large range of physical and optical properties of ice and liquid water clouds, and for multiple atmospheric conditions derived from the 60-level EUMETSAT NWP model profile dataset (https://nwp-saf.eumetsat.int/site/software/atmospheric-profile-data/). Finally, a set of observations from the Infrared Atmospheric Sounding Interferometer (IASI) flying on MetOp-B and MetOp-C is compared with temporally and spatially collocated PREFIRE data. Multiple atmospheric conditions and geolocations are considered. The effectiveness of sigma-IASI/F2N is then demonstrated by comparing synthetic calculations to the collocated IASI and PREFIRE observations. The code ingests an atmospheric state vector which includes surface temperature, the temperature profile, H2O mixing ratio, O3 mixing ratio, and specific liquid and ice water content derived from the ECMWF analysis. The comparison aims at evaluating the sigma-IASI/F2N performance at both mid-infrared and FIR wavelengths in all-sky conditions. For a limited set of cases, the information content of PREFIRE observations may also be estimated by applying the scheme described in Serio et al. [2024] (based on the sigma-IASI/F2N forward model) for cloud optical and microphysical retrievals. Acknowledgements: This work is funded by the Italian Space Agency (ASI) in the framework of the project FIT-FORUM (Accordo attuativo n. 2023-23-HH.0). References: M.-D. Chou, K.-T. Lee, S.-C. Tsay, and Q. Fu, "Parameterization for Cloud Longwave Scattering for Use in Atmospheric Models", Journal of Climate 12(1) (1999), pp. 159–169, doi:10.1175/1520-0442(1999). T. Maestri et al., "Innovative solution for fast radiative transfer in multiple scattering atmospheres at far and mid infrared wavelengths", Radiation Processes in the Atmosphere and Ocean, AIP Conference Proceedings 2988 (2024), pp. 1–4 (International Radiation Symposium, Thessaloniki, Greece, 4–8 July 2022), doi:10.1063/5.0183019. M. Martinazzo et al., "Assessment of the accuracy of scaling methods for radiance simulations at far and mid infrared wavelengths", Journal of Quantitative Spectroscopy and Radiative Transfer 271 (2021), doi:10.1016/j.jqsrt.2021.107739. G. Masiello et al., "The new 𝜎-IASI code for all sky radiative transfer calculations in the spectral range 10 to 2760 cm-1: 𝜎-IASI/F2N", Journal of Quantitative Spectroscopy and Radiative Transfer 312 (2024), p. 108814, doi:10.1016/j.jqsrt.2023.108814. C. Serio et al., "Demonstration of a physical inversion scheme for all-sky, day-night IASI observations and application to the analysis of the onset of the Antarctica ozone hole: Assessment of retrievals and consistency of forward modeling", Journal of Quantitative Spectroscopy and Radiative Transfer 329 (2024), 109211, doi:10.1016/j.jqsrt.2024.109211. G. Tang et al., "Improvement of the Simulation of Cloud Longwave Scattering in Broadband Radiative Transfer Models", Journal of the Atmospheric Sciences 75(7) (2018), pp. 2217–2233, doi:10.1175/JAS-D-18-0014.1.
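The Chou approximation referenced above rests on a similarity scaling of the cloud optical depth, commonly written as tau' = (1 - omega*(1 - b))*tau, which lets a scattering layer be treated as purely absorbing. A minimal sketch with illustrative values, not the sigma-IASI/F2N implementation:

```python
def chou_scaled_optical_depth(tau, omega, b):
    """Chou et al. (1999)-style similarity scaling for longwave
    cloud scattering.

    tau   : extinction optical depth of the layer
    omega : single-scattering albedo
    b     : integrated backscattered fraction of the phase function
    Returns the scaled optical depth for an absorption-only treatment.
    """
    return (1.0 - omega * (1.0 - b)) * tau

# limiting checks: no scattering leaves tau unchanged;
# purely forward scattering (b = 0) removes the scattered part
tau_abs = chou_scaled_optical_depth(2.0, omega=0.0, b=0.3)
tau_fwd = chou_scaled_optical_depth(2.0, omega=0.6, b=0.0)
```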
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Simulation of the Earth’s disk radiance seasonal variability observed from the Moon by the Lunar Earth Temperature Observatory

Authors: Dr Gianluca Di Natale, Dr Simone Menci, Dr Luca Palchetti, Marco Ridolfi, Dr Claudio Belotti, Dr Silvia Viciani, Dr Marco Barucci, Dr Francesco D'Amato
Affiliations: CNR-INO
Within the Earth-Moon-Mars (EMM) project, it is planned to develop a lunar infrastructure to monitor the global far- and mid-infrared (FIR/MIR) spectral radiance coming from the whole Earth’s disk. The Lunar Earth Temperature Observatory (LETO) will be part of this infrastructure and will consist of a Fourier transform spectro-radiometer (LETO-FTS) and an imager (LETO-IMG). To mimic LETO’s measurements, comprehensive software was developed at the CNR-National Institute of Optics. It is composed of an Earth-Moon orbital simulator, which provides the portion of the Earth viewed from the Moon as a function of time and of the position of the lunar base, and a radiative transfer algorithm to simulate the spectral radiance that will be observed by LETO. Lunar orography can also be considered in the orbital simulator. The radiative transfer algorithm simulates the spectral radiance emitted from each single pixel of the Earth's disk using the σ-FORUM fast radiative transfer model. Averaging these simulations then provides the total mean radiance measured by LETO. As an example, hourly simulations of the whole spectral radiance of the visible portion of the Earth’s disk are obtained for a specific day, for a lunar site located on the prime meridian at a latitude of -70°. Annual simulations were also performed to define the measurement requirements. The development of the radiative transfer algorithms will take advantage of the modelling activity conducted in preparation for the ESA FORUM mission, which will provide a similar spectral measurement from polar low Earth orbit some years ahead of the potential deployment of LETO on the lunar base.
In this presentation, the time variability of the signal over specific spectral bands between 100 and 1600 cm-1, and its correlations with the variability of geophysical parameters (such as the global outgoing longwave radiation, the average global temperature, and the water vapour amount), will be presented. This approach will make it possible to build a long-term dataset for monitoring climate variables.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Exploiting airborne far-infrared measurements to optimise an ice cloud retrieval

Authors: Sanjeevani Panditharatne, Prof Helen Brindley, Dr Caroline Cox, Richard Siddans, Jonathan Murray, Dr Richard Bantges, Dr Stuart Fox, Dr Rui Song
Affiliations: Imperial College London, RAL Space, NERC National Centre for Earth Observation, Met Office, University of Oxford
Cirrus clouds, high-altitude ice clouds, regularly cover ∼30% of the Earth, and can reflect shortwave radiation back to space or trap the outgoing longwave radiation. There is currently disagreement between climate model predictions of their net radiative effect and feedback processes, due to the variation in their micro- and macrophysical properties. Studies have indicated that the far-infrared region is highly sensitive to the microphysics of cirrus clouds, particularly their ice crystal habit. This sensitivity can be exploited in retrievals from FORUM observations, with studies showing that the inclusion of a few far-infrared channels alongside the mid-infrared has the potential to improve the cloud retrieval products and reduce their uncertainty. We test this for the first time on unique airborne observations of the upwelling far-infrared radiation. We use the Infrared and Microwave Sounding (IMS) retrieval scheme, developed at RAL Space, to perform an optimal estimation retrieval on an airborne observation of coincident far- and mid-infrared upwelling radiances taken above a cirrus cloud. Recent work has extended this retrieval scheme for use on FORUM, including testing on clear-sky retrievals from airborne observations that have been modified to mimic the FORUM Sounding Instrument's line shape. In this work, we simultaneously retrieve temperature, water vapour, cloud optical thickness, cloud effective radius, cloud top height, and ice crystal habit with and without far-infrared channels. To model the radiative effects of the ice crystal habit, we use the Yang et al. 2013 and Baum et al. 2014 Solid Columns and General Habit Mix bulk optical property models, and evaluate whether known uncertainties within them significantly impact the retrieval quality. All the retrievals are evaluated against lidar, cloud probe, and MODIS measurements of the cloud, as well as dropsonde measurements of the temperature and water vapour profile.
This cloud retrieval capability is the first of its kind to be tested on airborne observations of upwelling far-infrared radiances and will be available for use on FORUM observations.
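The optimal estimation retrieval mentioned above combines the measurement with prior information in the standard (Rodgers-style) linear update. A minimal sketch on a hypothetical toy problem, not the IMS scheme itself:

```python
import numpy as np

def oe_update(K, y, x_a, S_a, S_e):
    """One linear optimal-estimation step.

    K   : (m, n) Jacobian of the forward model
    y   : (m,) measurement vector
    x_a : (n,) prior (a priori) state
    S_a : (n, n) prior covariance
    S_e : (m, m) measurement-error covariance
    Returns the posterior state and its covariance.
    """
    Se_inv = np.linalg.inv(S_e)
    S_hat = np.linalg.inv(K.T @ Se_inv @ K + np.linalg.inv(S_a))
    x_hat = x_a + S_hat @ K.T @ Se_inv @ (y - K @ x_a)
    return x_hat, S_hat

# toy two-parameter retrieval from three channels (assumed numbers)
K = np.array([[1.0, 0.2], [0.5, 1.0], [0.1, 0.8]])
x_true = np.array([2.0, -1.0])
y = K @ x_true  # noiseless synthetic measurement
x_hat, S_hat = oe_update(K, y, x_a=np.zeros(2),
                         S_a=10.0 * np.eye(2), S_e=0.01 * np.eye(3))
```

With a loose prior and a noiseless measurement the posterior state lands very close to the truth, and the posterior covariance quantifies the remaining uncertainty.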
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: SPectroscopy In The Far InfraREd: Reducing Uncertainties in Water Vapour Spectroscopic Line Parameters for ESA’s FORUM Mission

Authors: Daniel Coxon, Jeremy Harrison, Chris Benner, Malathy Devi, Dominique Appadoo, Corey Evans
Affiliations: University of Leicester, National Centre for Earth Observation, College of William and Mary, Australian Synchrotron, Swinburne University of Technology
The upcoming ESA FORUM (Far-infrared Outgoing Radiation Understanding and Monitoring) mission will, for the first time, measure the Earth’s spectrally resolved outgoing longwave radiation (OLR) in the far-infrared (FIR) at high resolution. The FIR spectral region is highly significant as it accounts for more than half of the Earth’s emission to space. It thus provides a substantial contribution to the Earth’s greenhouse effect. The main goal of the FORUM mission is to evaluate the role of the FIR in shaping the current climate, in order to reduce the uncertainty in future climate change predictions and enable us to mitigate against its effects. The vast majority of the Earth’s OLR in the FIR region consists of absorptions from two gases, one of which is water vapour (H₂O). The radiative forcing (RF) of climate associated with increases in H₂O concentrations has a substantial contribution in the FIR at wavenumbers below 600 cm-1. The interpretation of measurements from FORUM is highly reliant on our ability to perform accurate radiative transfer calculations in the FIR region. Recent high-resolution measurements below 600 cm-1 have shown that there are significant deficiencies in the Voigt line parameters within the High resolution TRANsmission (HITRAN) database. We report here a suite of measurements of high resolution (0.00096 cm-1 and below) FIR spectra of both pure and air-broadened H₂O taken at the Australian Synchrotron. Standard laboratory light sources used by Fourier transform infrared (FTIR) spectrometers do not provide sufficient signal-to-noise ratios in the FIR below 700 cm-1. However, by utilising synchrotron light source facilities, which provide both an intense source of electromagnetic radiation and a wide band coverage across the FIR, it is possible to measure high resolution, high signal-to-noise spectra in the FIR region.
Our spectra are analysed using the Labfit multispectrum fitting program, which has been used for many years to derive non-Voigt line parameters for remote sensing. Through our analysis, we have begun to derive new H₂O line parameters including the zero-pressure line position, line intensity, self- and air-broadened halfwidth, self- and air-pressure induced line shifts, and speed dependence. We compare our results to those from the HITRAN database, to demonstrate the improvements effected by our study.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Selection of Informative Channels for Future FORUM Measurements Assimilation in Numerical Weather Prediction Models

Authors: Cristina Sgattoni, Stefano Della Fera, Ugo Cortesi, Samantha Melani, Alberto Ortolani, Marco Ridolfi
Affiliations: Institute of BioEconomy, National Research Council, Institute of Applied Physics, National Research Council, National Institute of Optics
FORUM (Far-infrared Outgoing Radiation Understanding and Monitoring) is the ninth Earth Explorer satellite mission, selected by the European Space Agency in 2019. FORUM, scheduled for launch in 2027, will host a Fourier transform spectrometer providing spectrally resolved measurements of the longwave Earth-emitted radiance in the range 6.25–100 μm (100 to 1600 cm-1). FORUM will also cover the far-infrared part of the spectrum, which accounts for about 50% of Earth's outgoing longwave radiation and, until recently, had never been systematically observed from space. These measurements are important because, at these wavelengths, outgoing radiation is affected by water vapour and ice clouds. Once operational, the FORUM spectrometer is anticipated to produce more than 10,000 spectra per day. A key goal of the NRRP-EMM (National Recovery and Resilience Plan - Earth Moon Mars) project is the assimilation of the FORUM dataset into weather prediction models. Our study focuses on selecting a subset of the most informative spectral channels within the FORUM spectral range (i.e., more than 5000 channels) to be used in weather prediction models through data assimilation techniques. Reducing information redundancy is crucial when managing large datasets for near-real-time applications. The methodology begins with calculating weighting functions, which quantify the sensitivity of transmittance to changes in altitude and are essential for analyzing the vertical distribution of atmospheric properties. Each weighting function is associated with a specific spectral channel, highlighting the altitude where the channel has the highest sensitivity to the observed radiance. Using the McMillin and Goldberg channel selection algorithm, the most informative weighting functions, and hence spectral channels, are selected independently for each atmospheric layer. The final set of channels to be assimilated is determined by aggregating the channels chosen across all layers.
To assess the reliability and effectiveness of the selected FORUM channels, we employ an optimal estimation framework to evaluate errors and quantify the information loss. Numerical experiments are conducted using 80 clear-sky scenarios derived from a diverse profile dataset designed to represent global and seasonal atmospheric variability. Simulated measurements are generated using the σ-FORUM fast radiative transfer model. Starting from pre-computed look-up tables of optical depths, σ-FORUM generates high-resolution radiances (0.01 cm⁻¹), which are convolved with the FORUM spectral response function.
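The final convolution step can be illustrated with a toy example; the Gaussian-shaped response function, grid, and line parameters below are assumptions for illustration only, not the actual FORUM spectral response function:

```python
import numpy as np

def convolve_with_srf(radiance, srf):
    """Convolve a high-resolution radiance spectrum (e.g. 0.01 cm^-1
    sampling) with an instrument spectral response function (SRF)
    sampled on the same grid spacing. The SRF is normalised to unit
    area so the mean radiance level is preserved."""
    kernel = srf / srf.sum()
    return np.convolve(radiance, kernel, mode="same")

# toy example: a flat spectrum with one narrow absorption line,
# smoothed by a Gaussian-shaped SRF
nu = np.arange(100.0, 110.0, 0.01)  # wavenumber grid [cm^-1]
rad = 1.0 - 0.5 * np.exp(-0.5 * ((nu - 105.0) / 0.02) ** 2)
srf_x = np.arange(-1.0, 1.0, 0.01)
srf = np.exp(-0.5 * (srf_x / 0.2) ** 2)
smoothed = convolve_with_srf(rad, srf)
```

After convolution the narrow line is broadened and much shallower, which is exactly the resolution-reducing effect the instrument response has on the 0.01 cm⁻¹ radiances.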
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Towards the Assimilation of Far Infrared Data: Case Studies With Low and Mid Complexity Models

Authors: Lorenzo Mele, Carlo Grancini, Paolo Ruggieri, Tiziano Maestri, Guido Masiello, Alberto Ortolani, Alberto Carrassi
Affiliations: Department of Physics and Astronomy, University Of Bologna, School of Engineering, University of Basilicata, National Research Council of Italy, Institute for the Bioeconomy (CNR-IBE), Consortium Laboratory of Environmental Monitoring and Modelling for the Sustainable Development (Consorzio LaMMA)
The FORUM (Far-Infrared Outgoing Radiation Understanding and Monitoring) mission is ESA's ninth Earth Explorer mission, with launch planned in 2027. FORUM will provide measurements of Earth's outgoing longwave radiation (OLR) in the FIR spectral region (100–667 cm⁻¹) at a spectral resolution never achieved before. The FIR spectrum is uniquely sensitive to the water vapour (WV) distribution in the upper troposphere and lower stratosphere (UTLS), to the surface emissivity at high latitudes, and to the optical properties of high-level clouds. Besides their tremendous impact on diagnosing atmospheric conditions in real time, these properties are viewed with great interest by the data assimilation (DA) community. The potential benefit of the upcoming FIR data in a DA process is not obvious, nor is it exempt from challenges to its feasibility. In the context of the ASI (Agenzia Spaziale Italiana) projects MC-FORUM (Meteo and Climate exploitation of FORUM) and in collaboration with FIT-FORUM (Forward and Inverse Tool for FORUM), we are studying how to assimilate FIR data and their impact within a DA cycle. The present work shows preliminary results along this perspective. We intentionally leverage a low-order model: a multilayer version of the Lorenz-96 model, already used in previous theoretical studies on the DA of satellite data. The use of a low-complexity model makes the experimental setup easy to control while reducing the computational burden. We adopt a state-of-the-art ensemble-based DA scheme. The main challenges of the study are (i) the construction of a simple yet adequate observational operator H to generate synthetic FIR observations, (ii) the vertical localization of such data, which are vertical integrals over an atmospheric column, and (iii) the capability to discriminate the impact of FIR measurements within the overall assimilation process. We will illustrate the potential of FIR against regular infrared data. 
This work is part of an incremental research strategy, whose next step consists of performing a similar analysis with an intermediate-complexity atmospheric model (SPEEDY, Simplified Parameterizations, primitivE-Equation DYnamics) and a fast radiative transfer model (σ-IASI/FORUM) to generate the FIR radiances to be assimilated. The experimental setup using SPEEDY will also be detailed.
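For reference, the single-layer Lorenz-96 dynamics underlying the multilayer version, together with a generic column-integrating observation operator, can be sketched as follows; the weighted-integral operator is our simplified stand-in for illustration, not the operator H developed in the project:

```python
import numpy as np

def lorenz96_tendency(x, forcing=8.0):
    """Tendency of the single-layer Lorenz-96 model,
    dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F,
    with cyclic boundary conditions."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def step_rk4(x, dt=0.01, forcing=8.0):
    """Advance the state by one fourth-order Runge-Kutta step."""
    k1 = lorenz96_tendency(x, forcing)
    k2 = lorenz96_tendency(x + 0.5 * dt * k1, forcing)
    k3 = lorenz96_tendency(x + 0.5 * dt * k2, forcing)
    k4 = lorenz96_tendency(x + dt * k3, forcing)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

def observe_column(layers, weights):
    """Weighted vertical integral of a (n_layers, n_vars) state,
    mimicking how FIR radiances integrate information over an
    atmospheric column rather than sampling a single level."""
    return weights @ layers
```

The column-integrating nature of `observe_column` is precisely what makes the vertical localization of such observations, challenge (ii) above, non-trivial.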
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A.10.01 - POSTER - EO for Mineralogy Geology and Geomorphology

Earth observation is an important source of information for new and improved geological, mineralogical, regolith, geomorphological and structural mapping and is essential for assessing the impact of environmental changes caused by climatic and anthropogenic threats. Given the increasing demand for mineral and energy resources and the need for sustainable management of natural resources, the development of effective methods for monitoring and cost-effective and environmentally friendly extraction is essential.
In the past, the use of multispectral satellite data from Landsat, ASTER, SPOT, ENVISAT, Sentinel-2 or higher-resolution commercial missions, also in combination with microwave data, has provided the community with a wide range of possibilities to complement conventional soil surveys and mineralogical/geological mapping and monitoring, e.g. for mineral extraction. In addition, discrimination capabilities have been enhanced by hyperspectral data (pioneered by Hyperion and PROBA), which are now available from several operational research satellites and will soon be complemented by the operational CHIME mission.
The session aims to collect contributions presenting different techniques to process and simplify large amounts of geological, mineralogical, and geophysical data, to merge different datasets, and to extract new information from satellite EO data, with a focus on mine site lifecycles.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Geospatial Artificial Intelligence Analysis for Tailings Storage Facilities based on Satellite Earth Observation

Authors: PhD. Jan Růžička, Robin Bouvier, Lukáš Brodský, Ing. Tomáš Bouček, Associate Professor Mike Buxton, PhD. Feven Desta, PhD. Mwansa Chabala, PhD. Martin Landa, Francisco Luque, PhD. Glen Nwaila, Shruti Pancoli
Affiliations: Charles University, Department of Applied Geoinformatics and Cartography, Cybele Lawgical, Lda, Czech Technical University in Prague, Department of Geomatics, University of Delft, Resource Engineering, Copperbelt University, ISMC-IBERIAN SUSTAINABLE MINING CLUSTER, University of Witwatersrand
The Geospatial Artificial Intelligence Analysis for Tailings Storage Facilities (GAIA-TSF) project aims to advance tailings storage facility (TSF) monitoring and risk assessment by integrating satellite Earth observation (EO), machine learning (ML), and in-situ data. GAIA-TSF addresses limitations in current TSF monitoring, including disconnected data pipelines, challenges in stakeholder-relevant integration, and time-consuming analysis of large datasets. The project objectives are to: (1) establish a set of key variables for monitoring TSF anomalies and the risk of failures, and derive the design of a prototype supporting satellite-based monitoring; (2) identify synergies between suitable Satellite Earth Observation (SatEO) technologies, mining engineering data, and machine learning to foster the efficiency of TSF operational monitoring; and (3) design, develop, and optimise a prototype integrating satellite data, in-situ data, and ML models to support the identification, explanation, and prediction of TSF risk. Innovative satellite-based methods, including multispectral and hyperspectral imaging, will be combined with geotechnical and environmental datasets to analyze key variables such as soil stability and hydrological parameters. These datasets are used to train an advanced ML/DL architecture to detect anomalies and assess risk over time. In addition, eXplainable AI (xAI) techniques increase transparency by explaining the relationships between key variables and risk factors, facilitating stakeholder engagement and informed decision-making. The first case study, the 2022 Jagersfontein tailings dam collapse, demonstrates the potential of the GAIA-TSF approach: a structural failure of a mine tailings dam near Jagersfontein, in the Free State province of South Africa, which resulted in a mudslide. 
Sentinel-2 time series data and Random Forest algorithms are used to monitor anomalies and assess the environmental impact after the disaster. The approach transforms time series into features using moving averages, lagged values, and differences. The concept of lag refers to the use of previous values of a time series as features to predict the current or future class. The appropriate choice of lag depends on the temporal dynamics of the data collected by Sentinel-2. This approach revealed widespread damage to infrastructure, ecosystems, and agricultural land, underscoring the need for robust TSF monitoring systems.
Acknowledgments: The authors gratefully acknowledge the support of the European Union through the Horizon Europe Framework Programme for Research and Innovation under the project Geospatial Artificial Intelligence Analysis for Tailings Storage Facilities (GAIA-TSF), grant agreement number 101180263. The European Union Agency for the Space Programme (EUSPA) manages this project.
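A minimal sketch of the described feature construction (lagged values, a moving average, and differences) for a per-pixel time series is given below; the function and column names are our own, and the resulting table would feed a Random Forest classifier:

```python
import numpy as np
import pandas as pd

def make_time_series_features(series, lags=(1, 2, 3), window=3):
    """Turn a per-pixel band/index time series into tabular features
    (lagged values, moving average, first difference) suitable as
    inputs to a Random Forest classifier."""
    df = pd.DataFrame({"value": np.asarray(series, dtype=float)})
    for lag in lags:
        df[f"lag_{lag}"] = df["value"].shift(lag)        # previous values
    df[f"ma_{window}"] = df["value"].rolling(window).mean()  # moving average
    df["diff_1"] = df["value"].diff()                    # change vs previous step
    return df.dropna()                                   # drop warm-up rows
```

The choice of `lags` corresponds to the statement above that the appropriate lag depends on the temporal dynamics of the Sentinel-2 acquisitions.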
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: The Role of Copernicus Data and Copernicus Contributing Missions to Raw Materials Mining Life Cycle: Outcomes From S34I

Authors: Ana Claudia Teodoro, Joana Cardoso-Fernandes, Maria Mavroudi, Rushaniia Gubaidullina, Michael Tost, Krištof Oštir, Tanja Grabrijan, Mihaela Gheorghe, Marta Alonso Fernández, Myriam Montes, Alicia Garcia, Maria Fernandez, Antonio Peppe, Francesco Falabella, Fabiana Calò, Andreas Knobloch, Nike Luodes, Fahimeh Farahnakian, Francisco Javier González Sanz, Wai L. Ng-Cutipa, Ana B. Lobato, Irene Zananiri, Vaughan Williams, Matthias Siefert, Hannes Blaha, Marko Savolainen, Bjorn Fossum, Enoc Sanz Ablanedo, Mercedes Suarez, Petri Nygren, Victoria Jadot, Patricia Santos
Affiliations: Instituto de Ciências da Terra, Departamento de Geociências, Ambiente e Ordenamento do Território, Faculdade de Ciências, Universidade do Porto, Montanuniversität Leoben, University of Ljubljana, Faculty of Civil and Geodetic Engineering, GMV Innovating Solutions SRL, International Center for Advanced Materials and raw materials of Castile and Leon (ICAMCYL), Iberian Sustainable Mining Cluster (ISMC), National Research Council of Italy, Institute for the Electromagnetic Sensing of the Environment (CNR), Beak Consultants GmbH, Geological Survey of Finland (GTK), Geological Survey of Spain (IGME-CSIC), Hellenic Survey of Geology & Mineral Exploration (HSGME), Aurum Exploration, Omya, VTT Technical Research Centre of Finland Ltd, Ecotone AS, University of Léon, Department of Mining Topography and Structure, University of Salamanca, Geology Department, SPECTRAL MAPPING SERVICES SMAPS OY, Eurosense Belfotop BV/SRL
The Horizon Europe S34I project aimed to explore new data-driven methods to analyse Earth Observation (EO) data for systematic mineral exploration and for continuous monitoring of extraction, closure and post-closure activities, and to increase European autonomy regarding raw materials. This work presents the principal outcomes of the S34I project, focusing on the processing of Copernicus and Copernicus Contributing Missions (CCM) data. S34I results were validated and demonstrated at six industry-relevant sites, covering all phases of the mining life cycle: (i) onshore exploration (Áramo mine, Spain); (ii) exploration in the coastal-marine transition (Ria de Vigo, Spain); (iii) active open-pit mine (Gummern mine, Austria); (iv) closed mines with acid mine drainage (AMD) problems (Lausitz in Germany and Outokumpu in Finland); and (v) closed mine with subsidence issues (Aijala mine, Finland). In the onshore pilot, the potential of Sentinel-1 and Sentinel-2 data was assessed using classical image processing techniques such as RGB combinations, band ratios, selective Principal Component Analysis (PCA) and unsupervised learning (K-means). A new ensemble artificial intelligence (AI) method was developed by integrating Support Vector Machines (SVM), Random Forest (RF), and Artificial Neural Networks (ANNs) to exploit Sentinel-2, Landsat-9 and PRISMA data. Different types of data were assessed to improve EO-based structural mapping, namely satellite Synthetic Aperture Radar (SAR) data from Sentinel-1, ALOS PALSAR-2 and COSMO-SkyMed and airborne Light Detection and Ranging (LiDAR) data. New airborne LiDAR and hyperspectral datasets were acquired. An AI algorithm was developed for automated hyperspectral airborne data preprocessing requiring minimal ground truth data. Airborne hyperspectral data were processed through a combination of PCA, endmember extraction, K-means clustering, band ratios, minimum wavelength mapping and Spectral Angle Mapper (SAM). 
Data fusion of LiDAR and PRISMA data was combined with geochemistry analysis utilising the Self-Organizing Map (SOM) technique and the K-means clustering algorithm to improve the knowledge of cobalt distribution. Mineral predictive maps were produced, integrating airborne hyperspectral, geological and structural data with ground spectral and geochemical data using ANNs. A spectral library was compiled for ground truth/calibration of EO-related methods and to determine the spectral signatures of outcropping rocks and soils. Classical image processing techniques on Sentinel-2 and Landsat-9 data were tested for coastal-marine transition exploration, such as RGB combinations, band ratios, selective PCA and K-means. Additionally, optical satellite data from the WorldView-2 and -3 platforms was processed to detect and map the placer deposits using spectral unmixing and Object-Based Image Analysis (OBIA) methods. Spectral unmixing was also applied to EnMAP hyperspectral data. The potential of Sentinel-1 radar data for beach placer exploration was assessed through unsupervised K-means classification, using the textural analysis results as inputs. A Sentinel-2 satellite-derived bathymetry (SDB) processing chain was developed based on the ACOLITE processor and ensemble machine learning algorithms. In the scope of the S34I project, an innovative Remotely Operated Vehicle (ROV) carrying an Underwater Hyperspectral Imaging (UHI) system was deployed to acquire data from the seafloor. Complementary spectral libraries were created considering seafloor, beach, coastal rock outcrops and heavy mineral concentrate samples. Thematic maps were produced at regional and local scales to show geological features integrated into the coastal environment, considering the background and new data collected in the project (e.g., UAV surveys and studies of the samples collected). 
In addition, for the first time, we know the extent (in square metres) of surface placer occurrences on beaches. For the active mine representing the extraction phase of the mining life cycle, Pléiades Neo tri-stereo and WorldView-2 satellite images were used to produce digital elevation models (DEMs), later compared to the DEMs produced from high-resolution UAV surveys conducted within the scope of the project. Volume maps of mining waste deposits and stockpiles were created using Structure from Motion (SfM) UAV photogrammetry. Low-cost dual-frequency GNSS sensors were utilised to establish low-cost GNSS Monitoring Stations (LGMS) to estimate displacements. Deep Learning (DL) techniques were used to enhance the resolution of Sentinel-1 SAR data by a factor of 4 using COSMO-SkyMed satellite data. Amplitude-based change detection techniques were applied to Sentinel-1 data. Three-dimensional (3D) ground displacement maps were produced using multitemporal Sentinel-1 and COSMO-SkyMed datasets processed through advanced interferometric synthetic aperture radar (InSAR) techniques. In the closed mines with AMD problems, Sentinel-2, PRISMA, WorldView-3 and UAV data were employed to map AMD constituents using both supervised (ANN, logistic regression, RF, K-nearest neighbours) and unsupervised (SOM, K-means) approaches. Lastly, the ground monitoring tools developed at the extraction site were tested in the Aijala closed mine with subsidence problems. In conclusion, S34I developed different techniques to process and extract new information from EO data, particularly Copernicus and CCM satellite data. EO data was integrated with geological, mineralogical and geochemical data whenever possible. In this way, S34I outcomes will support mining activities in all life cycle phases, contributing to the path towards responsible mining while creating social and economic impact through EO uptake. This study is funded by the European Union under grant agreement no. 
101091616 (https://doi.org/10.3030/101091616), project S34I – SECURE AND SUSTAINABLE SUPPLY OF RAW MATERIALS FOR EU INDUSTRY.
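Two of the classical techniques applied in the pilots above, band ratios and selective PCA, can be sketched as follows; this is a simplified illustration on arbitrary band subsets, not the project's implementation:

```python
import numpy as np

def band_ratio(band_a, band_b, eps=1e-6):
    """Simple band ratio of two co-registered 2-D band arrays, of the
    kind used to highlight e.g. iron-oxide or clay alteration."""
    return band_a / (band_b + eps)

def selective_pca(bands):
    """PCA on a chosen subset of bands ('selective' PCA): stack the
    bands as columns, centre them, and project onto the eigenvectors
    of the covariance matrix. bands: list of 2-D arrays, equal shape."""
    stack = np.stack([b.ravel() for b in bands], axis=1).astype(float)
    stack -= stack.mean(axis=0)
    cov = np.cov(stack, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]      # strongest component first
    scores = stack @ eigvecs[:, order]
    return [scores[:, i].reshape(bands[0].shape) for i in range(len(bands))]
```

Restricting the PCA to a targeted band subset is what makes it "selective": the leading component then concentrates the variance shared by those bands, while the trailing components isolate the subtle differences of mineralogical interest.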
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Pattern-based Sinkhole Detection In Kazakhstan From Sentinel-1 And -2 Data

Authors: M. Eng. Simone Aigner, Andreas Schmitt, M. Sc. Sarah Hauser
Affiliations: Hochschule München University Of Applied Sciences, Institute for Applications of Machine Learning and Intelligent Systems (IAMLIS)
Sinkholes are prevalent in karst regions and pose significant geohazards due to their sudden appearance and potential to disrupt landscapes and infrastructure. These geological structures result from the collapse of subterranean cavities. Typically, sinkholes manifest as funnel- or box-shaped depressions caused by the erosion of limestone, commonly occurring in areas where groundwater flows through porous rock layers and eventually causes these cavities to collapse [1]. Recognized as potential geohazards, sinkholes threaten structural stability and pose environmental risks by linking surface water with aquifers, thereby elevating the risk of drinking water contamination. The critical nature of these events necessitates robust monitoring and detection systems to identify vulnerable areas early and implement protective measures. This need is particularly acute in Kazakhstan, a region where sinkhole incidents are notable but understudied, posing risks to crucial development projects such as the Hyrasia One initiative. This ambitious project, aiming to harness Kazakhstan's green hydrogen potential, depends on the geological stability of the terrain. Comprehensive mapping and understanding of sinkhole formation within the project's area are imperative to mitigate risks and safeguard future infrastructural integrity. Current methodologies for detecting sinkholes predominantly rely on digital terrain models (DTMs) and manual interpretations using geographic information systems (GIS). These methods, effective under certain conditions, face several limitations that can impede their utility, especially in remote places like Kazakhstan:
• Resolution and Availability: High-resolution DTMs are crucial for accurate sinkhole detection. However, in many regions, especially those that are less economically developed or geographically accessible, such data may not be readily available or may be prohibitively expensive to acquire.
• Manual Effort and Expertise: The traditional use of GIS for sinkhole detection typically requires substantial manual effort and considerable expertise, making the process time-consuming and dependent on the availability of skilled personnel.
• Environmental Limitations: Methods that rely on physical surveys or indirect indicators can be less effective in regions with extensive vegetation cover or in urban areas where built environments obscure the ground surface.
• Dynamic Conditions: Many existing methods do not adequately account for the dynamic nature of sinkholes that develop or change rapidly due to factors like extreme precipitation, making real-time or near-real-time monitoring more challenging.
Given these limitations, there is a clear need for an innovative approach that enhances the efficiency, accuracy, and applicability of detection technologies. We address these needs by harnessing the capabilities of the European Space Agency's Sentinel satellites in the Copernicus program. Utilizing the 10 m red, green, blue, and near-infrared bands of the MultiSpectral Instrument (MSI) on ESA's Sentinel-2 satellites, this novel detection process moves away from the conventional reliance on artificially illuminated shaded images produced from DTMs. By employing natural sunlight as the light source in our analyses, we effectively capture distinctive shadow patterns as indications of sinkholes. These patterns are particularly pronounced in the sparsely vegetated terrains of Kazakhstan, where natural light provides a more authentic and detailed view of the terrain. Geological features are highlighted more accurately and extensively by this method of natural solar illumination than by conventional approaches using computer-aided design (CAD) rendering of DTMs and visual inspection. 
As the principles of processing DTM and satellite data align, this automatic image processing methodology can also easily be applied to terrain data from airborne LiDAR flights in built-up and vegetated areas, ensuring broad applicability. The workflow for automated sinkhole detection using satellite data is organized into three central steps:
· data pre-processing and fusion,
· sinkhole detection by morphology, and
· geospatial analysis of the detected structures.
The process begins with intensive processing of optical satellite data to enhance the visibility of geological structures. We specifically utilize datasets from varying seasonal conditions, capturing the humid vegetative state in February and the arid bare landscape in August. This approach accommodates environmental variations that significantly impact visibility and detection capabilities. A temporal-spectral fusion technique [2] is applied over time, which is critical for enhancing our ability to discern subtle variations in the terrain. Additionally, advanced techniques such as Principal Component Analysis (PCA) and the application of Kennaugh elements are employed to maximize the definition and clarity of surface structures, crucial for accurately identifying potential sinkholes. In the main phase of detection, we employ a multi-scale approach using a Laplacian-of-Gaussian (LoG)-like filter similar to the one used for Multiscale Multilooking [3]. This filter is exceptionally well suited to identifying the round, funnel-shaped depressions characteristic of sinkholes, which appear as the photonegative of the radar point scatterers for which it was originally designed [3]. It is strategically applied at multiple scales to accurately pinpoint sinkholes of various sizes, with the lowest points (argmin) serving as key indicators. To minimize background noise and enhance the detection accuracy, thresholds are carefully applied to these low points. 
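A minimal sketch of such a multi-scale LoG detection step is given below, using SciPy; the scales and threshold are assumed values for illustration, and the operational filter and parameters of the study differ:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def detect_depressions(image, sigmas=(2, 4, 8), threshold=0.05):
    """Multi-scale Laplacian-of-Gaussian detector for round,
    funnel-shaped depressions: a pit (local minimum) produces a
    strong positive LoG response at the scale matching its radius.
    Returns a boolean mask of candidate sinkhole pixels."""
    responses = np.stack([
        sigma ** 2 * gaussian_laplace(image.astype(float), sigma)
        for sigma in sigmas                 # scale-normalised responses
    ])
    best = responses.max(axis=0)            # strongest response per pixel
    return best > threshold                 # threshold suppresses noise
```

Taking the maximum over scale-normalised responses is what lets depressions of various sizes be pinpointed with a single pass, with the thresholding mirroring the noise-suppression step described above.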
Challenges posed by low-growing vegetation, which can lead to false detections, are addressed through the development of the Combined Vegetation Doline Index (CDVI). This index effectively differentiates between low vegetation and actual dolines through the bitemporal fusion of Sentinel-1 and Sentinel-2 data via hypercomplex data fusion [2], providing a clear distinction that is crucial for accurate classification. Following the detection phase, a comprehensive spatial analysis is conducted. This analysis maps the distribution of detected sinkholes and examines their clustering, which reveals valuable new insights into potential underlying geological processes. The sinkholes identified by the LoG filter are combined with the results from the CDVI to produce a final classification. This classification sorts the sinkholes into three categories: confirmed sinkholes, sinkholes with vegetation overlay, and probable sinkholes requiring further investigation. This advanced, systematic workflow not only refines the sinkhole detection process but also integrates its findings into broader geological and infrastructural planning. Such integration is crucial for managing risks and supporting sustainable development in sinkhole-prone regions like Kazakhstan, where understanding and mitigating geohazards are essential for the safety and sustainability of development projects. The present study on automated sinkhole detection achieves an accuracy of 92% for sinkholes, especially in the arid months, when compared with the merged reference dataset. The comparison of the individual datasets with the developed reference dataset shows that the principal component analysis of the arid month performs best, with a precision of 88% for sinkholes. The spatial analysis confirms the hypothesis that sinkhole clusters often run parallel to surface watercourses. 
This observation allows us to draw conclusions about the underlying watercourses and geological processes in the area. It further enables the designation of territories with few to no dolines, which can be considered geologically stable in the long term. A significant advantage of this methodology is its transferability to other regions through the use of freely available and up-to-date remote sensing data (Sentinel-1 and -2). Especially in areas with limited geological reference data, such as Kazakhstan, this technique proves to be a most useful tool for the comprehensive mapping of sinkholes and the derivation of correlations. The process is also automated to such an extent that large areas can be mapped rapidly. For the first time, this study enables the comprehensive annual mapping of sinkholes, which is especially relevant in the context of climate change: changes in precipitation patterns, such as an increase in heavy rainfall events or longer periods of drought, can have a significant impact on the development and frequency of sinkholes. Monitoring also plays a role in water protection, crisis management, and soil conservation. Automated sinkhole detection therefore provides the basis for long-term monitoring of these geological structures in order to identify and minimize potential risks at an early stage.
[1] D. Ford and P. D. Williams, Karst Hydrogeology and Geomorphology, John Wiley & Sons, 2007.
[2] A. Schmitt, A. Wendleder, R. Kleynmans, M. Hell, A. Roth, and S. Hinz, "Multi-Source and Multi-Temporal Image Fusion on Hypercomplex Bases," Remote Sens., vol. 12, 943, 2020.
[3] A. Schmitt, A. Wendleder, and S. Hinz, "The Kennaugh element framework for multi-scale, multi-polarized, multi-temporal and multi-frequency SAR image preparation," ISPRS J. Photogramm. Remote Sens., vol. 102, pp. 122–139, Apr. 2015, doi: 10.1016/j.isprsjprs.2015.01.007.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Fusing EnMAP and Sentinel for resolution enhanced geological mapping

Authors: Sam Thiele, Dr Rupsa Chakraborty, Dr Parth Naik, Dr Richard Gloaguen
Affiliations: Helmholtz Institute Freiberg, Helmholtz Zentrum Dresden Rossendorf, Centre for Advanced Systems Understanding, Helmholtz Zentrum Dresden Rossendorf
The emerging generation of hyperspectral satellites provides a wealth of new data for geological mapping, allowing subtle mineralogical signatures and trends to be identified from space. In areas with suitable exposure this will facilitate improved geological mapping, allow exploration for much-needed mineral deposits, and help manage environmentally hazardous geomaterials (e.g., tailings, acid mine drainage). However, the inevitable compromise between coverage and resolution currently limits hyperspectral data to 30×30 m pixels. This reduces the resolvability of geological structures relative to higher spatial sampling (but lower spectral resolution) sensors such as Sentinel-2 or RapidEye. Various techniques have been proposed for super-resolving hyperspectral data, but these often favour cosmetics (e.g., injecting spatial features related to overall pixel brightness while adding little additional spectral information) over spectral fidelity. Furthermore, many existing tools can introduce unrealistic spectral distortions and so fail to preserve spectral integrity. In this contribution we present a novel data fusion and resolution enhancement approach that combines low spatial resolution (high ground sampling distance) hyperspectral data with high spatial resolution multispectral or RGB information to derive a resolution-enhanced hyperspectral image. Unlike established approaches, our method uses a small (9×9 pixel) sliding window to learn very local relationships, allowing correlations that would be invalid at the large scale to inform the resolution enhancement. Additionally, our approach is inherently conservative, adding spatial detail where informative correlations are found but defaulting to the original (lower spatial resolution) hyperspectral data where only poor relationships are found. This results in a balance of meaningfully enhanced spatial detail, while rigorously preserving hyperspectral information and features. 
We tested our approach using EnMAP and Sentinel-2 data covering the Lofdal carbonatite complex in Namibia. These data nicely capture the regional geological structures, including dykes, faults and large-scale folding. After resolution enhancement using our data fusion approach, we were able to resolve intricate and geologically sensible marker horizons (bedding) that were otherwise difficult to detect using the EnMAP data alone. Similarly, band ratio analyses performed to map carbonate minerals on the 30-m EnMAP and 10-m resolution-enhanced data gave highly correlated results. Both resolve carbonatite dykes, but in much more detail after the resolution enhancement. Spectral interpretation confirmed the hyperspectral integrity of the resolution-enhanced dataset, with meaningful changes in absorption depths for pixels containing significant sub-pixel variation (e.g., around the dykes), and little change in spatially homogeneous regions. In conclusion, we suggest that our novel resolution enhancement method is able to fuse the spatial resolution of RGB and/or multispectral sensors with the rich hyperspectral information available from EnMAP. Spectral analyses and geological interpretations of the resolution-enhanced data confirm that meaningful spectral and spatial information was added during the fusion process, allowing better discrimination of geologically relevant features (including bedding and dykes). We also speculate that in many (arid) regions our resolution enhancement approach could help to separate bare-earth and vegetation spectra, enabling more accurate geological (and vegetation) mapping.
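A simplified illustration of per-window fusion with a conservative fallback is sketched below; it assumes a single high-resolution band, a 3×3 coarse window, and plain least squares, whereas the method described above uses a 9×9 window and is considerably more elaborate:

```python
import numpy as np

def local_fusion(hyperspectral, pan, scale=3, win=3, min_corr=0.7):
    """Sharpen one hyperspectral band with a co-registered
    high-resolution band using per-window linear models.

    hyperspectral: (H, W) coarse band; pan: (H*scale, W*scale) fine band.
    For each coarse pixel, a linear fit between the window's coarse
    hyperspectral values and the pan image averaged to coarse scale is
    applied to the fine pan pixels; where the local correlation is
    weak, the original coarse value is kept (conservative fallback)."""
    H, W = hyperspectral.shape
    pan_coarse = pan.reshape(H, scale, W, scale).mean(axis=(1, 3))
    out = np.repeat(np.repeat(hyperspectral, scale, 0), scale, 1).astype(float)
    r = win // 2
    for i in range(H):
        for j in range(W):
            i0, i1 = max(0, i - r), min(H, i + r + 1)
            j0, j1 = max(0, j - r), min(W, j + r + 1)
            x = pan_coarse[i0:i1, j0:j1].ravel()
            y = hyperspectral[i0:i1, j0:j1].ravel()
            if x.std() < 1e-9 or y.std() < 1e-9:
                continue  # flat window: keep the coarse value
            if abs(np.corrcoef(x, y)[0, 1]) < min_corr:
                continue  # weak relationship: keep the coarse value
            slope, intercept = np.polyfit(x, y, 1)
            fine = pan[i * scale:(i + 1) * scale, j * scale:(j + 1) * scale]
            out[i * scale:(i + 1) * scale, j * scale:(j + 1) * scale] = (
                slope * fine + intercept
            )
    return out
```

The two `continue` branches capture the conservative behaviour described above: spatial detail is injected only where the local correlation is informative, and the original hyperspectral values are retained everywhere else.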
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Unravelling the Evolution of Alluvial Fans in the Northern Sultanate of Oman: Applications of Remote Sensing and Deep Learning

Authors: Andrea Pezzotta, Lukas Brodsky, Mohammed Al Kindi, Michele Zucali, Andrea Zerboni
Affiliations: Dipartimento di Scienze della Terra “A. Desio”, Università degli Studi di Milano, Department of Applied Geoinformatics and Cartography, Charles University, Earth Sciences Consultancy Centre
Alluvial fans are distinctive fluvial landforms that develop at the abrupt widening of mountain valleys due to decreasing slope. They can occur in various environments and geomorphological settings (Ventra & Clarke, 2018). Their formation is influenced by the interplay between tectonics and climatic conditions. Alluvial fans are conical landforms formed along a drainage system where a topographic gradient exists and deposition prevails over erosion. In such a context, a fan results from aggrading sediments that spread out from a sedimentary source through multiple channels radiating from the apex, which may shift over time (Blair & McPherson, 2009). The activation/deactivation of channels ultimately depends on the local hydrological regime. As a consequence, alluvial fans preserve evidence of various generations of inactive channels (paleochannels) and lateral migrations of active streams, allowing the establishment of a relative chronology based on their geometrical relationships. In the northern Sultanate of Oman, the southern margin of the Al-Hajar Mountains is flanked by extensive alluvial fans, forming vast and coalescing bajada-type landforms. The semi-arid to arid climate of the region, combined with the almost complete absence of plant cover, allows the observation of the bare surface of the alluvial fans, which exhibits intricate and complex patterns of (paleo-)drainage systems. These consist of a series of exhumed gravel ridges, representing alluvial fan systems with ages ranging from the Miocene to the Pleistocene (Blechschmidt et al., 2009). Notably, previous works (Maizels, 1987, 1990; Maizels & McBean, 1990) have identified up to 14 generations of paleochannels along the alluvial fan at Barzaman. The current availability of multispectral high-resolution SPOT 6 and 7 satellite imagery offers an unprecedented opportunity to investigate these paleochannel systems across an area of approximately 1780 km². 
Understanding these landforms is critical for reconstructing the past hydrological conditions of the Barzaman alluvial fan, whose occurrence at the foothills of the mountain belt reflects the combined climatic and tectonic influence on its evolution. However, manual mapping is time-consuming, and the subjective interpretation of fluvial features and paleochannels can lead to inconsistencies, characterised by different levels of generalisation, in the assessment of hydrological history and landscape evolution. To address these challenges, we propose implementing Machine Learning/Deep Learning techniques to automate the detection of fluvial features and the tracing of paleochannel paths, thereby enhancing mapping accuracy and consistency and ultimately facilitating a better understanding of alluvial fan dynamics. Preliminary results involve the manual mapping of paleochannels and fluvial features in a portion of the area using 4-band SPOT satellite imagery with a resolution of 6 m. This approach enables the recognition of several generations of paleochannels based on geometric relationships, including intersections and overlaps. The mapped area serves as a training dataset for developing algorithms that leverage Deep Learning techniques, specifically Convolutional Neural Networks (CNNs). This step allows for the production of a probabilistic map for identifying paleochannel systems, differentiating between paleochannels and the underlying substrate. CNN models are particularly effective for the segmentation of alluvial fans from multispectral imagery due to their capacity to capture both spatial and spectral features, which are essential for accurate identification and mapping. Through sequences of convolution operators, CNNs learn the spatial context that distinguishes paleochannel systems from other landforms. Furthermore, they can generalise effectively across diverse geographic regions and environmental conditions.
These characteristics enable the model to automate the segmentation process over extensive datasets. In particular, a U-Net architecture (Ronneberger et al., 2015) was selected for alluvial fan segmentation. The U-Net model's encoder-decoder architecture is particularly well suited to image segmentation tasks, where the accurate delineation of boundaries is of paramount importance. The model captures both low-level and high-level features thanks to its contracting (encoder) and expansive (decoder) paths, which progressively reduce and then recover spatial resolution. A comprehensive learning pipeline (PyTorch) was developed around the U-Net architecture, enabling the mapping of fluvial landforms in analogous environmental contexts. Specifically, the preliminary dataset comprises 387 multispectral images, of which 270 were augmented and allocated for training over 50 epochs, while 117 were used for testing. The model's performance was evaluated using accuracy metrics, achieving 93% accuracy on the testing set. This research is expected to yield a high-resolution geomorphological map of the Barzaman alluvial fan and to develop Deep Learning algorithms for the automated tracing and identification of fluvial and geomorphological features. The development of this approach will enhance the potential for remote mapping of alluvial fans and other fluvial landforms in semi-arid and arid environments, significantly advancing the understanding of alluvial fan dynamics and providing valuable insights for future research in fluvial geomorphology. We kindly acknowledge the support of ESA for providing access to the SPOT imagery through project PP0100418 (A. Pezzotta), as well as the Erasmus+ Traineeship 2024/25 grant for enabling A. Pezzotta to conduct this research at Charles University.
References:
• Blechschmidt, I., Matter, A., Preusser, F., & Rieke-Zapp, D. (2009) - Monsoon triggered formation of Quaternary alluvial megafans in the interior of Oman. Geomorphology, 110, 128–139. https://doi.org/10.1016/j.geomorph.2009.04.002
• Maizels, J.K. (1987) - Plio-Pleistocene raised channel systems of the western Sharqiya (Wahiba), Oman. In: Frostick, L., & Reid, I. (eds), Desert Sediments: Ancient and Modern. Geological Society Special Publication, 35, 31-50.
• Maizels, J.K. (1990) - Raised channel systems as indicators of palaeohydrologic change: a case study from Oman. Palaeogeography, Palaeoclimatology, Palaeoecology, 76, 241-277.
• Maizels, J.K., & McBean, C. (1990) - Cenozoic alluvial fan systems of interior Oman: palaeoenvironmental reconstruction based on discrimination of palaeochannels using remotely sensed data. In: Robertson, A.H.F., Searle, M.P., & Ries, A.C. (eds), The Geology and Tectonics of the Oman Region. Geological Society Special Publication, 49, 565-582.
• Ronneberger, O., Fischer, P., & Brox, T. (2015) - U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv:1505.04597.
• Ventra, D., & Clarke, L.E. (2018) - Geology and Geomorphology of Alluvial and Fluvial Fans: Terrestrial and Planetary Perspectives. Geological Society of London, 440. https://doi.org/10.1144/SP440
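The core operation that lets a CNN such as U-Net respond to spatial context is the 2D convolution. The toy sketch below (an illustration only, not the authors' PyTorch pipeline) slides a vertical-edge kernel over a synthetic scene in which a bright ridge stands in for an exhumed gravel paleochannel, showing how linear features produce strong responses:

```python
def conv2d(img, kernel):
    """Valid-mode 2D cross-correlation: the basic op a CNN layer applies."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img), len(img[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            s = sum(img[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A toy "scene": a bright vertical ridge (the paleochannel) on dark substrate.
scene = [[0, 0, 1, 0, 0] for _ in range(5)]

# A Sobel-style vertical-edge kernel: responds where intensity changes left-to-right.
kernel = [[-1, 0, 1],
          [-2, 0, 2],
          [-1, 0, 1]]

response = conv2d(scene, kernel)  # strong +/- responses flank the ridge
```

A trained network stacks many such filters, with the kernels learned from the hand-mapped training data rather than hand-chosen as here.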
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Advancing Mineral Identification Through Image Super Resolution (SIR) Methods: A Case Study in Kosovo

Authors: Martyna Durda, Bartosz Skóra, Zuzanna Słobodzian, Jakub Sumara, M.Sc Katarzyna Adamek
Affiliations: AGH University of Krakow, Department of Geoinformatics and Applied Computer Science
The integration of remote sensing and image enhancement technologies has the potential to revolutionize mineral identification and geological mapping. This study investigates the application of image super resolution (SIR) methods to improve the accuracy of satellite imagery for mineral identification in the Žegovac Mountains of Kosovo. The project aims to address the challenges posed by the low resolution of satellite data, which can limit the precision of geological and related data interpretations, by utilizing advanced SIR models to enhance image quality. The methodology encompasses three key steps. First, a selection of SIR models will be applied to satellite imagery to enhance spatial resolution while preserving spectral integrity. Second, spectral curves obtained from the analysis of in situ rock samples from Kosovo using a hyperspectral camera will be compared to those derived from the SIR-enhanced satellite images. This comparison will validate the effectiveness of SIR methods in accurately reproducing mineral-specific spectral signatures. Finally, areas of occurrence for specific rock types will be depicted on a map of Kosovo, showcasing the potential of SIR-enhanced satellite imagery in supporting mineral exploration efforts. The primary objective of developing the SIR model is to generate synthetic high-resolution imagery from Sentinel-2 data. Acquiring high-resolution satellite images is often prohibitively expensive and limited in availability, posing challenges for widespread application in geoscientific research. This study seeks to address this limitation by training the SIR model on a limited dataset of high-resolution satellite imagery, enabling its application to freely available lower-resolution Sentinel-2 images. This approach aims to enhance spatial resolution and detail while maintaining the accessibility and cost-effectiveness of open-source satellite data.
Preliminary results suggest that SIR methods significantly improve the spatial clarity of satellite images without compromising spectral data, enabling more precise mineral identification. The integration of SIR with hyperspectral analysis could lead to the identification of new mineral deposits, optimize mineral extraction processes, and support environmental protection initiatives by minimizing exploration-related disturbances. This study contributes to the session themes of innovation in Earth observation and its application to geoscience, mining, and environmental management. The results highlight the value of advanced image processing techniques for operationalizing decision-making in resource management and sustainability.
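A standard way to check that a super-resolution model adds information rather than merely enlarging pixels is to compare its output against a high-resolution reference with PSNR, using naive interpolation as the baseline. The sketch below is illustrative only (the study's SIR models and validation protocol are not described in this abstract): it implements nearest-neighbour upsampling and a PSNR metric.

```python
import math

def upsample_nearest(img, factor):
    """Nearest-neighbour upsampling: the naive baseline an SR model should beat."""
    out = []
    for row in img:
        wide = [v for v in row for _ in range(factor)]
        out.extend(list(wide) for _ in range(factor))
    return out

def psnr(ref, test, max_val=1.0):
    """Peak signal-to-noise ratio between two equal-sized images (higher is better)."""
    n = len(ref) * len(ref[0])
    mse = sum((a - b) ** 2
              for r1, r2 in zip(ref, test)
              for a, b in zip(r1, r2)) / n
    return float("inf") if mse == 0 else 10.0 * math.log10(max_val ** 2 / mse)

# A 2x2 reflectance patch upsampled by a factor of 2 to a 4x4 patch.
low = [[0.2, 0.8],
       [0.6, 0.4]]
up = upsample_nearest(low, 2)
```

In an SR evaluation, both the model output and this baseline would be scored with `psnr` against the reference image; spectral fidelity would additionally be checked band by band, as the abstract's spectral-curve comparison does.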
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Unlocking Hidden Treasures from Above by Hyperspectral Imaging across Scales – Impact of Increased Spatial Resolution on Mineral Mapping Accuracy –

Authors: Thomas Bahr, Dr. Friederike Koerting, Dennis Adamek, Dr. Daniel Schläpfer
Affiliations: NV5 Geospatial Solutions GmbH, Norsk Elektro Optikk AS, ReSe Applications LLC
After the launch of multiple spaceborne hyperspectral instruments, such as the European EnMAP and PRISMA missions, which provide free spaceborne data for a range of applications, the quality and usability of the acquired spaceborne hypercubes can be compared against existing airborne or drone-based hyperspectral imaging data. This study shows the impact of scale, arising mainly from differences in spatial resolution, on mineral mapping results. Three datasets from hyperspectral sensors, namely the spaceborne EnMAP sensor (30 m spatial resolution), the airborne AVIRIS-NG sensor (2.9 m spatial resolution), and the drone-based HySpex Mjølnir VS-1240 and S-620 VHR sensors (0.1 m spatial resolution), are analysed for mineral surface cover. All datasets were collected over the Cuprite Hills, Nevada, USA. Cuprite is known for its distinct geologic features showing hydrothermal alteration minerals at the surface and has been the subject of extensive spectral imaging campaigns. The site has served, and continues to serve, as a validation site for different instruments, and data acquired over it are frequently used to test and validate processing routines and mapping algorithms. The Cuprite Hills are characterized by extensive hydrothermal alteration and exposed terrain, making them a prime location for studying argillic and advanced argillic alteration zones and dominant minerals such as Kaolinite and Alunite. Several hydrothermal alteration minerals display distinct spectral features in the 0.4-2.5 μm wavelength region that allow their detection and mapping using airborne, spaceborne, and drone-based hyperspectral imagery. In this presentation, new evaluation results for the above hyperspectral data will be presented, based on the scientifically proven ENVI technology.
All datasets represent surface reflectance in cartographic geometry (Level-2A products) and have been processed comparably, allowing the assumption that spatial resolution is the main contributor to differences in the mineral classification products of the surface. While AVIRIS-NG and EnMAP cover approximately the same area over the western Cuprite Hills, the HySpex drone-based dataset covers a smaller subset of that area. Ground truthing, detailed spectral analysis, and endmember spectra from G. Swayze et al. (2014) are used as a reference to validate mapping results. Prior to endmember extraction and classification, the Minimum Noise Fraction (MNF) transformation was applied to map differences in the data variance as a function of the variable surface mineralogy. Increasing differentiation, from mineral alteration types down to the mineral species level, is achieved with increasing spatial resolution. For the extraction and classification of the mineral endmembers, the ENVI spectral hourglass procedure was used. This processing scheme consists of noise suppression and dimensionality reduction using the MNF transformation, determination of endmembers with the Pixel Purity Index method, extraction of the endmember spectra by n-dimensional scatter plotting, and their identification through spectral library comparisons. The image-derived endmember spectra are then used as input to various whole-pixel classification algorithms, such as Spectral Feature Fitting (SFF), and to spectral mixture analysis in the subpixel domain, such as Mixture Tuned Matched Filtering (MTMF). Successful mineral mapping with hyperspectral data is often dependent on the ability to differentiate endmembers from the data. While several mineral endmembers can be identified and are validated by ground truthing, we focus on the detection of the Alunite endmembers as a proxy for advanced argillic alteration zones.
Comparing the classification results of various classifiers (whole-pixel and spectral unmixing) for a suite of in-scene Alunite endmembers shows an increase in detail from EnMAP to AVIRIS-NG and HySpex imagery as a function of their spatial resolution. Nevertheless, at Cuprite, EnMAP data can be used successfully to detect abundances of Alunite minerals with good correspondence to the ground truth-derived mapping. Overall, the large-scale alteration patterns mapped across the three datasets are in accordance with each other, though smaller patterns are lost with decreasing spatial resolution. Accurate detection of alteration minerals such as Alunite from spaceborne hyperspectral imagery such as EnMAP shows the potential of this technology, especially considering that the data are available free of charge over many areas of global interest for mineral exploration. The results show the value of mineral mapping by spaceborne hyperspectral imagery for identifying areas of interest, but also that higher-resolution airborne or drone-based data are necessary for a precise local investigation of mineral patterns.
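The library-comparison step of the spectral hourglass (matching image-derived endmember spectra to reference spectra) is often done with a spectral angle metric; the SFF and MTMF classifiers named above are more involved, so the sketch below uses the simpler spectral angle purely as an illustrative stand-in. The "alunite" and "kaolinite" vectors here are made-up toy spectra, not real library signatures:

```python
import math

def spectral_angle(a, b):
    """Angle (radians) between two reflectance spectra; 0 means identical shape."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    # Clamp to guard against floating-point overshoot outside [-1, 1].
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def best_match(pixel, library):
    """Return the library endmember whose spectrum is angularly closest to the pixel."""
    return min(library, key=lambda name: spectral_angle(pixel, library[name]))

# Toy library spectra over four bands (illustrative values only).
library = {
    "alunite":   [0.55, 0.60, 0.35, 0.50],
    "kaolinite": [0.50, 0.45, 0.55, 0.30],
}
pixel = [0.54, 0.58, 0.36, 0.49]
match = best_match(pixel, library)
```

Because the angle ignores overall brightness, it compares spectral shape, which is why it is a common first-pass identification tool alongside feature-fitting methods.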
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Detailed Geological Mapping of the State of Qatar at Various Mapping Scales by Combining Multi-Spectral Sentinel-2 Imagery with Very High Spatial Resolution Pleiades Imagery

Authors: Charalampos Kontoes, Martha Kokkalidou, Katerina-Argyri Paroni, Nikolaos Stasinos, Stavroula Alatza, Constantinos Loupasakis, Katerina Kavoura, Dimitris Vallianatos, Dorothea Aifantopoulou, Alper Gürbüz, Ökmen Sümer, Ismat Sabri, Yassir Elhassan, Ali Feraish Al-Salem, Ali Anmashhadi, Elalim Abdelbaqi Ahmed, Umi Salmah Abdul Samad
Affiliations: National Observatory of Athens, Institute for Astronomy and Astrophysics, Space Applications and Remote Sensing, Center BEYOND for EO Research and Satellite Remote Sensing, National Technical University of Athens, School of Mining and Metallurgical Engineering, Laboratory of Engineering Geology and Hydrogeology, EDGE in Earth Observation Sciences, Ankara University, Faculty of Engineering, Department of Geological Engineering, Dokuz Eylül University, Faculty of Engineering, Department of Geological Engineering, STS Survey Technologies, Ministry of Municipality
This paper presents a comprehensive approach to surficial geological mapping of the State of Qatar and selected areas at multiple scales, leveraging High-Resolution (HR) Sentinel-2 and Very High-Resolution (VHR) Pleiades satellite imagery through the integration of remote sensing techniques and geospatial technologies, underpinned by ground-truth data from extensive field surveys. Sentinel-2 multispectral imagery (10-meter spatial resolution) enabled broad-scale geological mapping at a 1:100,000 scale, while Pleiades RGB-NIR imagery (0.5-meter spatial resolution) supported detailed mapping in critical areas at scales of 1:50,000 and 1:20,000. Due to the size of individual VHR Pleiades images, mosaicking of the imagery was performed using GDAL to ensure seamless processing. Resampling the imagery to a larger pixel size (2 meters) was necessary for the analysis, while the spectral resolution was preserved in its original bit depth, ensuring no loss of radiometric accuracy. The high spatial resolution of the acquired imagery enabled the identification of five primary surficial geological units, previously documented in the literature: the Rus, Dammam, Dam, and Hofuf formations, along with Quaternary deposits. The higher spatial and spectral resolution of the acquired imagery made possible, for the first time at such precision, an enhanced delineation of the boundaries of the surficial exposures of these geological formations and sediments. This multiscale mapping approach was informed by ground-truth data collected across diverse field locations, which helped capture the range of spectral signatures and validation sample data within each formation. All data were imported into a geospatial database, enhancing data integration and facilitating sharing with end users. This enables coherent integration with GIS software and tools, improving accessibility and interoperability for decision-makers and researchers.
The classification process initially employed a combination of unsupervised clustering and Principal Component Analysis (PCA) to highlight spectral distinctions among formations, providing a first assessment of surface exposures. To enhance the spectral contrast between classes and enable precise formation identification, spectral indices, such as the Geology Index and mineral indices, were calculated, forming an extended feature space. Fieldwork conducted by geological experts facilitated the correlation of remote sensing data with geological structures and formations. These field data were then refined both spectrally and statistically, extracting distinct spectral signatures to support highly accurate supervised classifications. The supervised classification utilized the Maximum Likelihood Classification (MLC) algorithm, selected for its proven ability to differentiate complex geological features. Following classification, GIS-based filtering techniques were applied to refine the results, removing noise and correcting minor misclassifications. This refinement led to high-accuracy final geological maps, validated by geological experts through blind photointerpretation and intensive, targeted field visits. Accuracy assessments were conducted by means of confusion matrices to quantify key metrics such as overall accuracy, producer’s accuracy, user’s accuracy, and the kappa coefficient, demonstrating the reliability of the classification results at each mapping scale, with overall accuracy exceeding 90%. In the areas selected for detailed mapping at the scales of 1:50,000 and 1:20,000, VHR imagery from the Pleiades mission was utilized. The increased spatial resolution offered by Pleiades enabled even more precise boundary delineation, despite some limitations in band availability for spectral index calculation.
The geological maps produced at various scales will be securely hosted within NOA/BEYOND’s ArcGIS Enterprise installation on our premises and can be retrieved through the ArcGIS Enterprise API. This API provides a robust framework for securely accessing, managing, and analysing spatial data through RESTful services. It supports CRUD operations on hosted layers, advanced spatial analysis, and map visualization services with custom symbology. With secure token-based authentication and OAuth 2.0, as well as advanced querying capabilities, the ArcGIS Enterprise API ensures efficient and scalable geospatial solutions tailored to the needs of managing and accessing geological maps. The overall methodology highlights the effectiveness of classifying HR Sentinel-2 images (depicting the whole of Qatar in only 4 scenes) and then using them to guide, equally successfully, the classification of the numerous VHR Pleiades images (> 750 scenes), in order to deliver a detailed geological map over the State of Qatar at various scales. The approach provides a strong foundation for further stratigraphic, sedimentological, structural, and mineral exploration work, facilitating resource management across Qatar and establishing a model for comprehensive geological assessment in similarly complex and arid regions.
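The Maximum Likelihood Classification step described above assigns each pixel to the formation whose Gaussian spectral-signature statistics make the observed reflectances most probable. A minimal per-band (diagonal-covariance) version can be sketched as follows; the class means and variances here are invented for illustration and are not the field-derived signatures used in the study:

```python
import math

def log_likelihood(pixel, mean, var):
    """Log of a diagonal-covariance Gaussian density evaluated at the pixel."""
    return sum(-0.5 * math.log(2 * math.pi * v) - (x - m) ** 2 / (2 * v)
               for x, m, v in zip(pixel, mean, var))

def mlc(pixel, classes):
    """Maximum likelihood classification: pick the class maximising the likelihood."""
    return max(classes, key=lambda c: log_likelihood(pixel, *classes[c]))

# Toy two-band class statistics: (per-band means, per-band variances).
# The names reuse two of the mapped units, but the numbers are illustrative only.
classes = {
    "Dammam":     ([0.30, 0.45], [0.010, 0.010]),
    "Quaternary": ([0.55, 0.20], [0.010, 0.010]),
}
```

In practice the means and covariances come from the field-collected training signatures, the feature vector includes the spectral-index bands of the extended feature space, and a full covariance matrix (not just per-band variances) is used.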
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: B.01.02 - POSTER - Earth Observation accelerating Impact in International Development Assistance and Finance

In this session, attendees will delve into an impact-oriented approach to accelerating the use of Earth Observation (EO) in support of international development assistance, including integration in financing schemes. Presenters will provide in-depth insights into real-world application use cases across multiple thematic domains, implemented in developing countries in coordination with development and climate finance partner institutions. The session will prioritise examples showcasing the tangible impact on end-users in developing countries and the successful uptake of EO products and services by their counterparts. Counterparts here can be national governments or International Financial Institutions (IFIs), such as multilateral development banks (World Bank, ADB, IDB, EBRD) and specialised finance institutions (e.g. IFAD), as well as Financial Intermediary Funds (FIFs), most specifically the large global climate and environment funds (GCF, GEF, CIF, Adaptation Fund). Attendees can expect to gain valuable insights into how the process of streamlining EO in development efforts is (1) opening new market and operational roll-out opportunities for the EO industry, and (2) translating into impactful change on the ground and driving sustainable development outcomes worldwide.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Harvesting Earth Observation for Belize: Transforming Financial Strategies for Climate-Resilient Agriculture.

Authors: Koen De Vos, Kasper Bonte, Melissa Brown, Koen Van Rossum, Laurent Tits
Affiliations: VITO, WorldBank
Belize’s agricultural sector is paramount for the national economy, with the export of sugarcane, citrus, and banana products contributing almost 5% of the country’s GDP. Meanwhile, this industry has increasingly been impacted by droughts in key production regions, and is expected to be impacted even more severely because of climate change. To address these challenges, the Government of Belize and the Ministry of Agriculture, Food Security and Enterprise (MAFSE) launched the Climate Resilient and Sustainable Agriculture Project (CRESAP), focusing on enhancing productivity and climate resilience. A key component of CRESAP is the expansion of the Belize Agriculture Information Management System (BAIMS) into a platform that supports evidence-based decision-making for farmers, policymakers, and financing institutions. ESA’s Global Development Assistance (GDA) programme focuses on targeted Agile EO Information Development applied to thematic priority sectors, such as agriculture. It has been instrumental in supporting CRESAP by delivering Earth Observation (EO) services tailored to Belize’s needs, which are well suited for integration into BAIMS. Through collaborations with the World Bank, local experts, and industry representatives, we have developed a suite of EO-based products that support climate risk assessments, sustainability monitoring, and strategic financial planning in a data-driven manner. The integration of these products into BAIMS particularly simplifies the workflows of financial institutions supporting climate-smart agriculture by bundling all relevant information in one central, accessible place, thereby allowing these institutions to integrate the use of EO into their workflows. Tailored to the needs of the export-heavy industry, a crop mapper was developed for sugarcane, citrus, and banana production areas at 10 m resolution in leading cash-crop production regions.
For this, we used a combination of high-resolution Sentinel-1 SAR and Sentinel-2 optical imagery and fine-tuned existing crop type classification algorithms (e.g., WorldCereal) to the Belizean context. By analyzing multiannual composites from 2020 to 2022, we achieved high classification accuracies, and thereby highly precise maps, of major production zones. Such information can directly feed into the monitoring, reporting, and verification processes essential for institutions providing crop-specific loans and micro-grants. Precipitation variability analysis revealed a sharp divide between northern and southern regions from 2015 onward in terms of meteorological drought occurrence. Southern regions (e.g., Stann Creek, Toledo), which are dominated by irrigated citrus and banana groves, have experienced alternations of wetter and drier years over the last decade. Northern regions (e.g., Orange Walk), which are dominated by rainfed sugarcane fields, have experienced multiple consecutive drier-than-average years, heavily impacting the stability of sugarcane production. By combining our sugarcane mapping product with the existing Vegetation Condition Index (VCI) from FAO’s Agricultural Stress Index System (ASIS), we identified sugarcane production hotspots that have experienced substantial agricultural drought impacts, and where investments in irrigation infrastructure or other drought-resilient adaptations are most urgently needed. To comply with the sustainability component of CRESAP, we produced a deforestation map that reveals substantial land cover changes (e.g., the Orange Walk district has lost 20% of its forest cover since 2001). This deforestation can largely be attributed to the expansion of grasslands for livestock. These insights are critical for institutions that assist farmers in allocating resources effectively, while also ensuring that efforts to increase productivity in the livestock sector align with environmental conservation goals.
This work demonstrates how the integration of an EO-portfolio into an existing platform can potentially transform climate finance strategies, enable evidence-based investment decisions, and allow for more transparent monitoring, reporting, and verification practices. This collaboration sets a precedent for using advanced EO technologies to secure a climate-resilient future for Belize’s agricultural sector, aligning with international climate finance goals and environmental conservation policies.
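The Vegetation Condition Index used above to flag drought-stressed sugarcane areas expresses the current NDVI as a position between its historical minimum and maximum for the same pixel and period, on a 0-100 scale. The formula is standard; the numbers below are illustrative:

```python
def vci(ndvi, ndvi_min, ndvi_max):
    """Vegetation Condition Index in percent.

    0 = historical worst vegetation condition for this pixel/period,
    100 = historical best.
    """
    return 100.0 * (ndvi - ndvi_min) / (ndvi_max - ndvi_min)

# A pixel whose current NDVI sits midway between its historical extremes scores 50.
midway = vci(0.5, ndvi_min=0.2, ndvi_max=0.8)
```

Operational systems then threshold low VCI values to flag agricultural drought (the exact cut-off varies by system), which is how the drought hotspots were intersected with the sugarcane map.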
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: How Consistent Are Existing Earth Observation-Based Poverty Prediction Models in Sub-Saharan Africa?

Authors: Reason Mlambo, Dr Sohan Seth, Dr Ian McCallum, Dr Vikki Houlden, Dr Gary Watmough
Affiliations: School of Geosciences, University of Edinburgh, School of Informatics, University of Edinburgh, International Institute for Applied Systems Analysis, School of Geography, University of Leeds
In the last few years, there has been a significant proliferation of machine learning (ML) models that leverage Earth Observation (EO) data for predicting poverty and socioeconomic wellbeing, largely due to the United Nations' advocacy for a ‘data revolution’ to aid the implementation and monitoring of the Sustainable Development Goals (SDGs). This trend is also driven by the limitations of traditional data sources like censuses and household surveys, which are costly, infrequent, and lack the necessary detail for effective policy making and targeted aid. In the past five years, significant progress has been made in developing EO-based ML models for predicting poverty, with ongoing refinements involving variations in statistical methodologies, algorithm complexity, and a broader array of EO covariates. However, despite these models utilising the same Demographic and Health Survey (DHS) data for training – albeit in different forms (either harmonised or survey-specific) – they exhibit varied performances and accuracies across different regions. With the 2030 SDG timeline nearing its end and an increased interest in EO data for poverty mapping, it is critical to evaluate the consistency of these models, particularly across sub-Saharan Africa (SSA), which not only suffers from a substantial lack of data but also shoulders the heaviest burden of poverty and socioeconomic inequality. This study provides a quantitative evaluation and comparison of four recent EO-based poverty prediction ML models, by Yeh et al. (2020), Chi et al. (2021), Lee and Braithwaite (2022), and McCallum et al. (2022), across 22 sub-Saharan African countries to assess their consistency in estimating poverty trends at the second administrative unit level. We found that the four models achieved unanimous agreement in less than 10% of the combined administrative units across all countries. In approximately 50% of the units there was agreement between two or three models.
The remaining units saw either absolute disagreement or agreement between different pairs of models on varying quintile values. In the pairwise comparisons we observed a consistently positive correlation in the quintile ranks determined by the four models across both rural and urban administrative units in all countries combined, with no significant differences in the Spearman's correlation coefficients among the different model pairs. However, a visual analysis of the poverty maps at the second administrative unit level by the four models highlighted significant variation in spatial patterns of wealth quintiles across the 22 countries. This discrepancy was further confirmed by the overall spatial agreement scores which were relatively lower than the correlation coefficients across all model pairs. While the models of McCallum and Lee showed the highest proportion of spatial agreement, and those of Yeh and Lee the lowest, no pair of models consistently achieved high agreement scores throughout the 22 countries. Moreover, comparisons with the latest DHS wealth quintiles in each country showed that no single model consistently ranked in the top or bottom five for spatial agreement across all countries. Additionally, while no country consistently showed high or low spatial agreement scores across all model comparisons, Mozambique, Uganda, and Rwanda frequently ranked in the top five for spatial agreement in at least half of the pairwise assessments for both unstratified and rural units. Conversely, Sierra Leone, Benin, and Mali often appeared in the bottom five in these assessments. Although the four models show varying degrees of alignment in their poverty assessments across different countries, it is apparent that no single model or pair of models consistently achieves spatial agreement. This variance underscores the complexities of using 'global' models to predict poverty in diverse geographic landscapes without taking local contexts into account. 
These models typically assume a uniform relationship between assets and wealth across varied social contexts, but this assumption often conflicts with empirical evidence. Additionally, the black-box nature of most of these models prohibits a deep understanding of the factors driving poverty, as they rely heavily on unexplained features with obscure links to the underlying drivers of poverty. The observed variations make it clear that these models need further refinement. We hope our assessment will prove useful in guiding future enhancements in this field.
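The pairwise consistency checks described above combine a rank correlation over administrative units with a direct agreement rate on assigned wealth quintiles. A minimal version of both measures, using toy unit-level scores rather than the study's model outputs, could look like this (the no-ties Spearman formula is assumed for simplicity):

```python
def rank(xs):
    """1-based ranks of a sequence, assuming no ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for pos, i in enumerate(order, start=1):
        r[i] = pos
    return r

def spearman(x, y):
    """Spearman's rho via the classic no-ties formula 1 - 6*sum(d^2)/(n*(n^2-1))."""
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rank(x), rank(y)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def quintiles(xs):
    """Assign each unit a wealth quintile 1-5 by rank (len(xs) divisible by 5)."""
    per = len(xs) // 5
    return [(r - 1) // per + 1 for r in rank(xs)]

def agreement(q1, q2):
    """Fraction of units on which two models assign the same quintile."""
    return sum(a == b for a, b in zip(q1, q2)) / len(q1)
```

The study's finding that correlation coefficients exceed spatial agreement scores is exactly what these two measures can expose: two models can rank units similarly (high `spearman`) while still placing many units in different quintiles (lower `agreement`).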
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Earth Observation-Driven Parametric Flood Insurance for Enhancing Climate Resilience

Authors: Paolo Tamagnone, Chloe Campo, Guy Schumann, Alice Castro, Maria Mateo Iborra, Konrad Jarocki, Dipankar Munshi
Affiliations: Research and Education Department, RSS-Hydro, School of Science (Geospatial Science), Royal Melbourne Institute of Technology (RMIT) University, Ibisa Network
Extreme weather events exacerbated by climate change, particularly floods, together with unrestrained urbanisation, pose significant risks to communities, economies, and human lives worldwide. Fluvial floods caused by river overflows, coastal floods caused by storms or high tides, and pluvial floods exacerbated by overburdened drainage systems are some of the ways in which floods can cause significant material damage to infrastructure, supply chains, and operations. Traditional insurance models often struggle to adequately address these increasing risks, particularly in regions with limited historical data and complex hydrological systems. This research explores the potential of Earth Observation (EO) data to revolutionize flood insurance by enabling the development of innovative parametric insurance products. The proposed solution aims to leverage EO data throughout the entire insurance product lifecycle, from design and development to operational execution, and to exploit advanced EO techniques to improve flood detection and quantification. By integrating satellite imagery and geospatial datasets, the proposed methodology aims to overcome the limitations of traditional stream gauge and inundation model-based methods and provide more accurate and timely information on flood impact. Stream gauges are limited by their inability to measure extreme discharge, by funding constraints, and by international cross-border data-sharing restrictions, while accurate flood models require high-resolution topography and are difficult to implement and maintain on a large scale due to costs. Unlike traditional methods, the proposed EO-based solution is designed to be highly scalable and transferable to other regions. This feature is particularly valuable for expanding insurance coverage globally. Moreover, EO data can supplement or reduce reliance on ground-based measurements and modelled scenarios and enhance spatial coverage, reducing costs and improving data quality.
Specifically, the project will focus on advanced flood mapping, spatio-temporal analysis, frequency quantification, and data-driven trigger definition. Utilising state-of-the-art flood mapping algorithms, the project will accurately map flood features, such as extent and depth, from historical and near real-time EO data. These algorithms aim to effectively combine multiple data sources, such as multi-sensor imagery, land cover, topography and population/asset censuses, to improve the accuracy of flood delineation and impact assessment. Time-series analysis techniques will be employed to identify long-term trends and short-term variations in rainfall and flood patterns. The enhanced understanding of flood dynamics will enable the development of a more accurate trigger definition and a more robust flood insurance product. This framework will lead to the development of a data-driven parametric insurance product, triggering payouts based on predefined EO-derived flood parameters. By leveraging large amounts of reliable EO data, insurers can make more informed decisions about underwriting, pricing, and risk management. Creating a repository of standardised and reliable EO data enables (re)insurers to price the risk, paving the way for insurance product offerings. Additionally, the short latency of EO data will enable insurers to make swift payouts after the floods. The combination of the selected datasets will enable the development of a versatile product that can be tailored to various flooding scenarios, from localized pluvial floods to extensive riverine floods. This versatility is crucial for comprehensive risk assessment and adapting the insurance coverage to diverse geographical contexts.
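To make the trigger concept concrete, the sketch below shows one possible shape for a parametric payout rule based on EO-derived flood parameters; the function, thresholds, and scaling are purely illustrative assumptions, not the product's actual design.

```python
# Hypothetical parametric payout trigger driven by EO-derived flood
# parameters. All thresholds, field names, and the linear scaling are
# illustrative assumptions, not values from the abstract.

def payout_fraction(flood_extent_km2: float, flood_depth_m: float,
                    extent_trigger: float = 50.0,
                    depth_trigger: float = 0.5) -> float:
    """Return the fraction of the insured sum to pay out.

    A payout is triggered only when both the EO-derived flooded area
    and flood depth exceed predefined thresholds; the fraction then
    scales linearly with extent, capped at 100%.
    """
    if flood_extent_km2 < extent_trigger or flood_depth_m < depth_trigger:
        return 0.0
    return min(1.0, flood_extent_km2 / (2 * extent_trigger))

print(payout_fraction(30.0, 0.8))   # below extent trigger -> 0.0
print(payout_fraction(80.0, 0.8))   # triggered -> 0.8
```

Because both the trigger thresholds and the scaling are predefined, a rule of this shape is transparent to the insured party, which is one of the main appeals of parametric products.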
By advancing the frontiers of EO-based flood parametric insurance, this research contributes to the development of innovative evidence-based climate finance strategies by providing a robust and transparent tool that aims to accelerate disaster recovery, promote sustainable development, create more resilient communities, and foster long-term financial stability in the face of increasing flood risks. To further enhance the effectiveness of the proposed approach, future research directions include incorporating additional high-quality data sources, developing advanced data-fusion algorithms, addressing data and meteorological challenges, and collaborating with policymakers, insurers, and communities to co-design and implement effective flood risk management strategies.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Urban Sustainability Index: Leveraging Earth Observation to Benchmark Environmental Performance at the City-Level Worldwide

Authors: Alexander Stepanov, Raghavan Narayanan, Zélie Marçais
Affiliations: European Bank For Reconstruction And Development
Abstract: The presentation will introduce the Urban Sustainability Index (USI), a new city-level index of environmental and economic performance developed by the European Bank for Reconstruction and Development (EBRD) Impact team. Promoting a low-carbon transition stands at the core of the EBRD's strategic priorities, with a focus on sustainable infrastructure investments spanning energy efficiency, transportation, climate resilience, and more. Central to these efforts is the Bank’s commitment to empowering cities within its countries of operation to tackle local environmental challenges and catalyze green investments. In light of the scope and scale of these investments, the EBRD’s Impact team faces the widely shared challenge of capacity constraints, inconsistent methodologies, and subjectivity of local data collection on socio-economic and environmental conditions to guide and monitor impactful investments. Thus, supporting rigorous, cost-effective data collection on local conditions can empower impact investors to better target their interventions, reduce the burden on local stakeholders, and enable comprehensive, comparable performance analyses over time. Similarly to other Multilateral Development Banks (MDBs) active in the urban sustainable infrastructure sector, benchmarking the environmental performance of localities to increase the effectiveness of interventions is particularly relevant in the EBRD’s countries of operations, where cities continue to grapple with challenges such as deteriorating air quality, unchecked urban sprawl, inefficient legacy buildings, inadequate water and waste management, and limited institutional capacities to address these issues effectively. As such, the USI was developed by the EBRD’s Impact team as a composite multi-layer index covering the dimensions of urban environmental asset quality, efficient resource use, climate risks and socioeconomic benefits. Each dimension consists of one or multiple indicators. 
In total, the USI uses 15 primary indicators, of which 11 are city-level and 4 are country-level. These include yearly average concentrations of air pollutants (e.g. PM2.5, NO2, SO2), CO2 emissions, maximum vegetation, frequency of heatwaves, flood risks, average night-time lights intensity, total public transport infrastructure availability, and so on. To achieve global coverage and ensure verifiability and transparency of the scores, the Impact team collected publicly available Earth observations (CAMS, Sentinel-5P, MODIS, etc.) and other geospatial data sources (e.g. OpenStreetMap). The index has been calculated annually from 2015 to 2024 for more than 13,000 localities in 164 countries and is to be revised every year. To ensure a consistent treatment of cities despite national variations in defining a city’s borders, the index utilises the concept of an Urban Centre as proposed by a consortium of international organisations and adopted by the UN Statistical Commission. Here, urban centre boundaries are determined through the observation of built-up spaces, total population size and population density rather than official administrative borders. The construction of the USI followed the steps described in the OECD (2008) guide on constructing composite indicators: (1) Theoretical framework, (2) Selection of variables and data sources, (3) Imputation of missing data, (4) Multivariate analysis, (5) Normalisation, (6) Weighting and aggregation, (7) Robustness and sensitivity checks. Each step involved choices between different possible methods and parameters, which were made in accordance with best practice in the economic literature and considering data availability, purpose, and scope of the intended analyses. Imputation of missing data was performed solely within each city-specific time series, and all variables were log-normalised and winsorised to correct for outliers. Finally, all indicators were aggregated uniformly across all sub-indicators.
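The per-indicator treatment described above (log transform, winsorise outliers, normalise, then aggregate uniformly) can be sketched as follows; the percentile cut-offs, the choice of min-max scaling, and the toy values are illustrative assumptions, since the abstract does not specify exact parameters.

```python
# Illustrative sketch of the USI per-indicator pipeline described in the
# abstract. Percentile cut-offs and min-max normalisation are assumptions.
import math

def winsorise(values, lower_pct=0.05, upper_pct=0.95):
    """Clamp values to the given empirical percentiles to tame outliers."""
    ranked = sorted(values)
    lo = ranked[int(lower_pct * (len(ranked) - 1))]
    hi = ranked[int(upper_pct * (len(ranked) - 1))]
    return [min(max(v, lo), hi) for v in values]

def normalise(values):
    """Min-max scale to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.5 for v in values]

def indicator_scores(raw):
    """Log-transform, winsorise, and normalise one indicator."""
    logged = [math.log1p(v) for v in raw]
    return normalise(winsorise(logged))

def usi(indicators):
    """Uniform (equal-weight) aggregation across indicators,
    each given as a list of per-city raw values."""
    per_indicator = [indicator_scores(ind) for ind in indicators]
    n_cities = len(indicators[0])
    return [sum(ind[c] for ind in per_indicator) / len(per_indicator)
            for c in range(n_cities)]

pm25 = [8.0, 12.0, 35.0, 60.0, 400.0]   # toy city-level values
co2 = [1.0, 2.5, 5.0, 9.0, 20.0]
scores = usi([pm25, co2])
```

Note how the winsorisation clamps the extreme PM2.5 value of 400 before normalisation, so a single outlier city does not compress the scores of all other cities.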
The robustness of the USI was explored via a sensitivity analysis, ensuring the index scores were (a) robust to using alternative weighting schemes that either place more (double) or less (half) weight on any given indicator, and (b) not driven by a single sub-indicator. For this purpose, various alternative specifications of the USI were computed. To assess the similarity of the resulting outcomes with the baseline specifications, both Pearson correlation coefficients and Spearman correlation coefficients were considered. Through this novel index, the Impact team of the EBRD offers a new tool for the Bank’s operating teams, impact investors, policy makers, civil society organisations, and academia to (i) identify challenges faced by cities worldwide in a comparative fashion, (ii) target and expand new interventions in sustainable urban development, and (iii) monitor changes in cities' environmental performance over time. The EBRD currently applies the USI to assess the long-term impact of its investments in municipal infrastructure across multiple countries. The USI can be expanded in the future to capture other critical environmental dimensions, e.g. water assets, waste, and wastewater management. The index can have wide-ranging uses for urban planning, biodiversity designs, and providing snapshots of systemic change or market effects. Keywords: Urban Sustainability, Infrastructure, Earth Observation, Air Quality, Transportation, Biodiversity, Index Benchmarking, Urban Environment, Multilateral Development Bank, Impact.
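The weighting-scheme sensitivity check can be sketched as below: recompute the index with one indicator's weight doubled (or halved) and correlate the result with the baseline. The toy data, the equal-weight baseline, and the tie-free rank implementation are assumptions for illustration.

```python
# Sketch of the weighting-scheme sensitivity check: perturb one indicator's
# weight, then compare the perturbed index to the baseline via Pearson and
# Spearman correlation. All values are toy data.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    # Spearman = Pearson on ranks (assumes no ties in the toy data).
    rank = lambda v: [sorted(v).index(e) for e in v]
    return pearson(rank(x), rank(y))

def index(indicators, weights):
    """Weighted average across indicators for each city."""
    total = sum(weights)
    return [sum(w * ind[c] for w, ind in zip(weights, indicators)) / total
            for c in range(len(indicators[0]))]

inds = [[0.1, 0.4, 0.9], [0.2, 0.5, 0.7], [0.3, 0.6, 0.8]]  # normalised toys
baseline = index(inds, [1, 1, 1])
doubled = index(inds, [2, 1, 1])   # double weight on indicator 0
print(pearson(baseline, doubled), spearman(baseline, doubled))
```

High Pearson correlation shows the score levels are stable under the perturbed weights, while a Spearman coefficient of 1 shows the city ranking is unchanged, which is the property that matters most for benchmarking.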
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: The MAPME Initiative - A Cross-Institutional Community for Reproducible Geospatial Data Analysis

Authors: Darius Andreas Görgen, Dr. Johannes Schielein
Affiliations: MAPME Initiative, University of Münster, KfW
Earth Observation (EO) and geospatial technologies can help us make informed decisions to allocate public funds responsibly, maximizing impact and benefits for our societies and the natural environment. A sustainable uptake of analytical geospatial solutions requires us to base our decisions on accessible and reproducible evidence. To address this, we worked on prototyping open-data projects that illustrate the power and usefulness of EO technologies within development aid projects in the context of the MAps for Planning, Monitoring and Evaluation (MAPME) initiative (www.mapme-initiative.org). It is an open, cross-institutional community of practice focusing on knowledge exchange between organizations in both developing and donor countries. We focus on open standards and software to harness EO data for all phases of the project cycle. A major outcome of this initiative is the R software "mapme.biodiversity" (https://CRAN.R-project.org/package=mapme.biodiversity). This tool streamlines reproducible data analysis within our organizations and beyond by supplying efficient routines to handle a diverse set of geospatial data sources. Because it is free and open source software (FOSS), it serves as a common platform to share knowledge between our members, significantly reducing duplication of effort. We will show that the software framework can be easily deployed on local machines, on-prem servers, or even in cloud-computing environments. The tool is thus a good fit for conducting reproducible analyses for projects with diverse budgets and skillsets. We will showcase how the framework was successfully used by KfW to produce a comprehensive database covering more than 1,000 protected areas currently supported by the German Development Cooperation (both KfW and GIZ). The database allows decision makers easy access to align conservation efforts with conservation policy targets.
We also show how the framework can be used to estimate potential impacts of project activities on local ecosystem integrity as required in disclosure reports. Finally, we will share success stories of transferring the methodological approach to researchers from the Global South to harness the framework within their respective institutions. Our presentation will highlight the importance of developing a cross-institutional community approach to building up EO capacities in development cooperation as well as the importance of focusing on closing the (still considerable) gap between EO data providers and decision makers in their daily work.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Transforming Forest Monitoring for Climate Finance and Carbon Conservation in Coffee Landscapes

Authors: Michelle Kalamandeen, Katja Weyhermüller, Johannes Pirker, Merga Diyessa, Girma Ayele, Heiru Sebrala Ahmed
Affiliations: Unique Land Use Gmbh, Farm Africa Ethiopia, Environment, Forest and Climate Change Commission
Effective climate finance strategies demand high-quality data to tackle the interlinked challenges of climate change mitigation and sustainable development. In Ethiopia, coffee-growing regions face increasing pressures from population growth and agricultural expansion, resulting in forest degradation, biodiversity loss, and elevated carbon emissions from deforestation. Ethiopia has successfully unlocked climate finance opportunities to reduce emissions in other subsectors, but has been lacking a scalable methodology to quantify the impact of forest degradation. This study investigates the integration of artificial intelligence (AI) and Earth observation technologies as a scalable and cost-efficient approach to forest monitoring and carbon management for carbon market schemes such as Architecture for REDD+ Transactions and The REDD+ Environmental Excellence Standard (ART-TREES). Leveraging Sentinel-2 satellite imagery and advanced neural network models, we evaluated forest health through various vegetation indices to monitor forest degradation trends from 2020 to 2023, quantify biomass dynamics, and estimate CO2 emissions and sequestration. Results demonstrate that rejuvenated coffee plots showed stabilization or growth in biomass, reflecting the effectiveness of conservation efforts, while unmanaged plots displayed variable outcomes. Post-2021 recovery of coffee agroforestry plots which followed improved management systems significantly boosted carbon sequestration, reaffirming the pivotal role of agroforestry in climate change mitigation. This research underscores the potential of integrating AI with Earth observation to improve the precision and scalability of forest and agriculture monitoring systems. 
Such advancements are essential for generating actionable insights to inform climate finance strategies, catalyzing targeted interventions and enhancing the resilience of the agroforestry sector, thereby contributing to sustainable development and low-carbon pathways in vulnerable landscapes.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Leveraging EO uptake through GDA FFF

Authors: Andreas Walli, Alexander Kreisel
Affiliations: Geoville
The Fast EO Co-financing Facility (FFF) is a cross-cutting activity in the GDA programme that aims to provide rapid support to International Financial Institutions (IFIs) and Official Development Assistance (ODA) entities, with a focus on leveraging co-financing for additional EO-related capacity building, skill transfer, or geospatial analysis. The FFF does not have a thematic focus but rather provides support on any topic within a pre-defined time and budget framework, on the condition that the recipient of support can demonstrate capacity for alignment. At this stage, multiple IFIs and ODA entities have been engaged, most prominently the World Bank and the Asian Development Bank, but also the European Bank for Reconstruction and Development, the Inter-American Development Bank, Kreditanstalt für Wiederaufbau (KfW Germany), the French Agency for Development (AFD) and the European Investment Bank. In this session, we want to showcase successful IFI support through the FFF that has led to EO uptake and co-financing, helping European industry enter a highly competitive market. These success stories will be accompanied by valuable lessons learned from less successful engagements and support actions, highlighting strategies and early markers in engagements for identifying both the capacity and the willingness to use, and acknowledge the benefit of, EO. Well over 20 requests for support from various IFIs have been evaluated so far, with many more to come. The EBRD is a very recent addition to ESA's IFI cooperation and partnerships, and the FFF paves the way for further collaboration, particularly by supporting their Green Cities network. The World Bank has already demonstrated its willingness to co-finance by providing complementary funds to enlarge the mapping area for seagrass in the Red Sea. With ESA aiming to significantly enlarge the network of cooperating IFIs, the FFF plays a key role in facilitating the first steps through fairly easy entry points.
Continuous communication and clear management of expectations are key to a successful start of the collaboration, ensuring uptake and willingness to contribute.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Evaluating the Environmental Impact of Sand Dams in Semi-Arid Regions Using Multi-Scale Earth Observation Data

Authors: Dr. Andreas Braun, Dr. Martin Sudmanns, Kibet Nimrod Mandela, Bernhard Ebersbach, Christian Khouri, Niklas Lehr
Affiliations: Eberhard Karls University Tübingen, Paris Lodron University of Salzburg
Increasingly severe water scarcity, driven by climate change and exacerbated in semi-arid regions, threatens ecosystems and communities, particularly those reliant on seasonal water sources. Sand dams, a low-cost and scalable solution for water retention, hold promise as an effective intervention to improve water availability, increase vegetation, and support local resilience (Ryan & Elsner, 2016). However, scientific evaluations of sand dams' long-term environmental impacts are limited, leaving their applicability uncertain and limiting their visibility and adoption. This study aims to address this knowledge gap by applying an integrated, multi-scale Earth observation (EO) approach to evaluate the impacts of sand dams on local and regional ecosystems across selected African semi-arid areas. By leveraging various EO methodologies, this research assesses the ecological and hydrological impacts of sand dams through a series of analyses designed to capture the nuanced ways in which sand dams interact with their environments over time. Four main analyses form the backbone of this study, each targeting distinct aspects of environmental change across spatial and temporal scales. To isolate the effects of sand dams, we employ a comparative study design in Makueni County, Kenya, where two comparable river catchments were selected: one with several sand dams along a river, constructed between 2010 and 2015, and one without. Both catchments share similar river characteristics, hydrological regimes, and climate zones, making them suitable controls for assessing environmental changes specifically attributable to sand dam interventions. This design allows us to distinguish between landscape changes influenced by broader climatic or hydrological trends and those directly resulting from sand dam presence and operation.
By systematically comparing test and control catchments, we aim to identify the unique contributions of sand dams to ecosystem resilience, water availability, and landscape dynamics.
Radar-based detection of ground deformation: We use Sentinel-1 radar data spanning from 2014 to 2025 to detect patterns of ground deformation in areas with and without sand dams. Through an SBAS interferometry approach (Casu et al., 2014), we aim to identify subsidence or uplift patterns, examining their relationship to the hydrological changes associated with sand dams. This analysis evaluates whether sand dams induce measurable land deformation patterns, either as gradual subsidence or cyclic deformation corresponding to seasonal water retention. Additionally, this assessment helps to determine the spatial and temporal characteristics of any deformation, offering insights into the broader impacts of sand dams on soil stability and groundwater retention.
Land surface temperature (LST) analysis: Utilizing high-resolution (10m) LST data from the ConstellR mission (Spengler et al., 2024), we examine temperature variations and anomalies across sand dam regions for the year 2020. ConstellR’s thermal infrared imagery provides seasonal snapshots, allowing us to assess temperature changes over 11 acquisitions throughout the year. This analysis focuses on identifying temperature anomalies that may indicate the cooling effect of sand dams and investigating whether these effects are spatially aligned with specific land cover types, such as vegetated areas that benefit from increased moisture retention. We aim to determine whether the presence of sand dams correlates with reduced surface temperatures, particularly during dry seasons. The investigation of spatial patterns in relation to land cover offers insights into how sand dams might moderate local temperature extremes and indirectly support vegetation health by retaining soil moisture.
NDVI time-series analysis for gradual vegetation change: To monitor the gradual impact of sand dams on vegetation dynamics, we use Normalized Difference Vegetation Index (NDVI) data derived from Landsat imagery. Analyzing these data in Google Earth Engine enables the detection of long-term greening or browning trends (Walper et al., 2022) in dam regions, focusing on distinguishing between pre- and post-construction phases of the dams. This time-series analysis assesses whether sand dams contribute to vegetation recovery or expansion, using vegetation health and coverage as indicators of ecological resilience. We aim to attribute observed vegetation trends directly to the presence and maturity of sand dams, especially in areas that demonstrate increased greening over time. This study also examines short-term vegetation responses following significant rainfall events, allowing us to determine how sand dams influence both immediate and enduring vegetation dynamics.
Categorical land cover change analysis via Semantic Data Cubes: We apply semantic EO Data Cubes containing semantically enriched Sentinel-2 and Landsat data to categorize time series of land cover changes over extended periods, capturing shifts in land use and land cover linked to sand dam construction. By classifying and tracking changes in categories such as vegetation, water bodies, and bare soil components, this approach allows us to quantify the broader landscape-scale impacts of sand dams, such as increases in agricultural activity and the establishment of new infrastructure like roads. The semantic EO Data Cube framework facilitates the integration of complementary data, such as digital elevation models, and transferability across regions, enhancing the robustness of the land cover analysis (Sudmanns et al., 2021). This framework allows for automated, scalable assessment and the potential to extend findings across other semi-arid regions.
Moreover, this categorical analysis will enable us to investigate land cover changes based on sand dam construction dates as well as those associated with broader climatic trends, providing a nuanced understanding of sand dams' role in landscape transformation. These four analytical streams work in concert to create a comprehensive picture of sand dams’ environmental impacts, providing valuable insights for ecosystem management and climate adaptation strategies. In addition, our use of diverse EO methodologies underscores the potential for remote sensing technologies to monitor and evaluate the effectiveness of small-scale water retention systems, which are particularly valuable in regions where ground-based observation is challenging or incomplete. This research contributes to the broader understanding of how low-cost interventions can enhance ecological resilience in semi-arid regions. Specifically, it offers a scientific basis for scaling sand dams as a sustainable water management strategy, supporting both humanitarian efforts and local policy initiatives aimed at reducing vulnerability to climate change. By delivering evidence of sand dams’ ecological benefits, this study provides a foundation for further exploration into similar water retention systems and the potential for these interventions to be adapted across diverse environmental and climatic contexts. Our findings will be particularly relevant for policymakers, NGOs, and environmental managers interested in adopting cost-effective and locally impactful solutions to address water scarcity and bolster climate resilience.
Literature
Casu, F., Elefante, S., Imperatore, P., Zinno, I., Manunta, M., De Luca, C., & Lanari, R. (2014). SBAS-DInSAR Parallel Processing for Deformation Time-Series Computation. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 7(8), 3285–3296. https://doi.org/10.1109/JSTARS.2014.2322671
Ryan, C., & Elsner, P. (2016). The potential for sand dams to increase the adaptive capacity of East African drylands to climate change. Regional Environmental Change, 16(7), 2087–2096. https://doi.org/10.1007/s10113-016-0938-y
Spengler, D., Ibrahim, E., Chamberland, N., Pregel Hoderlein, A., Berhin, J., Zhang, T., & Taymans, M. (2024, 11 March). Monitoring land surface temperature from space - constellr HiVE - new perspectives for environmental monitoring. https://doi.org/10.5194/egusphere-egu24-21514
Sudmanns, M., Augustin, H., van der Meer, L., Baraldi, A., & Tiede, D. (2021). The Austrian Semantic EO Data Cube Infrastructure. Remote Sensing, 13(23), 4807. https://doi.org/10.3390/rs13234807
Walper, C., Braun, A., & Hochschild, V. (2022). A Satellite-Based Framework to Investigate the Impact of Sand Dams on Landscapes in Semi-arid Regions. In V. Naddeo, K.-H. Choo, & M. Ksibi (Eds.), Water-Energy-Nexus in the Ecological Transition: Natural-Based Solutions, Advanced Technologies and Best Practices for Environmental Sustainability (pp. 287–290). Springer. https://doi.org/10.1007/978-3-031-00808-5_66
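As an illustration of the NDVI time-series analysis described in this abstract, a per-pixel greening/browning trend reduces to fitting a slope through annual NDVI values. The sketch below uses plain ordinary least squares on toy values; both the numbers and the choice of OLS (rather than the study's actual Google Earth Engine workflow) are illustrative assumptions.

```python
# Toy per-pixel greening/browning trend via ordinary least squares.
# Values are invented for illustration; a real workflow would run this
# per pixel over Landsat-derived annual NDVI composites.

def ndvi_trend(years, ndvi):
    """Ordinary least-squares slope of NDVI against time.
    Positive slope -> greening, negative slope -> browning."""
    n = len(years)
    mt, mv = sum(years) / n, sum(ndvi) / n
    num = sum((t - mt) * (v - mv) for t, v in zip(years, ndvi))
    den = sum((t - mt) ** 2 for t in years)
    return num / den

# Toy annual NDVI composites for a pixel downstream of a dam
years = [2013, 2014, 2015, 2016, 2017, 2018, 2019]
ndvi = [0.21, 0.20, 0.22, 0.25, 0.27, 0.30, 0.31]
slope = ndvi_trend(years, ndvi)
print(f"{slope:+.4f} NDVI/yr")  # positive slope indicates greening
```

Comparing the slope distributions of the test and control catchments, and of pre- versus post-construction windows, is what lets a trend like this be attributed to the dams rather than to regional rainfall variability.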
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Building a Worldwide Coastal Monitoring Capability: EO-derived shoreline data for international collaboration against coastal erosion.

Authors: Anne-Laure Beck, Martin Jones, Professor Ivan Haigh, Dr Salvatore
Affiliations: Argans Ltd, University of Southampton, isardSAT, ACRI-ST
The increasing impacts of coastal erosion driven by climate change and human activities present a critical challenge for sustainable coastal management. The global aspects of coastal processes require a collaborative, trans-country effort to understand and mitigate erosion effects. To encourage joined-up coastal management, decision and policy makers require a reliable and global capacity for coastline monitoring that is easy to use and provides equitable access to facilitate cross-border collaboration and monitoring. After 5 years of research and development, the ARGANS coastal processing chain delivers a series of instantaneous land/sea boundaries derived from Earth observation data, corrected and projected to a reference tidal level such as Mean Sea Level. Most importantly, these data come at temporal sampling rates which allow the effective analysis of specific erosion or accretion phenomena related to events (at lunar tidal frequencies, and before and after storms or dam releases), not just at yearly or decadal scales. It is a world first. As demonstrated within ESA’s GDA disaster resilience programme in Ghana, high-frequency EO-derived information is essential to uncover the multiple causes of coastal change. Thanks to the numerous images now available per year, it became evident that waves alone have not always been the main cause of erosion; planned protection measures therefore need to be mindful of other factors, which can only be understood through a repetitive, regular view of the coast. The production of a 30-year MSL shoreline for the UK has been integrated into the British Geological Survey (BGS) Coastal modelling environment (CoastalME) to produce numerical simulations supporting multi-hazard analyses under present and future climate change scenarios. However, spatially accurate MSL shorelines rely on additional high-resolution geospatial data, which are not always available.
The UK Space Agency, through its Enabling Technology Programme, has supported the continuation and improvement of the ARGANS coastal processing chain, bringing together coastal and oceanography experts to enable shoreline correction without any support from in-situ data. Combining Synthetic Aperture Radar data with modelled tidal tables, the Global Shoreline (GSL) project investigates the automatic production of EO-derived slopes and modelled tidal levels to replace in-situ measurements in the shoreline processing, allowing worldwide processing without the need for in-situ coastal information. The Coastal Sea Level Integrator (CSLI) has been developed to automatically compute accurate sea level heights at any selected location around the world’s coastline and for any given time, to feed into the GSL processor. Sea level height, at any location or time along the coast, arises as a combination of: (1) astronomical tides; (2) storm surges; and (3) waves, especially setup and runup, superimposed on relative mean sea level. The slope from a SAR waterline generator integrates tidal information from the CSLI to match a SAR-derived land/sea boundary with an elevation, producing a coastal digital elevation model. There is now the opportunity to accurately map complete regions such as West Africa to the same degree of temporal and spatial accuracy as enjoyed in the UK and Europe. Such high-resolution (both temporal and spatial) data provide multi-scale shoreline-change information for predictive models and digital twin infrastructures, allowing policymakers, planners, and local communities to develop and implement evidence-based strategies for mitigating coastal risks and enhancing resilience.
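The sea-level decomposition described above (astronomical tide, storm surge, and wave setup/runup superimposed on relative mean sea level) can be illustrated with a toy composition. The single-constituent tide, the amplitudes, and the function name below are assumptions for illustration, not CSLI internals.

```python
# Toy composition of coastal sea level from the components named in the
# abstract. The idealised single-constituent (M2) tide and all numeric
# values are assumptions; the real CSLI derives components from tidal
# and met-ocean models.
import math

def coastal_sea_level(t_hours, msl=0.0, surge=0.0, wave_setup=0.0,
                      tide_amplitude=1.2, tide_period_h=12.42):
    """Sea level height (m) at time t as a sum of components."""
    tide = tide_amplitude * math.cos(2 * math.pi * t_hours / tide_period_h)
    return msl + tide + surge + wave_setup

# Example: high water combined with a 0.4 m storm surge and 0.2 m wave setup
print(coastal_sea_level(0.0, msl=0.05, surge=0.4, wave_setup=0.2))
```

The point of the composition is that the total level at the moment a SAR waterline is acquired can be estimated for any coastline without a local tide gauge, which is what removes the in-situ dependency.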
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Ol’ Man River - development and growth by decreasing negative impacts

Authors: Kristin Fleischer, Mareyam Belcaid, Elke Kraetzschmar
Affiliations: IABG
The Amazon basin hosts the world’s most biodiverse forest and accounts for about 50% of the remaining rainforest worldwide. Covering about 35% of the South American continent across nine countries, the region holds further superlatives, such as the world’s longest rivers and its largest river system by water volume. Beyond that, its importance as a stabilising factor of the world’s climate is undeniable. But the complex and fragile ecosystem is under stark pressure. Although sparsely populated, with a few larger cities and scattered settlements, the living and working environment is expanding. One of the biggest challenges is deforestation in favour of bioeconomic products and services such as mining, logging, agriculture, and livestock, a result of the ever-increasing global demand for goods. On top of that, the increasing pressure induced by climate change is taking its toll: rainy seasons are shortened, and periods of severe drought accompanied by wildfires are more frequent, to name only a few effects. Despite the ecological richness of the region, local communities often live in poverty and basic needs are unmet. The global development goals should serve as a guiding principle for the development of the region. Connectivity, as well as access to markets and social services, is key for both bioeconomic products and services and the local population. The World Bank is investing to answer one of the main questions: how can access be gained and connecting networks be developed while simultaneously decreasing the negative impacts? The WB team is conducting an analysis of infrastructure gaps by assessing their vulnerability to climate risk, in order to support bioeconomy value chains and greater opportunities for higher incomes, increased productivity, and economic participation for the people of the Amazon region.
Focus areas are:
• the identification of key infrastructure bottlenecks to productivity growth;
• the identification of connectivity and access challenges and potential solutions to improve access to basic services and welfare;
• the design of a sustainable infrastructure development roadmap for transport connectivity, energy access and digital connectivity.
Waterways and related assets such as harbours are a declared alternative to common road networks in the Amazon region. The GDA Transport and Infrastructure team supports the World Bank in their data-driven approach to infrastructure planning by enhancing waterway insights through state-of-the-art analysis of satellite imagery. Only a detailed knowledge of the status quo of the hydrological network and its exposure to seasonal and climate change influences allows the identified connectivity challenges to be tackled in an appropriate manner. In close cooperation with the WB team, the developments focus on the following four topics.
(1) Hydrographic inventory: understanding the accessibility of areas and the navigability of rivers is essential. Open or governmental data lack thematic detail and geometric accuracy; information such as network density, river width, sandbanks, or obstacles is widely missing. The inventory is based on Sentinel-1 and Sentinel-2 satellite image data of the dry season. It follows a two-stage approach: in a first step, a remote sensing analysis of the data is conducted, in which the classification considers various indices and spectral bands of the sensors. To densify the results, improve the detectability of narrow rivers, and close gaps in the network, advanced technologies such as super-resolution and AI are incorporated in a second step.
(2) Water course change: the analysis provides a comparison of the water extent, showing changes between consecutive years from 2019 to 2023.
The analysis of seasonal or drought-induced changes as well as geomorphological variation (e.g. sedimentation) helps to assess their influence on navigability. (3) River flood modelling: helps in assessing flood risks, planning flood management strategies (e.g. for harbours), and understanding the hydraulic behaviour of the river system under different flood conditions and its effects on navigability. (4) Boat detection: determines the number of boats traversing a specific stretch of river, providing valuable insights into the connectivity of access routes. By quantifying boat traffic, key waterways and harbours that serve as critical transportation corridors are identified. Additionally, the size of boats provides information on river navigability. As the Amazon region is vast, it is clear that manual approaches, the use of VHR image data, or even field missions can only be conducted locally and are impossible to apply at larger scale; the costs and manpower would exceed any benefit. EO data from ESA’s Sentinel fleet combined with automatic approaches for information extraction unfold their full potential here. The approach provides an opportunity for the WB team to cover large areas and close the current information gaps on the way to sustainable and targeted infrastructure planning along the hydrological network. The aim is to build on the developments showcased exemplarily in the areas of Colombian Leticia and Brazilian Tabatinga, and to transfer them to other development hot spots that demand sustainable interactions throughout the whole Amazon region.
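As a rough illustration of the index-based classification in step one of the hydrographic inventory, a water mask can be derived from the Normalised Difference Water Index (NDWI), computed from Sentinel-2 green (B03) and near-infrared (B08) reflectance. The band values and the zero threshold below are illustrative assumptions, not the project's actual parameters:

```python
# Minimal sketch: flag likely water pixels with NDWI = (G - NIR) / (G + NIR).
# Water reflects little NIR, so NDWI is positive over open water.

def ndwi(green: float, nir: float) -> float:
    """Normalised Difference Water Index for a single pixel."""
    denom = green + nir
    return 0.0 if denom == 0 else (green - nir) / denom

def water_mask(green_band, nir_band, threshold=0.0):
    """Boolean mask: True where the pixel is likely open water."""
    return [[ndwi(g, n) > threshold for g, n in zip(gr, nr)]
            for gr, nr in zip(green_band, nir_band)]

# Toy 2x2 scene: column 0 is NIR-bright vegetation, column 1 is water.
green = [[0.10, 0.30], [0.08, 0.25]]
nir   = [[0.40, 0.05], [0.35, 0.04]]
mask = water_mask(green, nir)
```

A real workflow would combine several such indices with Sentinel-1 backscatter and a trained classifier rather than a fixed threshold.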
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: EO data facilitates the global solar energy transition through scaling up solution and collaboration

Authors: Fang Fang
Affiliations: NEO BV
The global energy transition has reached a pivotal juncture, with some nations advancing rapidly in adopting renewable energy sources such as solar and wind, while others continue to face significant challenges. For many countries, the lack of reliable, actionable data and insights remains a primary barrier to accelerating the transition. Earth Observation (EO) data presents a transformative solution, enabling stakeholders to overcome these barriers through informed decision-making and data-driven strategies. In collaboration with the World Bank, NEO has developed scalable EO-based solutions that provide comprehensive insights on rooftop solar generation potential for energy transition efforts in developing countries. These solutions have been successfully implemented in 46 cities across 26 countries, delivering tangible impacts for local governments, the private sector, and communities, and facilitating the energy transition movement.

Securing Resources for Renewable Energy Projects: One of the most impactful applications of EO data is in supporting the financing of renewable energy projects. In Lagos, Nigeria, NEO conducted a pilot study assessing the rooftop solar energy potential in the city center. The results revealed significant capacity for solar energy generation, providing the local government with the confidence and data necessary to secure loans from the World Bank. These funds are now being utilized to further integrate solar energy into the city’s energy infrastructure, propelling Lagos toward its renewable energy goals. Similarly, in Sint Maarten, local teams used NEO’s assessment results to identify suitable locations for initiating solar panel installations. The insights enabled stakeholders to prioritize and launch a pilot project on a public building, paving the way for a nationwide solar energy rollout.

Empowering Local Stakeholders: A key element of NEO’s success has been its focus on stakeholder engagement and capacity building. 
Collaborations with local governments and institutions ensure that EO solutions are not only easily replicable across different areas but also aligned with the specific needs and priorities of the regions they serve. Feedback and ground validations from local stakeholders further refine the outputs, enhancing their practical application and supporting the operations of local bank teams and stakeholders. Moreover, all data generated from NEO’s work is made publicly accessible through a user-friendly platform. The platform allows users to view, filter, interact with, and download datasets easily. To date, it has garnered over 40,000 views and 3,000 likes, reflecting widespread appreciation from diverse sectors. Government organizations have used the platform for planning and prioritizing renewable energy projects, while private companies, such as solar panel installers, have leveraged it for market insights. Educational institutions have also benefited, with university students and researchers utilizing the data for further in-depth studies and analyses.

Scalable and Adaptive EO Solutions: NEO’s approach to scaling EO solutions lies in its use of a deep learning (DL) model, which is iteratively refined to improve its predictive capabilities. This master model is trained on diverse satellite imagery and continuously updated using new data from completed projects. Each project feeds back into the model, enhancing its ability to generate high-quality output for any region. For example, in Dominica, where data availability has historically been limited, NEO collaborated with local planning departments and the World Bank to integrate high-quality Digital Surface Models and Digital Terrain Models from local departments into its workflow. By combining EO data with localized datasets, the model was able to deliver precise and context-specific outputs, ensuring greater relevance and utility for stakeholders. 
Capacity Building and Training: To further scale its impact and speed up the facilitation process for local stakeholders, NEO has partnered with the World Bank to deliver training programs aimed at building local capacity in using EO data for solar energy applications. For example, in South Africa, 77 participants enrolled in a training program that introduced them to the fundamentals of remote sensing, satellite imagery processing and tools, and the applications of EO data in energy transition efforts. The program received overwhelmingly positive feedback, with participants expressing enthusiasm for the subject and a strong desire for more in-depth, onsite training. By empowering local stakeholders with the knowledge and skills to work with EO data, these training initiatives are fostering greater adoption of EO-driven solutions and accelerating progress toward renewable energy goals.

Continuous Monitoring for Sustainable Development: As energy transition efforts progress, the demand for continuous monitoring services is growing. EO data offers an effective solution for tracking development activities, such as residential construction, solar panel installations, and roof quality assessments. For instance, NEO has often been approached by bank teams and local stakeholders asking about using time-series imagery to provide insightful updates on the status and progress of renewable energy projects. This capability ensures that stakeholders remain informed about ongoing developments, enabling timely interventions and informed decision-making. By bridging information gaps, EO-driven monitoring services support the efficient and sustainable rollout of renewable energy initiatives.

Driving Impactful Change: The use of EO data in energy transition projects has demonstrated clear benefits, not only in accelerating renewable energy adoption but also in creating opportunities for the EO industry. 
By streamlining EO applications for international development, NEO and its partners are jointly driving sustainable development outcomes worldwide. Through a combination of scalable solutions, stakeholder engagement, capacity building, and continuous monitoring, NEO’s work exemplifies how EO data can be harnessed to address real-world challenges, especially in developing areas. By providing actionable insights and fostering collaboration, these efforts are paving the way for impactful change on the ground, ensuring that no country is left behind in the global energy transition.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Supporting fragility analyses with generative AI: the GEN4GEO approach to geospatial data exploration in natural language

Authors: Marcello Cinque, Chiara Francalanci, Paolo Giacomazzi, Uli Hab, Paolo Ravanelli, Stefano Rosiello, Michela Corvino
Affiliations: Univ. Of Naples Federico II, Uli Hab, Politecnico di Milano, Cherrydata srl, Critiware, ESA
The goal of the GEN4GEO project is to design a system that uses generative artificial intelligence (gen AI) to enable the exploration and visualisation of geospatial data through natural-language interaction (either text or speech) between users and the system. The system will be designed with a focus on the fragility of countries or other geographical areas of interest, including environmental, societal, economic, and political implications. We believe that understanding the implications of these phenomena could be greatly helped by enabling domain experts to perform an easier, direct, and interactive exploration of large multi-source geospatial datasets and to self-define their own high-level indicators (e.g. fragility) based on this direct uptake of the practical implications of natural phenomena in their domain of interest. Data exploration is an approach to data analysis aimed at extracting insights incrementally, starting from no or very limited knowledge of the available data. In principle, data exploration should be an agile, incremental, and creative process, but with current dashboards end users need considerable technical knowledge to explore data without technical support. As a result, data exploration is a team effort and cannot be done by end users alone. Overall, it is slow, difficult, and often unimaginative, and dashboards are never used to their full potential. Overcoming these limitations would significantly reduce the barriers to the exploitation of geospatial data in a variety of domains, generating market opportunities for many EO applications. From this perspective, our use case has practical implications and potential applications in a broad cross-section of industries. The objective of GEN4GEO is to address the limitations of current technology by leveraging the ability of generative AI to enable data exploration with an interaction based on natural language. 
This conversational interaction is supported by so-called “foundation models”, that is, generative neural networks trained on very large datasets to answer a broad range of user questions. This generality makes foundation models suitable for data exploration, as their broad knowledge makes them flexible and rather context-independent. However, they have a very limited ability to handle data (especially quantitative data) and run analytics, both in terms of bounded data size and in the low relevance and accuracy of results. The main innovation of GEN4GEO is to design a data exploration engine that exploits generative AI for natural-language interaction but does not rely on foundation models to run the analytics. Intuitively, our idea is to ask the generative AI to provide the software code that answers the user’s question and then run it with the GEN4GEO engine to obtain results. For example, if the user asks for the average level of rainfall in the Shan region in Myanmar, we ask the generative AI to provide the code of the corresponding SQL query and then use the GEN4GEO engine to run the query on the weather database to obtain the actual result. The user question will also be used to ask the generative AI to select the best visualisation of the result in the system dashboard (e.g. a density map rather than a plot). This removes the limitations on data size, improves the accuracy of results, and broadens the range of analytics that can be executed and visualised by the system. In turn, this reduces the knowledge barriers of data exploration, enabling non-technical users to understand new data with little or no help from technical users. It can make data exploration a faster and more creative process. 
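The generate-SQL-then-execute loop described above can be sketched in a few lines. This is only an illustration of the idea, not the project's implementation: the generative-AI call is stubbed with a canned response, and the `weather` table and its columns are invented for the example:

```python
# Sketch of the GEN4GEO pattern: the language model supplies SQL,
# the engine executes it -- the model never touches the data itself.
import sqlite3

def llm_to_sql(question: str) -> str:
    # Stand-in for a generative-AI call translating the question to SQL.
    canned = {
        "average rainfall in Shan":
            "SELECT AVG(rainfall_mm) FROM weather WHERE region = 'Shan'",
    }
    return canned[question]

def run_query(question: str, conn) -> float:
    sql = llm_to_sql(question)
    # Guardrail: only execute read-only queries produced by the model.
    assert sql.lstrip().upper().startswith("SELECT")
    return conn.execute(sql).fetchone()[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE weather (region TEXT, rainfall_mm REAL)")
conn.executemany("INSERT INTO weather VALUES (?, ?)",
                 [("Shan", 120.0), ("Shan", 80.0), ("Kachin", 200.0)])
avg_rain = run_query("average rainfall in Shan", conn)
```

Executing the generated code in a controlled engine, rather than asking the model for the number itself, is what removes the data-size and accuracy limitations the abstract describes.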
To verify this assumption, we performed a preliminary set of cross-industry interviews (two system integrators, one business intelligence expert, one company operating in the maritime transportation industry, one pharmaceutical company, one agency specialised in geo-marketing, one data provider). Most interviews involved high-level decision makers and managers. We noted a general positive interest in the idea. However, we received a recurring comment: running queries on a dataset in natural language is not enough; for GEN4GEO to represent a truly innovative application, queries should be accompanied by the ability to self-define high-level concepts (such as “fragility”) as a function of available data and then use these high-level concepts in subsequent conversational interactions with the dashboard. This observation reinforces the idea that language models are not the solution per se, but a tool that should be embedded in a system with data aggregation and analytical capabilities, consistent with GEN4GEO’s idea of data exploration. The exploration of EO and non-EO fragility-related data represents the ideal testbed for GEN4GEO. Exploring this type of data from a geographical standpoint can highlight areas and time patterns where the tangible effects of fragility are or will be most impactful. The literature on fragility explains how impactful fragility-related phenomena are related to the concentration of critical events in certain areas and their different change patterns over time. This geographical understanding of data has practical implications from an economic and political point of view, with cross-industry applications. An easy and interactive exploration of data can favour broader adoption and usage. The project kicked off in October 2024, with a first release of the dashboard due in April 2025. 
Two design thinking sessions will be held in December 2024 and January 2025 to design the dashboard, leading to a design of the main dashboard functionalities tailored to the needs of a broad cross-section of potential users. The dashboard will be demonstrated with a fragility dataset comprising relevant multi-dimensional indicators of fragility based on EO and complementary non-EO data for four developing countries over the 2018-2024 period. The team will analyse the role played by the dashboard in highlighting important fragility patterns and their environmental, economic, social, and security impact. Acknowledgements – This research activity is carried out under the programme Open Call for Proposals for EO Innovation, Contract n. 4000145918/24/I-DT-bgh, and funded by the European Space Agency.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Graph-Based Machine Learning Models and Earth Observation Data for Social Good

Authors: Seán Ó Héir, Beāte Desmitniece, Dr Gary Watmough, Dr Sohan Seth
Affiliations: University Of Edinburgh
Introduction: Effective socioeconomic measurement is crucial for informed decision-making across areas such as health programming [1], poverty mitigation [2], and urban planning [3]. In low- and middle-income countries (LMICs), measuring socio-economic and demographic changes can be challenging due to outdated or insufficiently detailed data. Traditional survey methods, such as censuses, are costly, requiring significant time, money, and personnel, and therefore have low temporal resolution. This cost can be a barrier for LMICs, leading to less frequent surveys [4], sometimes with gaps of more than 15 years [5], which fail to reflect current realities and obscure socio-economic disparities [6]. Household surveys, such as the Demographic and Health Survey (DHS), the Multiple Indicator Cluster Survey (MICS), and those conducted by country offices for a variety of purposes (income, agricultural) provide measurements during the intercensal period that can be invaluable in assessing changes in socio-economic indicators. The use of household survey data alongside Earth Observation (EO) offers opportunities for higher-frequency estimation at fine spatial resolution for a variety of socioeconomic outcomes such as poverty [7], population [8], and health risks [6]. Nonetheless, these surveys are usually sparse both spatially and temporally, requiring appropriate spatio-temporal computational tools to process them effectively. Graph-based ML: Machine learning (ML) has been increasingly used alongside EO to monitor socio-economic changes, e.g., in population estimation [9][10][8], poverty mapping [11], supporting COVID-19 responses in slum communities [12], and monitoring urban transformation processes [6]. One of the challenges of existing ML-based methods is the incorporation of spatial context, i.e., information from the surrounding areas that can help us understand or predict certain characteristics within the area of interest. 
Several studies have illustrated how various factors such as the increase in manufacturing and service amenities [13], growth in the number of well-being facilities and housing density [14], and changes in land development [15] cause positive or negative population density changes in the areas surrounding the examined region. Graphs offer a natural representation of geographical data: by representing geographic units as nodes and spatial relationships as edges that connect the nodes, graph-based ML models, e.g., Graph Neural Networks (GNNs), can capture complex spatial dependencies, both short- and long-range, and aggregate information from neighbouring regions, enabling more accurate predictions of socioeconomic dynamics. GNNs allow capturing non-linear relationships between covariates and outcomes [16], using explainability tools (such as GNNExplainer [17]) to make decisions more transparent, integrating multiple data sources effectively, accumulating information at multiple resolutions [18] (e.g., admin levels), combining spatial and temporal information seamlessly [19], and using multiple types of relationships to define connections [20] (e.g., neighbouring geographical units or units connected by a certain road). These properties make GNNs well-suited to geography-focused datasets, and GNNs have been applied in various studies such as predicting the local culture of neighbourhoods [21], spatio-temporal land cover mapping [22], and road surface extraction from satellite imagery [23]; they also excel at relational learning, allowing the modelling of intricate interconnections between different regions [24]. Example: We explore the use of graph-based models in the context of social good, e.g., estimating population density for monitoring sustainable development goals in several sub-Saharan African countries using EO data. Graphs are constructed with nodes representing administrative units, and edges based on geographical adjacency and transportation linkage. 
Node attributes include geospatial features from Sentinel-2 land use data, Landsat data, nighttime light levels, building footprints, and road density from OpenStreetMap data. We assess the performance of our graph-based approach by comparing against baseline models; evaluating its ability to generalise to geographically distant areas by training/testing on province splits; determining geospatial feature importance by employing permutation feature importance; and quantifying prediction uncertainty. Conclusion: The use of graph-based machine learning models, particularly GNNs, offers significant advancements in understanding and predicting socioeconomic dynamics in LMICs. By leveraging high-resolution EO data and employing spatial relationships between admin levels, these models can potentially enhance the accuracy of socioeconomic measurements. Our exploration of population density estimation in sub-Saharan Africa, using diverse geospatial datasets, demonstrates the potential of GNNs to include spatial context in socioeconomic monitoring from space. Additionally, this approach provides valuable insights through explainability tools, paving the way for more informed decision-making in areas such as health, urban planning, and poverty mitigation. Although we have focused on the case of population estimation, the adaptability of a GNN approach suggests its applicability across various socioeconomic indicators, offering a flexible, data-driven tool for improved policy and planning in data-scarce environments. References: [1] Saman Khalatbari-Soltani et al. “Importance of collecting data on socioeconomic determinants from the early stage of the COVID-19 outbreak onwards”. In: J Epidemiol Community Health 74.8 (2020), pp. 620–623. [2] Imran Sharif Chaudhry, Shahnawaz Malik, et al. “The Impact of Socioeconomic and Demographic Variables on Poverty: A Village Study.” In: Lahore Journal of Economics 14.1 (2009). [3] Devis Tuia et al. 
“Socio-economic data analysis with scan statistics and self-organizing maps”. In: Computational Science and Its Applications–ICCSA 2008: International Conference, Perugia, Italy, June 30–July 3, 2008, Proceedings, Part I 8. Springer. 2008, pp. 52–64. [4] Deborah L Balk et al. “Determining global population distribution: methods, applications and data”. In: Advances in parasitology 62 (2006), pp. 119–156. [5] NA Wardrop et al. “Spatially disaggregated population estimates in the absence of national population and housing census data”. In: Proceedings of the National Academy of Sciences 115.14 (2018), pp. 3529–3537. [6] Paloma Merodio Gómez et al. “Earth observations and statistics: Unlocking sociodemographic knowledge through the power of satellite images”. In: Sustainability 13.22 (2021), p. 12640. [7] Gary Watmough and Charlotte LJ Marcinko. “EO for Poverty: Developing Metrics to Support Decision Making Using Earth Observation”. In: Comprehensive Remote Sensing: Volume 9 Remote Sensing Applications. Elsevier, 2024, pp. 1–22. [8] Isaac Neal et al. “Census-independent population estimation using representation learning”. In: Scientific Reports 12.1 (2022), p. 5185. [9] Caleb Robinson, Fred Hohman, and Bistra Dilkina. “A deep learning approach for population estimation from satellite imagery”. In: Proceedings of the 1st ACM SIGSPATIAL Workshop on Geospatial Humanities. 2017, pp. 47–54. [10] Wenjie Hu et al. “Mapping Missing Population in Rural India”. In: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. ACM. Jan. 2019. [11] Jessica E Steele et al. “Mapping poverty using mobile phone and satellite data”. In: Journal of The Royal Society Interface 14.127 (2017), p. 20160690. [12] Patricia Lustosa Brito et al. “The spatial dimension of COVID-19: The potential of earth observation data in support of slum communities with evidence from Brazil”. In: ISPRS International Journal of Geo-Information 9.9 (2020), p. 557. 
[13] Diego Firmino Costa da Silva, J Paul Elhorst, and Raul da Mota Silveira Neto. “Urban and rural population growth in a spatial panel of municipalities”. In: Regional Studies 51.6 (2017), pp. 894–908. [14] Yisheng Peng et al. “The relationship between urban population density distribution and land use in Guangzhou, China: A spatial spillover perspective”. In: International Journal of Environmental Research and Public Health 18.22 (2021), p. 12160. [15] Qingmeng Tong and Feng Qiu. “Population growth and land development: Investigating the bi-directional interactions”. In: Ecological Economics 169 (2020), p. 106505. [16] Rongzhe Wei et al. “Understanding non-linearity in graph neural networks from the bayesian-inference perspective”. In: Advances in Neural Information Processing Systems 35 (2022), pp. 34024–34038. [17] Zhitao Ying et al. “Gnnexplainer: Generating explanations for graph neural networks”. In: Advances in neural information processing systems 32 (2019). [18] Luca Pasa, Nicolò Navarin, and Alessandro Sperduti. “Multiresolution reservoir graph neural network”. In: IEEE Transactions on Neural Networks and Learning Systems 33.6 (2021), pp. 2642–2653. [19] Truong Son Hy et al. “Temporal multiresolution graph neural networks for epidemic prediction”. In: Workshop on Healthcare AI and COVID-19. PMLR. 2022, pp. 21–32. [20] Guohao Li et al. “Deepergcn: All you need to train deeper gcns”. In: arXiv preprint arXiv:2006.07739 (2020). [21] Thiago H Silva and Daniel Silver. “Using graph neural networks to predict local culture”. In: Environment and Planning B: Urban Analytics and City Science (2024), p. 23998083241262053. [22] Domen Kavran et al. “Graph neural network-based method of spatiotemporal land cover mapping using satellite imagery”. In: Sensors 23.14 (2023), p. 6648. [23] Jingjing Yan, Shunping Ji, and Yao Wei. “A combination of convolutional and graph neural networks for regularized road surface extraction”. 
In: IEEE transactions on geoscience and remote sensing 60 (2022), pp. 1–13. [24] Luana Ruiz, Fernando Gama, and Alejandro Ribeiro. “Graph neural networks: Architectures, stability, and transferability”. In: Proceedings of the IEEE 109.5 (2021), pp. 660–682.
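As an illustration of the neighbour-aggregation idea at the heart of the graph-based models discussed in this abstract, a single mean-aggregation round over an adjacency structure can be sketched as follows. The region names, feature values, and edges are invented for the example; a real GNN would learn weights for this step rather than use a plain average:

```python
# Toy sketch of GNN-style message passing: each administrative unit's
# feature is averaged with its neighbours', so spatial context from
# adjacent units flows into every node.

def aggregate(features, edges):
    """One mean-aggregation round: average of a node and its neighbours."""
    out = {}
    for node, value in features.items():
        neigh = edges.get(node, [])
        vals = [value] + [features[n] for n in neigh]
        out[node] = sum(vals) / len(vals)
    return out

# Three admin units; A-B and B-C share a border (edges listed both ways).
features = {"A": 10.0, "B": 20.0, "C": 60.0}
edges = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
smoothed = aggregate(features, edges)
```

Stacking several such rounds (with learned transformations between them) is what lets a GNN capture both short- and long-range spatial dependencies.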
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: EO-Driven Solutions for Energy Access in International Development: Bridging Gaps with ESA’s GDA Clean Energy Activity

Authors: Malin Sophie Fischer, Filipe Girbal Brandão
Affiliations: Vida.place Gmbh, GMV Innovating Solutions S.L.
Energy access is a catalyst for sustainable development, powering progress in economic growth, healthcare, and education. Despite its central role and the global commitment to Sustainable Development Goal 7 (SDG-7) - ensuring affordable, reliable, sustainable, and modern energy for all by 2030 - significant gaps remain. As of 2022, approximately 685 million people lack electricity, with four in five of them residing in Sub-Saharan Africa, predominantly in rural areas where electrification deficits have even grown over the past decade (IEA 2024). Extending national grids to these remote communities is often financially unsustainable, highlighting the need for decentralised, renewable-powered solutions like mini-grids and solar home systems. Yet many of these communities are missing from existing public or even governmental maps, presenting a critical challenge: how can we efficiently and affordably identify and characterise underserved populations on a large scale as a foundation for electrification planning? Earth Observation (EO), with satellites monitoring our planet at an unprecedented rate and level of detail, offers in combination with geospatial technologies a transformative solution to bridging data gaps in energy access planning. By leveraging satellite data and advanced analytics, we can efficiently locate and characterise underserved communities, providing critical insights to guide sustainable electrification efforts from site selection to financing to implementation. This session will delve into how EO and geospatial tools are being applied in practice, including use cases developed under the European Space Agency’s (ESA) Global Development Assistance (GDA) Clean Energy activity. 
These examples, created in collaboration with international financing institutions across Sub-Saharan Africa and Asia, demonstrate how innovative geospatial solutions can support data-driven energy planning, enabling targeted and impactful interventions to accelerate progress toward universal energy access. The main use case to be presented showcases how advanced Earth Observation (EO) and geospatial analytics can generate detailed cropland and irrigation maps to support energy planning for rural communities. This approach drastically reduces the need for costly on-ground surveys by leveraging ESA’s free and open Sentinel satellite data. A pilot was conducted in collaboration with the World Bank’s Energy Sector Management Assistance Program (ESMAP), covering three diverse areas in Madagascar, where energy access remains critically low: two-thirds of the population, or approximately 18 million people, lack access to electricity. Using data from ESA’s Sentinel-2 satellites, high-resolution cropland maps were developed with sufficient detail to detect even smallholder farms. Sentinel-2’s 10-meter spatial resolution and frequent revisit times enable precise detection of fields, accounting for seasonal variations. Specifically, an image segmentation combined with a Machine Learning algorithm (Random Forest) was trained on remotely collected data together with a multitude of metrics derived from a Sentinel-2 time series, ensuring an accurate classification without the need for physical field visits (overall accuracy: 91%). The analysis extended to identifying irrigated fields, which are particularly relevant for energy planning as these areas can benefit from electricity-powered technologies like irrigation pumps. 
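To illustrate the kind of per-pixel metrics that can be derived from a Sentinel-2 time series and fed to such a Random Forest classifier, a minimal sketch follows. The NDVI values and the specific metric set are invented for illustration and are not the pilot's actual feature set:

```python
# Sketch: summarise one pixel's NDVI time series into classification
# features. Cropland typically shows a strong seasonal cycle (large
# amplitude), while bare soil stays flat.

def time_series_metrics(series):
    """Summary statistics of one pixel's NDVI time series."""
    return {
        "mean": sum(series) / len(series),
        "max": max(series),
        "min": min(series),
        "amplitude": max(series) - min(series),  # seasonal signal strength
    }

# Toy series over one growing season (values invented).
crop = time_series_metrics([0.2, 0.5, 0.8, 0.6, 0.3])
soil = time_series_metrics([0.15, 0.18, 0.16, 0.17, 0.14])
```

Computed per pixel or per segment, such metrics make a single classifier robust to cloud gaps and acquisition timing, since no single acquisition date has to capture the crop at peak greenness.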
By combining a vegetation index derived from Sentinel-2 imagery with backscatter data from Sentinel-1, which can penetrate clouds and measure soil moisture, detailed irrigation maps at 10-meter resolution were produced with pixel- as well as object-based classifications. Due to a lack of on-ground data, the accuracy could only be assessed visually - a common constraint when working in data-scarce regions. The resulting spatial data products provide an up-to-date view of agricultural activities around rural settlements, which are often overlooked in traditional mapping efforts and electrification planning. They complement global products such as ESA’s WorldCereal and WorldCover maps as well as IFPRI's crop-specific yet low-resolution MapSPAM data, offering localised, actionable insights into farming practices. Ultimately, making sense of complex data products is crucial for decision-makers and practitioners, especially those without a technical background. Accordingly, the created map products were connected to settlements, which serve as the fundamental units of electrification planning. To achieve this, a reliable base map was developed, automatically identifying settlements of all sizes using a Machine Learning clustering algorithm (DBSCAN) to group buildings and define settlement boundaries in the areas of interest. The cropland and irrigated land around each settlement were then quantified to assess surrounding agricultural activity. By combining this with the number of buildings and regional crop type data from MapSPAM, decision-makers gain actionable insights into household energy demand and the potential for agriculture-related productive energy uses. To ensure accessibility and usability, these outputs have been integrated into the GDA Clean Energy Platform. This intuitive, map-based online tool presents all relevant data layers and provides detailed settlement profiles. 
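The building-clustering step behind the settlement base map can be illustrated with a minimal pure-Python DBSCAN. This is a sketch of the algorithm, not the project's implementation: the coordinates, `eps`, and `min_pts` are toy values, and a real workflow would use projected coordinates and a spatial index rather than brute-force distance checks:

```python
# Minimal DBSCAN, mirroring how building centroids can be grouped
# into settlements: dense groups become clusters, isolated buildings
# remain noise (label -1).
import math

def dbscan(points, eps, min_pts):
    """Return one cluster label per point (-1 = noise)."""
    def neighbours(i):
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]
    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seed = neighbours(i)
        if len(seed) < min_pts:
            labels[i] = -1           # noise (may become a border point later)
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in seed if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # former noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nbrs = neighbours(j)
            if len(nbrs) >= min_pts:  # core point: keep expanding
                queue.extend(nbrs)
    return labels

# Two tight building groups (settlements) and one isolated building.
buildings = [(0, 0), (0, 1), (1, 0),        # settlement 0
             (10, 10), (10, 11), (11, 10),  # settlement 1
             (50, 50)]                      # noise
labels = dbscan(buildings, eps=1.5, min_pts=3)
```

A convex hull or buffer around each cluster would then yield the settlement boundary used for the cropland quantification described above.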
Users can filter settlements by key criteria, such as size and surrounding cropland share, to identify areas of interest quickly. This enables stakeholders to evaluate productive energy uses and prioritise electrification efforts effectively, whether at the regional or settlement level. The flexibility and global applicability of this approach, supported by ESA’s Sentinel satellite imagery, make it a scalable solution for energy planning worldwide. Regular updates to crop maps and potential thematic extensions, such as crop type or seasonality analysis, further enhance the platform's utility, paving the way for precise, large-scale electrification strategies. In addition to the use case in Madagascar presented above, selected further examples of applying EO and geospatial analytics in electrification planning will be briefly presented. These include other ongoing use cases from ESA’s GDA Clean Energy activity involving the main authors’ organisations, with relevant results expected by the LPS symposium. As examples, off-grid least-cost electrification planning on islands can be presented (use case in Micronesia), as well as grid extension planning in Papua New Guinea in collaboration with the Asian Development Bank. In both cases, buildings are detected from very-high-resolution satellite imagery obtained from ESA’s Third Party Missions, as a foundation for more advanced analyses including least-cost electrification and climate risk assessments. In summary, this session highlights the transformative role of Earth Observation (EO) and geospatial technologies in addressing energy access challenges through practical use cases from ESA’s GDA Clean Energy activity. The featured example from Madagascar, in collaboration with the World Bank, demonstrates how satellite data and machine learning generate detailed cropland and irrigation maps linked to rural settlements, providing actionable insights for energy planning and productive uses of energy. 
Potential additional examples, including off-grid electrification in Micronesia and grid extension planning in Papua New Guinea, can showcase the scalability and adaptability of these solutions. By leveraging ESA’s openly available satellite data, this session demonstrates not only the critical role these resources play in addressing energy access challenges but also their versatility in enabling diverse, impactful applications across varying contexts, showcasing the far-reaching potential of EO technologies for sustainable development.
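The settlement-delineation step described in this abstract (grouping buildings with DBSCAN to define settlement boundaries) can be sketched as follows. The building coordinates, `eps` and `min_samples` values are illustrative assumptions, not the project's actual configuration.

```python
# Sketch: grouping building centroids into settlements with DBSCAN.
# Coordinates and parameters are invented for illustration.
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical building centroids in a projected CRS (metres)
buildings = np.array([
    [0, 0], [30, 10], [15, 40],          # settlement A
    [500, 480], [520, 510], [495, 530],  # settlement B
    [2000, 2000],                        # isolated building -> noise
])

# eps: max distance (m) between buildings of the same settlement;
# min_samples: minimum buildings needed to form a settlement core
labels = DBSCAN(eps=100, min_samples=2).fit_predict(buildings)

for lab in sorted(set(labels)):
    name = "noise" if lab == -1 else f"settlement {lab}"
    print(name, int((labels == lab).sum()))
```

Each cluster label then becomes one settlement polygon (e.g. a convex hull or alpha shape around its buildings), around which cropland and irrigated area can be quantified.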
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Wastewater Treatment Plant Impact Assessment Based on Earth Observation Data in the Panama Bay

Authors: Ioan Daniel Serban, Sorin Constantin, Georgiana Anghelin, Marius Budileanu, Zoltan Bartalis
Affiliations: Terrasigna, European Space Agency
Juan Díaz Wastewater Treatment Plant (WWTP) is the main facility, located near Panama City (at the mouth of the River Juan Díaz), dedicated to cleaning up residual waters before they are released into the bay. The first module of the WWTP was completed in 2015, while the second one started its operations in August 2022. The main objective of the analysis was to assess and detect any changes in water quality parameters, using EO data, that might have been influenced by these investments. A comprehensive analysis of the impact on the physical, chemical and biological status of water bodies in the neighboring area within Panama Bay was performed. Given the objective, the following indicators were considered of prime interest: chlorophyll-a concentration (Chla) and derived products (e.g. number of algal blooms), Sea Surface Temperature (SST), dissolved oxygen (DO), nutrient concentrations (nitrate and phosphate) and the fraction of organic and mineral particles. Multiple sources of data were used, from products available through the Copernicus Marine Service to satellite images collected by the Sentinel-3 mission. A long time period, from 1993 to the present, was considered for several indicators, so as to highlight the overall changes in the region. The main conclusions were drawn based on the analysis of anomalies from the climatological mean at monthly and daily time scales, for each parameter of interest. The results suggest a tendency of improvement in terms of water quality in the most recent years. However, this overlaps with a long-term trend of degradation that may now have started to reverse thanks to the clean-up actions in the region. This work was performed within the framework of the GDA FFF (Global Development Assistance – Fast EO Co-Financing Facility), in partnership with the European Space Agency (ESA) and the European Investment Bank (EIB).
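The core of the analysis described here, anomalies from a climatological mean, can be sketched on a toy monthly series; real inputs would be Chla, SST or nutrient products from the Copernicus Marine Service and Sentinel-3.

```python
# Sketch: monthly anomalies from a climatological mean.
# The chlorophyll-a values below are invented for illustration.
import pandas as pd

dates = pd.date_range("2015-01-01", "2018-12-01", freq="MS")   # month starts
chla = pd.Series(range(len(dates)), index=dates, dtype=float)  # toy Chla series

# Climatology: long-term mean for each calendar month (Jan..Dec)
monthly_mean = chla.groupby(chla.index.month).transform("mean")

# Anomaly: each observation minus the climatological mean of its month
anomaly = chla - monthly_mean
print(anomaly.head(3))
```

A sustained run of negative Chla anomalies after a given date (here, e.g., after the second WWTP module came online) is the kind of signal such an analysis looks for.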
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Upscaling the water use efficiency analyses - GDA Agriculture pilot case Indonesia

Authors: Alen Berta, Viktor Porvaznik, Juan Suarez Beltran, Stefano Marra, Alessandro Marin
Affiliations: CGI Deutschland, CGI Italy, GMV
Agriculture, as the largest consumer of water worldwide, faces a critical challenge in improving irrigation efficiency to ensure food security and sustainable farming practices. Currently, more than 50% of ground and potable water is wasted due to inefficient irrigation systems, an issue exacerbated by the growing impacts of climate change, including more frequent and severe droughts. This inefficiency threatens food production and livelihoods for millions of people, necessitating robust solutions to optimize water usage and enhance irrigation management. The GDA Agriculture project aims to tackle this issue by deploying the ESA Sen-ET algorithm, enriched with global EO-based biomass products, and fully automated and integrated into the CGI Insula platform. This cloud-native platform integrates EO data, Geographic Information Systems (GIS), and advanced analytics to provide a cutting-edge solution for analyzing water use efficiency and daily evapotranspiration. Leveraging Sentinel-2 and Sentinel-3 data, along with other EO datasets, the project identifies problematic areas, evaluates irrigation system performance, and provides actionable insights to optimize water use. As such, the project supports the Asian Development Bank in a related project enhancing dryland farming systems in Indonesia, but it can be used globally as it does not rely on local data. Local data (crop areas/crop types) can be uploaded into the Insula platform for post-processing, depending on the user's need for granularity. The CGI Insula platform delivers significant benefits to end-users, including farmers, policymakers, and funding organizations. Firstly, it provides near-real-time monitoring and analysis of water usage efficiency, enabling farmers to make timely adjustments to their irrigation practices and mitigate the risk of water scarcity. 
The platform also supports the identification of areas that require additional irrigation or where existing systems are underperforming, allowing for targeted interventions and resource allocation. This targeted approach maximizes the effective use of water resources, improving agricultural productivity and fostering sustainability. By integrating EO data, GIS, and advanced analytics, the project provides a robust solution for optimizing water usage and improving agricultural productivity. The benefits for end-users are manifold, including near-real-time monitoring and targeted interventions. This operational implementation not only enhances food security and water sustainability but also supports the overall resilience and prosperity of agricultural communities.
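As a minimal illustration of the water-use-efficiency idea behind the platform, a standard formulation is biomass produced per unit of water evapotranspired. Whether Insula uses exactly this ratio is an assumption; the values below are invented.

```python
# Sketch of a common water-use-efficiency ratio: dry matter produced per
# unit of evapotranspired water. This is not necessarily the exact
# formulation used in the Insula platform.
def water_use_efficiency(biomass_kg_per_ha: float, et_mm: float) -> float:
    """Return kg of dry matter per hectare per mm of evapotranspiration."""
    if et_mm <= 0:
        raise ValueError("evapotranspiration must be positive")
    return biomass_kg_per_ha / et_mm

# Hypothetical seasonal totals for one field
print(water_use_efficiency(4500.0, 450.0))  # 10.0 kg/ha per mm
```

Fields with low values of this ratio relative to their neighbours are candidates for the "problematic areas" the abstract mentions.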
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Zoom In – A Cascading Solar Potential Approach

Authors: Elke Kraetzschmar, Kristin Fleischer
Affiliations: IABG mbH
Access to energy is identified as one of the basic needs for a decent life with better opportunities (SDG 7). Whereas in Europe the main initiatives concentrate on implementing the European Green Deal and related national policies, providing access to energy is still an ongoing task in other parts of the world, often handled in a more pragmatic way. Access to continuous and reliable energy as a guarantee for economic growth remains a challenge in many fast-growing and rapidly densifying urban agglomerations (Africa, SE Asia). When it comes to well-designed energy infrastructure, such cities are often poorly managed, and understanding the distribution grid is crucial. Private initiatives and investors provide access to electricity, so the urban fringe is often intermingled with off-grid energy units, such as mini-grids preferably run by diesel generators. Few houses have photovoltaic (PV) systems installed on their rooftops. Air quality is critical, space is limited, and open suburban regions convert to densely populated places within a few years. In this common setting, International Financial Institutions (IFIs) engage in supporting the transition towards sustainable solutions to fulfil SDG 7, serving the varying needs within urban agglomerations and in rural areas. EO data can act as the overarching element by providing a better understanding of regional patterns and urban dynamics, and thus help tailor the financial support accordingly. The focus is on finding the best-fitting, affordable and sustainable solutions, be it solar rooftop solutions, hydropower, wind energy, or biogas. Within the Global Development Assistance Project on Clean Energy (GDA-CE), the team sketched multi-scale approaches and linked these to sites of varying extent as demanded by the WB projects. Key questions to define the analysis approach are: • What area size needs to be covered and what is the best-fitting scale of analysis? 
• What is the purpose of the analysis and the respective user group, and how high is the willingness to pay? • Which input and analysis data need to be considered? The analysis of these cardinal questions often reveals that the well-known and very detailed rooftop-based solar potential analysis can be over the top or less purposeful, and even a waste of money. To avoid a misalignment between effort and suitability of results, the team introduced five scales of analysis serving different decision-making levels and local user groups. 1) Global scale: low- to medium-resolution global information layers for high-level decision makers. Usually used to understand the pan-continental situation or to conduct a country comparison based on long-term average data. A classic example is the Global Solar Atlas. 2) National scale: Whereas national solutions in European countries follow bottom-up approaches due to a rich local data environment, developing countries mainly lack detailed information layers, forcing the set-up of alternative top-down approaches for a first country-wide solar potential analysis. The advantage lies in the significantly lower effort needed, both in terms of budget and manpower. The GDA-CE team developed and conducted a solar potential analysis at national level for Armenia. The high-resolution analysis benefited from ESA’s Sentinel-2 imagery, providing a sufficient retrospective timeline, and most recent terrain data. Yet the level of detail of the results goes beyond the commonly used Global Solar Atlas. 3) Regional scale: While still relying on open EO and geospatial data of high resolution, the number of input information layers for the analysis increases, and so does the thematic level of detail. Additionally, regional specificities as well as user data are considered. The team is currently starting to implement this approach, discussing the most urgently needed regions in West Africa with the WB partner. 
4) Local scale: At local scale, at the latest, the solar potential analysis enters the VHR world, as the level of detail switches to building footprint level. The GDA team conducted a solar rooftop analysis based on VHR stereo data for Yerevan, the capital of Armenia, emphasising common challenges when working with spaceborne data. These proved sufficient for the first stage of dimensioning potential investments. When characterising urban structures regarding their suitability for rooftop installations, a generic understanding of building orientation, types & sizes, distribution, and specific rooftop characteristics (obstacles, age, sub-rooftop level) is of interest. 5) Implementation scale: More detailed aerial flight planning is considered far too costly and is rather replaced by local drone flights once investment planning reaches the engineering level (statics). Sub-city or even building-level analysis is linked to on-site visits. Here, the team has already conducted detailed planning, including engineering- and construction-specific skills, in Germany. The presentation aims to provide a wrap-up of the reasonability of the multi-scale analysis within different project planning phases and user perspectives. The different approaches were showcased in multiple locations (cities and rural areas). This supports the engagement of the IFIs: being aware of the limitations of regional scale vs. the benefits of receiving a most recent situational picture, linked to a timeline and an understanding of the contextual options, builds the base for a detailed trade-off analysis. Identifying the needed and most suitable scale of analysis is the foundation for the subsequent selection of data and analysis methodology. The choice directly implies a certain financial scheme necessary on the IFI side (driven by data costs and a high ratio of manual work). 
Depending on the size of the location of interest, costs are prone to explode rapidly when choosing the supposedly best solution while lacking knowledge of cost-efficient alternatives. The developed decision tree aims to guide the IFIs and users to tackle their needs in the best-fitting, and in some cases pragmatic, manner.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Geospatial AI integrated with Space-based measurements to Model Future Wind Energy Potential

Authors: Gopal Erinjippurath, Michael Sierks, Tristan
Affiliations: Sust Global
Renewable energy investors worldwide are increasingly focused on quantifying the impacts of climate change on their wind energy generation assets. Accurate projections of future wind speed patterns are critical for improving the modeling of wind energy production during the prospecting of wind farm sites, financial planning for new project developments, and estimating future capital yields from operational wind energy generation sites. In this work, we present a method for training and inferring Geospatial AI models designed to provide high-resolution projections with reduced bias when forecasting future wind characteristics, including wind speeds and wind project energy generation, at any location on the globe. Our approach integrates data from ESA missions such as Aeolus-1 and Sentinel-2, along with ground-based wind speed characterization datasets, including the Copernicus Regional Reanalysis for Europe (CERRA), ECMWF ReAnalysis v5 (ERA5), and the NREL Wind Toolkit. Additionally, we leverage NASA's NEX-GDDP dataset, which consists of bias-corrected climate scenario projections derived from General Circulation Model (GCM) runs conducted under the Coupled Model Intercomparison Project Phase 6 (CMIP6). We detail our methodology for collecting and qualifying ground truth datasets, combining space-derived and environmental reanalysis datasets. Furthermore, we explore various Geospatial AI model architectures that enable flexible learning representations of land surface influences on wind speed. We benchmark performance for regional generalizability across inland, coastal and offshore locations and characterize performance against in situ measurements of wind characteristics at wind energy generation sites. We evaluate scenarios such as tropical and extratropical cyclones, which limit the statistical performance of such models, and present a novel approach to quantify uncertainty in predictive performance under such acute physical hazard genesis scenarios. 
Finally, we demonstrate example workflows where these Geospatial AI models are deployed in commercial contracting and institutional investor settings. These workflows allow renewable energy investors and project operators to assess the impacts of climate change on wind characteristics and wind energy capacity planning. We showcase how renewable energy finance teams can better adapt current and new generation capacity to future energy demands through representative examples from our commercial engagements in the UK, Europe and the US.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Impact evaluation of irrigation schemes in Africa using Earth observation data

Authors: Oliver Mundy, Dr. Athur Mabiso, Dr. Emanuele Zucchini, Dr. Cristina Chiarella, Yu Dong, Rakhat
Affiliations: IFAD
The provision of irrigation is a crucial adaptation strategy against climate change, particularly in light of the variability in seasonal patterns and rainfall. In many parts of Africa, it is a vital component in ensuring food security. This session will examine a range of irrigation schemes funded by the International Fund for Agricultural Development (IFAD), including small drip irrigation schemes in Cabo Verde and medium-sized and large schemes in Ethiopia and Madagascar. The utilisation of Earth observation (EO) data for the monitoring of irrigation schemes, which are frequently situated in remote rural regions with inadequate road networks, can provide profound insights for international finance institutions such as IFAD. However, this approach is not a standard component in the impact evaluation of rural development programmes. This session proposes the implementation of a comprehensive approach to the collection of geo-referenced field data in regular project monitoring and evaluation. This approach should be particularly focused on mapping water sources and pipe and canal pathways, as well as the delineation of command areas and irrigated areas. The session will present a range of approaches and metrics for the monitoring and evaluation of different types of irrigation schemes, with a view to detecting the full extent of change and estimating the level of change using a range of EO-based output variables, including land-cover/land-use maps, crop maps, and vegetation and water indices (e.g. EVI, NDVI and NDWI). Given the variability in seasons, crops, and irrigation systems, a range of approaches and EO indicators are required. This session presents a change matrix for different types of irrigation schemes. Based on anticipated behavioural changes among farmers, the potential changes that could be detected from space are described, and the most suitable EO variables for measuring these changes are selected. 
Furthermore, the approaches delineate methodologies for defining time series analysis (before-after analysis on the same plot) and for making comparisons with a control area.
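The vegetation and water indices named in this abstract (NDVI, NDWI) have standard formulas; below is a minimal sketch on toy reflectance values. Real analyses would read the Sentinel-2 B03/B04/B08 rasters for the scheme's command area and compare before/after time series.

```python
# Sketch: NDVI and (McFeeters) NDWI from Sentinel-2 band reflectances.
# Array values are invented for illustration.
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def ndwi(green, nir):
    """NDWI for surface water mapping: (Green - NIR) / (Green + NIR)."""
    return (green - nir) / (green + nir)

red   = np.array([0.10, 0.08])   # Sentinel-2 B04
green = np.array([0.12, 0.30])   # Sentinel-2 B03
nir   = np.array([0.40, 0.05])   # Sentinel-2 B08

print(ndvi(nir, red))   # high value -> vigorous vegetation (pixel 0)
print(ndwi(green, nir)) # high value -> open water (pixel 1)
```

A before/after comparison of these indices over the same plots is the basis of the time series analysis mentioned above.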
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Revolutionizing Country Performance Assessment: Integrating EO/OSINT Data in a Machine Learning Model for fragility assessment

Authors: Valerio Botteghelli, Annalaura Di Federico, Adriano Benedetti Michelangeli, Chiara Francalanci, Annekatrin Metz-Marconcini, Alix Leboulanger, Anne-Lynn Dudenhoefer, Koen Van Rossum, Koen De Vos
Affiliations: e-GEOS, Cherrydata, DLR, Janes, Hensoldt Analytics, Vito
One of the outcomes of the GDA Fragility, Conflict and Security initiative was the design and implementation of a methodology introducing Earth Observation and open-source derived information to complement statistics-based methodologies for assessing fragility contexts, implemented in cooperation with International Financing Institutions. The GDA Fragility initiative designed, implemented and tested a proof of concept to enhance the understanding of a country’s fragility through the development of innovative indicators, to contribute to a better knowledge and understanding of the cohesion and convergence of drivers of fragility and resilience and the identification of their roles. The completed activity included the collection of data and analysis in over 12 developing member countries (DMCs) that the ADB categorized as Group A and B, covering an observation period from 2017 to 2022. To assess country maturity, 108 indicators were categorized into economic, social, and political dimensions. These indicators were then pre-processed, ingested, cleaned, normalized, and rescaled to ensure homogeneity and comparability. Using the k-means machine-learning method, countries were clustered into two groups based on similar indicator values and trends, both separately for each dimension and collectively. An aggregate country performance indicator was developed, akin to the traditional composite country performance rating, by assigning weights to maximize correlation with the traditional index. The resulting correlation was highly accurate, suggesting that the newly introduced geospatial indicators can provide early signals for decision-making in international financial institutions and on official development assistance. Over the next months, the team will work on the completeness of the set of indicators and will define a more general process for selecting and weighting indicators. 
In-depth case study analyses will be conducted on selected countries, with the goal of taking full advantage of the greater geographical and temporal granularity of quantitative indicators (both EO and non-EO).
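The normalize-then-cluster step described in this abstract can be sketched as follows; the indicator values and country count are invented (the actual activity used 108 indicators over 12+ DMCs).

```python
# Sketch: rescaling country indicators for comparability, then k-means
# clustering into two groups, as described in the methodology above.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Rows: countries; columns: aggregated (economic, social, political) scores
X = np.array([
    [0.90, 0.80, 0.85],
    [0.85, 0.90, 0.80],
    [0.20, 0.30, 0.25],
    [0.25, 0.20, 0.30],
])

# Normalize/rescale indicators for homogeneity, then cluster (k = 2)
Xs = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Xs)
print(labels)
```

The same clustering can be run per dimension (economic, social, political) and on all indicators collectively, as the abstract describes.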
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: EO supporting strategic planning of industrial-scale biogas and bio-methane production

Authors: Jonas Franke, Levente Papp, Kristin Fleischer, Elke Krätzschmar
Affiliations: Remote Sensing Solutions (RSS), Industrieanlagen-Betriebsgesellschaft mbH (IABG)
This abstract explores the partnership between the European Space Agency’s (ESA) Global Development Assistance (GDA) program and the World Bank, which leverages Earth Observation (EO) technologies to unlock the potential for biogas and biomethane production in Bangladesh. The country is advancing its clean energy ambitions under the Paris Agreement with a strong focus on reducing greenhouse gas emissions and scaling up biogas production. Despite the country's substantial feedstock resources, its biogas sector remains underdeveloped. Through satellite data integration, the collaboration optimizes feedstock sourcing for rural biogas production and identifies methane emission hotspots from landfills for assessing the potential of gas recovery systems. With regard to assessing the potential for biomethane from landfills, a five-year time series of Sentinel-5P satellite data has been instrumental in identifying areas with recurring methane emissions. Within the identified hotspot areas, high-resolution GHGSat data were used to confirm and quantify methane emissions at a local level, supporting the development of projects on methane recovery from landfills. Another key innovation is the use of EO data to guide the scaling of biogas production from agri-waste. By combining satellite imagery with land use, climate, and socio-economic data, this approach enables precise spatial modelling of feedstock sourcing. The ability to map areas where feedstock can be sourced sustainably minimises land-use competition and negative ecological impacts. The spatial modelling also optimises biogas production by prioritizing feedstock sourcing in proximity to energy demand, transport networks, and existing gas infrastructure. This ensures that biogas production is not only economically viable but also aligned with local energy needs, reducing costs and emissions associated with feedstock transport. 
This approach guided strategic planning for industrial-scale biogas and biomethane production at country scale, not only addressing Bangladesh’s energy needs but also contributing to global methane mitigation efforts, in line with the Global Methane Pledge. By utilizing EO data for precise spatial modeling and assessing the economic viability of biogas-to-biomethane production, this initiative helps reduce Bangladesh’s reliance on imported natural gas, supporting a sustainable, renewable energy market. This innovative approach provides a model for other developing nations striving for both economic growth and environmental sustainability.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Democratizing High-Resolution Earth Observation: Multi-Image Super-Resolution for Development Applications in Urban Asia

Authors: Sebastian Mueller, Prof Konrad Schindler, Dr Yohan Iddawela, Mr. Aditya Retnanto, Mr. Son Le
Affiliations: Asian Development Bank, ETH Zurich
High-resolution satellite imagery is essential for environmental monitoring, urban planning, and disaster management. However, the high costs of acquiring such data limit its accessibility, especially for applications requiring extensive time-series analysis. Governments and researchers often need months or years of data to track trends, compounding costs to unsustainable levels. To address these challenges, the Asian Development Bank (ADB) proposes to develop an open-source deep learning model to upscale freely available Sentinel-2 imagery to high-resolution equivalents, offering an affordable and scalable solution for Earth Observation (EO) applications. This project introduces an innovative multi-image super-resolution framework to enhance Sentinel-2 imagery from its native 10m resolution to approximately 5m and 2.5m for the red, green, blue, and near-infrared (NIR) spectral bands. Leveraging deep-learning models, this approach lowers the barriers to accessing high-resolution EO data while enabling critical time-series analysis across large areas. This project delivers key innovations in advancing multi-image super-resolution, addressing urban challenges in Asia and evaluating impacts on downstream applications. 1. Deep-learning for multi-image super-resolution: While deep learning has significantly advanced single-image super-resolution, multi-image super-resolution has traditionally relied on conventional image fusion techniques. This project advances the emerging field of deep learning for multi-image super-resolution. By leveraging the additional information from multiple observations of the same scene and harnessing the enhanced capacity of deep-learning models, this approach aims to achieve improved performance in super-resolution applications. 2. Application in Urban Asia: Rapid urbanisation in Asia has heightened the need for high-resolution EO data to monitor land use, urban sprawl, and environmental degradation. 
Many countries cannot afford very high-resolution (VHR) data, and existing super-resolution models have not been designed or validated for the needs of Asian cities. This project addresses both gaps. 3. Impact on Downstream Applications: The project evaluates the utility of super-resolution imagery for downstream tasks, focusing on land-use and land-cover (LULC) classification. Automated LULC classification results using super-resolved images will be compared against Sentinel-2 and VHR imagery (~1–2 meters GSD). 4. Benchmarking and Reproducibility: The project benchmarks multiple super-resolution AI models on a standardised test set, ensuring robust and reproducible results. It provides meaningful comparisons of different approaches for urban applications. The methodology involves collecting and preprocessing satellite imagery, training and validating super-resolution models, and applying the outputs to land-use classification in Hanoi, Vietnam. 1. Data Collection and Preprocessing: Very high-resolution (VHR) imagery from multiple sensors (Worldview, Spot, Pleiades NEO) is co-registered with multiple Sentinel-2 revisits. The data are preprocessed to ensure radiometric and geometric consistency. 2. Models: AI models for multi-image super-resolution, including ResNets, GAN-based approaches (e.g. SRGANs, ESRGANs), and Transformer-based architectures, are evaluated for their ability to generate high-resolution outputs. 3. Validation Framework: Super-resolved outputs are validated against VHR reference data using metrics such as Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and mean squared error (MSE). 4. LULC Classification: Human experts provide LULC labels for Hanoi. A classification model trained on super-resolved, Sentinel-2, and VHR images will assess the impact of super-resolution on classification accuracy. AI-powered super-resolution bridges the gap between the growing demand for precise EO data and the high cost of VHR imagery. 
By enhancing freely available Sentinel-2 imagery, this project provides a cost-effective solution that democratises access to high-resolution EO data. It enables applications such as creating granular land-use maps and detecting changes in building outlines, particularly in regions with limited access to high-resolution imagery. This work also addresses a critical gap in the literature by evaluating the performance of deep-learning-based super-resolution in urban settings and its impact on downstream tasks. By openly sharing datasets and methodologies, the project fosters international collaboration and enables the global research community to advance EO applications.
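Two of the validation metrics named in this abstract (MSE and PSNR) have standard definitions and can be sketched directly; SSIM would typically come from a library such as scikit-image. The toy arrays below stand in for a VHR reference and a super-resolved output.

```python
# Sketch: MSE and PSNR for comparing a super-resolved image against a
# VHR reference. Arrays are invented 8-bit toy data.
import numpy as np

def mse(ref, test):
    """Mean squared error between two images."""
    return float(np.mean((ref.astype(float) - test.astype(float)) ** 2))

def psnr(ref, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher means closer to reference."""
    err = mse(ref, test)
    return float("inf") if err == 0 else 10 * np.log10(max_val**2 / err)

ref = np.full((4, 4), 100, dtype=np.uint8)   # stand-in VHR reference
test = ref.copy()
test[0, 0] = 110                             # single-pixel error

print(mse(ref, test))   # 100 / 16 = 6.25
print(psnr(ref, test))
```

In the validation framework described above, these scores would be computed per band and per scene, alongside SSIM, to compare candidate super-resolution models.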
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: GPI – Grassland Production Index

Authors: Space Product Marketing Manager Robin Expert
Affiliations: Airbus
The Grassland Production Index (GPI), developed by Airbus, is an innovative service designed specifically for the European agricultural insurance sector. This satellite-based solution enables insurers to create precise insurance products that protect cattle breeders against economic losses caused by drought. Month after month, the indicator is compared to the historical average and to the maximum and minimum record years in grassland production, at a local (yet not individual) scale. When the index drops below an agreed threshold, the insurers compensate all insured breeders in the impacted local province (within the conditions defined in their respective contracts), without any paperwork or on-site expert inspection. Using satellite imagery from the MODIS and Sentinel-3 missions, the latter part of Europe's Copernicus programme, the GPI allows insurance companies to accurately assess the impact of climatic conditions on vegetation and calculate compensation based on scientific data. Unlike traditional agricultural damage assessment methods, this index offers a transparent and data-driven approach. It enables insurers to provide fairer and faster compensation contracts, relying on precise satellite measurements rather than subjective estimations. The primary goal is to secure European farmers' income against climate-related risks while allowing insurers to manage their risks more efficiently and scientifically.
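The parametric trigger described here (compensation paid when the index falls below an agreed threshold) can be sketched as a few lines of logic. The index values, threshold and payout rule below are illustrative assumptions, not Airbus's actual contract terms.

```python
# Sketch: a parametric insurance trigger on a grassland production index.
# All numbers are invented for illustration.
def payout_due(gpi_index: float, threshold: float) -> bool:
    """All insured breeders in the province are compensated when the
    index for the month falls below the agreed contract threshold."""
    return gpi_index < threshold

threshold = 0.80       # agreed in the contract, as a fraction of the
                       # historical average production for the province
drought_month = 0.72   # production well below the historical average
normal_month = 0.95

print(payout_due(drought_month, threshold))  # True -> automatic payout
print(payout_due(normal_month, threshold))   # False -> no payout
```

The appeal of such index-based contracts is exactly what the abstract states: the trigger is an objective satellite-derived number, so no on-site inspection is needed.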
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Analyzing Gender Dynamics for Monitoring of Artisanal Mining Activities Using Remote Sensing in Ghana’s Ashanti Region

Authors: Diana West, Edwina Anderson, Lindsey Bonsu, Bashara Abubakari, Foster Mensah, Edith Kwablah, Jacob Abramowitz
Affiliations: NASA SERVIR and University of Alabama in Huntsville, CERSGIS, University of Twente
Artisanal and small-scale gold mining (ASGM) accounts for approximately 35% of Ghana’s total gold output, employing over one million people directly. Despite this significant contribution, ASGM perpetuates stark gender inequalities. Women, who represent over 40% of the ASGM workforce in the Ashanti Region, are predominantly confined to low-paying, labor-intensive roles such as ore hauling and gold panning, earning 50–70% less than men, who dominate higher-value mining and processing activities. The SERVIR program is a partnership between NASA and the US Agency for International Development, with geospatial services for climate adaptation implemented through local Hub partners. SERVIR West Africa has an active Monitoring of Artisanal Mining (Galamsey) service based out of the Centre for Remote Sensing and Geographic Information Services (CERSGIS) in Ghana, which offers a geospatial platform designed to track ASGM activities. This study captures the efforts of the service team to integrate the gender perspective into the geospatial service through an extensive analysis of the gender dynamics surrounding ASGM. The Gender Analysis conducted for the Monitoring of Artisanal Gold Mining (Galamsey) service combined insights from geospatial analyses, interviews, and surveys conducted with 300 respondents across 10 ASGM communities to uncover the socio-economic and environmental dimensions of gender disparities in the sector. It reveals significant structural barriers to resource control: only 18% of women miners have access to land ownership or legal mining licenses, compared to 67% of men. Geospatial monitoring highlights severe environmental degradation in these areas, including deforestation and mercury contamination, which disproportionately affect women tasked with securing water and food for households. 
Health impacts are stark, with over 60% of women reporting issues like respiratory conditions and reproductive challenges, exacerbated by inadequate access to occupational health services. To address these inequities, we propose a transformative, gender-responsive framework that leverages geospatial tools for monitoring and resource allocation. Coupled with community-led and capacity-building initiatives, this approach aims to enhance women’s representation in decision-making and promote equitable, sustainable ASGM practices. By foregrounding the intersection of gender, geospatial technology, and environmental sustainability, this research offers actionable insights for policymakers, practitioners, and researchers committed to driving inclusive development in resource-dependent economies.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Monitoring Carbon Stocks Using Satellite Data: Global and Local Approaches

Authors: Francesco Amato, Valerio Pisacane, Renato Aurigemma, Jose Antonio Lopez Ferreiro, Fabiana Ravellino, Giovanni Giacco, Mauro Manente, Marco Focone
Affiliations: University Of Naples "Federico II", Euro.soft srl, Earth Sensing srl, Latitudo 40 srl
Carbon markets, which are becoming increasingly important in the global fight against climate change, require the development of reliable and transparent monitoring mechanisms. However, despite the growing significance of this market, several challenges remain, such as the difficulty in accurately measuring the amount of carbon sequestered, particularly in remote or hard-to-reach areas. Local surveys are costly and logistically complex, and there is no fully automated system for calculating carbon credits, limiting the ability to scale solutions globally. Furthermore, the lack of transparency between credit buyers and sequestration sources, along with the absence of secure mechanisms to prevent double counting of credits, exacerbates the problem, especially given the unregulated nature of these intangible assets. This work highlights the potential of employing Earth Observation data for the Monitoring, Reporting & Verification of carbon projects within a carbon marketplace. Two effective methodologies have been proposed to estimate carbon stocks, crucial indicators of a vegetation ecosystem's carbon sequestration capacity, initially estimated as above-ground biomass (AGB) and then converted into carbon stocks using empirical rules. The first method, "ReUse: REgressive Unet for Carbon Storage Estimation," employs deep learning to estimate global carbon sequestered by greenery. By utilizing biomass (AGB) data from the European Space Agency's Climate Change Initiative Biomass project, along with a time series of Sentinel-2 images, the model predicts carbon sequestration for each pixel through a regressive U-Net network. Incorporating Sentinel-1 satellite radar images and Digital Elevation Models enhances the model, enabling a more precise estimation of global carbon stocks. This tool offers quick estimates even in challenging conditions, such as after fires or in hard-to-reach areas.
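The abstract does not specify its "empirical rules" for converting AGB to carbon stock; a common convention (an assumption here, not necessarily the authors' rule) is the IPCC default carbon fraction of dry biomass, roughly 0.47:

```python
# Hedged sketch: AGB-to-carbon conversion using the IPCC default carbon
# fraction (assumed here; the abstract's actual rule is unspecified).
CARBON_FRACTION = 0.47  # Mg C per Mg dry above-ground biomass (assumed default)

def agb_to_carbon(agb_mg_ha: float, fraction: float = CARBON_FRACTION) -> float:
    """Convert above-ground biomass (Mg/ha) to carbon stock (Mg C/ha)."""
    return agb_mg_ha * fraction

print(agb_to_carbon(150.0))  # → 70.5
```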
The second method, "Forest Carbon Stock Estimation Using Machine Learning Ensembles: Active Sampling Strategies for Model Transfer," focuses on localized regions rather than providing global estimates. This approach employs active sampling and satellite imagery to identify the most relevant data points for these specific cases. Using Shannon’s entropy for sample selection, it innovatively transfers a calibrated regression model across different areas through an active-learning approach, starting with calibration in a reference region. Various sampling methods and regression strategies have been tested to reduce fieldwork while ensuring the accuracy of the estimates. This leads to a smaller set of data points for collecting new ground truth information, thereby minimizing the need for physical measurements. Experimental results demonstrate that combining regression ensembles with active learning significantly reduces field sampling, while still producing carbon stock estimates comparable to conventional methods. Together, the two approaches offer complementary solutions for carbon stock estimation: a global method for remote or rapidly changing areas, and a more focused, localized method that minimizes field sampling. Lastly, the concept of a carbon marketplace, AICarbonHub, is introduced. This marketplace addresses the compensation needs of businesses and individuals, while also supporting property owners in securing funds for the upkeep and enhancement of green spaces. By integrating the methodologies described above, the marketplace would enable continuous monitoring and verification of carbon storage, ensuring the credibility and accuracy of the carbon credits being traded.
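The entropy-guided sample selection described above can be sketched as follows; the data are invented, and a bootstrap least-squares ensemble stands in for the authors' regression ensembles:

```python
# Illustrative sketch (assumed setup, not the authors' code): Shannon-entropy
# active sampling for transferring a biomass regressor to a new region.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration data from the reference region:
# 6 spectral covariates per pixel -> above-ground biomass
X_src = rng.normal(size=(500, 6))
y_src = X_src @ np.array([2.0, -1.0, 0.5, 0.0, 1.5, -0.5]) \
        + rng.normal(scale=0.2, size=500)

# Bootstrap ensemble of least-squares fits stands in for the model ensemble
coefs = []
for _ in range(50):
    idx = rng.integers(0, 500, size=500)
    beta, *_ = np.linalg.lstsq(X_src[idx], y_src[idx], rcond=None)
    coefs.append(beta)
coefs = np.array(coefs)          # (50, 6)

# Candidate pixels in the target region, ground truth still unknown
X_tgt = rng.normal(size=(1000, 6))
preds = X_tgt @ coefs.T          # (1000, 50) ensemble predictions per pixel

def shannon_entropy(samples, bins=10):
    """Entropy (bits) of the ensemble's predictions, discretised into bins."""
    hist, _ = np.histogram(samples, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

entropy = np.array([shannon_entropy(row) for row in preds])

# Request field measurements only at the k most ambiguous pixels
k = 20
to_sample = np.argsort(entropy)[-k:]
```

Pixels where the ensemble members disagree most (highest predictive entropy) are the ones flagged for new ground-truth collection, which is the mechanism by which active sampling reduces fieldwork.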
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: B.03.06 - POSTER - Climate, Environment, and Human Health

It is well-known that many communicable and non-communicable diseases have a seasonal component. For example, flu and the common cold tend to increase in autumn and winter whilst vector-borne diseases like Dengue and West Nile Virus tend to peak in late summer when the vectors are at their most abundant. Under monsoon regimes, many diseases peak during the rainy season. Hay fever, spring-time allergies and other respiratory disorders also have seasonality related to the abundance of pollens and other allergens in the air. Environmental conditions in water, air and land have a role in regulating the variability in the presence or absence and abundance of pathogenic organisms or material in the environment, as well as the agents of disease transmission like mosquitoes or birds. For example, air temperature and relative humidity are linked to flu outbreaks. Water quality in coastal and inland water bodies impacts outbreaks of many water-borne diseases, such as cholera and other diarrheal diseases, associated with pathogenic bacteria that occur in water. The seasonality has inter-annual variabilities superimposed on it that are difficult to predict. Furthermore, in the event of natural disasters such as floods or droughts, there are often dramatic increases in environmentally-linked diseases, related to the breakdown of infrastructure and sanitation conditions.

Climate change has exacerbated issues related to human health, with shifting patterns in environmental conditions, changes in the frequency and magnitude of extreme events, such as marine heat waves and flooding, and impacts on water quality. Such changes have also led to geographic shifts of vector-borne diseases, as vectors move into areas that become more suitable for them as temperatures rise, or retract from those that become too hot in the summer. The length of the seasons during which diseases may occur can also change as winters become shorter. There are growing reports on the incidence of tropical diseases at higher latitudes as environmental conditions become favourable for the survival and growth of pathogenic organisms.

Climate science has long recognised the need for monitoring Essential Climate Variables (ECVs) in a consistent and sustained manner at the global scale and with high spatial and temporal resolution. Earth observation via satellites has an important role to play in creating long-term time series of satellite-based ECVs over land, ocean, atmosphere and the cryosphere, as demonstrated, for example, through the Climate Change Initiative of the European Space Agency. However, the applications of satellite data for investigating shifting patterns in environmentally-related diseases remain under-exploited. This session is open to contributions on all aspects of investigation into the links between climate and human health, including but not limited to, trends in changing patterns of disease outbreaks associated with climate change; use of artificial intelligence and big data to understand disease outbreaks and spreading; integration of satellite data with epidemiological data to understand disease patterns and outbreaks; and models for predicting and mapping health risks.

This session will also address critical research gaps in the use of Earth Observation (EO) data to study health impacts, recognizing the importance of integrating diverse data sources, ensuring equitable representation of various populations, expanding geographic scope, improving air pollution monitoring, and understanding gaps in healthcare delivery. By addressing these gaps, we aim to enhance the utility of EO data in promoting health equity and improving health outcomes globally.

The United Nations (UN) defines Climate Change as the long-term shift in average temperatures and weather patterns caused by natural and anthropogenic processes. Since the 1800s, human emissions and activities have been the main causes of climate change, mainly due to the release of carbon dioxide and other greenhouse gases into the atmosphere. The United Nations Framework Convention on Climate Change (UNFCCC) is leading international efforts to combat climate change and limit global warming to well below 2 degrees Celsius above pre-industrial levels (1850–1900), as set out in the Paris Agreement. To achieve this objective and to make decisions on climate change mitigation and adaptation, the UNFCCC requires systematic observations of the climate system.

The Intergovernmental Panel on Climate Change (IPCC) was established by the United Nations Environment Programme (UNEP) and the World Meteorological Organization (WMO) in 1988 to provide an objective source of scientific information about climate change. The Synthesis Report, the final document of the IPCC's Sixth Assessment Report (AR6), released in early 2023, stated that human activities have unequivocally caused global warming, with global surface temperature reaching 1.1°C above pre-industrial levels in 2011–2020. Additionally, AR6 described Earth Observation (EO) satellite measurement techniques as relevant Earth system observation sources for climate assessments, since they now provide long time series of climate records. Monitoring climate from space is a powerful role for EO satellites, since they collect global, time-series information on important climate components. Essential Climate Variables (ECVs) are key parameters that describe the state of the Earth's climate. The measurement of ECVs provides empirical evidence of the evolution of climate; therefore, they can be used to guide mitigation and adaptation measures, to assess risks, and to enable attribution of climate events to underlying causes.

An example of an immediate and direct impact of climate change is on human exposure to high outdoor temperatures, which is associated with morbidity and an increased risk of premature death. The World Health Organisation (WHO) reports that between 2030 and 2050, climate change is expected to cause approximately 250,000 additional deaths per year from malnutrition, malaria, diarrhoea and heat stress alone. WHO data also show that almost all of the global population (99%) breathe air that exceeds WHO guideline limits. Air quality is closely linked to the Earth's climate and ecosystems globally; therefore, if no adaptation occurs, climate change and air pollution combined will exacerbate the health burden at a higher speed in the coming decades.
Therefore, this LPS25 session will include presentations that demonstrate how EO satellite insights can support current climate actions and guide the design of climate adaptation and mitigation policies to protect and ensure the health of people, animals, and ecosystems on Earth (e.g., WHO's One Health approach).
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Investigating Vectors of Water-Associated Diseases Linked to Water Hyacinth in Vembanad Lake

Authors: Jasmin Chekidhenkuzhiyik, P Fathimathul Henna, P J Neelima, Rithin Raj, Emma Sullivan, Dr Anas Abdulaziz, Dr Nandini Menon, Dr Shubha Sathyendranath
Affiliations: Nansen Environmental Research Centre India, Trevor Platt Science Foundation, Earth Observation Science and Applications, Plymouth Marine Laboratory, CSIR-National Institute of Oceanography, National Centre for Earth Observation, Plymouth Marine Laboratory
Water hyacinth (Eichhornia crassipes), usually found in freshwater bodies, remains an unresolved challenge for many countries around the world, as it affects human activities as well as health. The invasive hydrophyte is widespread throughout Vembanad Lake, a backwater body on the southwest coast of India, and its connected canal systems. The proliferation of this weed fluctuates dynamically in response to variations in salinity levels. During the monsoon season, when the entire lake is freshwater-dominated, these hydrophytes envelop the lake’s surface, whereas during the dry season, the hydrophytes in saline water-dominated areas decay and sink to the bottom. In both cases, the water quality of the lake is affected, with consequences for ecosystem health as well as human health. The thick floating weed mats obstruct water flow, hamper fishing activity and cause the water to stagnate. Reduced water flow promotes sedimentation, deoxygenation and water quality deterioration, and reduces sunlight penetration. This creates a favourable habitat for the proliferation of disease vectors such as mosquitoes and snails, promoting diseases such as schistosomiasis, dengue, chikungunya, and malaria. Our investigation in Vembanad Lake showed the presence of larval forms of various vectors in the roots of Eichhornia species collected from different canal stations connected to the lake. Mosquito larvae were found at all stations, with varying abundances. Molecular sequencing identified these larvae as Mansonia indiana, a zoophilic mosquito that serves as a vector for the filarial nematode Brugia malayi. Other organisms found within the root network included juveniles of freshwater snails, water bugs, diving beetles, midges, and water spiders. Among the snail species, Indoplanorbis exustus and Gyraulus sp. are known to serve as intermediate hosts for trematode parasites such as Echinostoma and Schistosoma, which cause diseases such as schistosomiasis in humans and animals.
There have been reports on outbreaks of Schistosomiasis in the districts that border the freshwater regime of Vembanad Lake, heavily infested with water hyacinth. Modern remote sensing technologies can greatly enhance our capacity to understand, monitor, and estimate water hyacinth infestation within inland as well as coastal freshwater bodies. This study should be continued to investigate potential connections to human health.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Does Industrial Pollution Drive Antimicrobial Resistance? Results from a Metagenomic Study in Asia’s Largest Pharmaceutical Hub

Authors: Inderjit Singh, Rashmi Sharma, Amit Arora, Balvinder Mohan, Neelam Taneja
Affiliations: Postgraduate Institute of Medical Education and Research, Chandigarh
Background: Sewage hosts a diverse community of bacteria, including human gut pathogens shed in faeces. In industrial cities it also receives outflows of antibiotics from pharmaceutical producers and communities, and so acts as a huge reservoir of antibiotic-resistance genes (ARGs). This intricate environment provides ample opportunity for pathogens to acquire new genes or exchange genes for their benefit. Understanding the emergence, evolution, and transmission of individual ARGs is essential to develop sustainable strategies to combat AMR. We carried out resistome and microbiome analyses of sewage and soil samples from Asia’s largest pharmaceutical hub, Baddi, Himachal Pradesh, India. Methodology: We carried out intensive mapping of Baddi, marking important points around the river Sirsa, community sewage, pharmaceutical/industrial effluents and hospitals. Sewage and soil samples were collected and processed for microbiological isolation of ESBL-producing and carbapenem-resistant organisms (CROs). DNA was isolated and shotgun metagenomic sequencing was performed. Raw reads were quality-checked and assembled using IDBA-UD. Reads were screened for AMR genes against the MEGARes database, and microbial diversity was profiled with KrakenUniq. R packages were used for plotting and statistics. Results: We observed higher CRO counts in industrial and hospital sewage than in community sewage, and a higher number of ARG signals in industrial wastewater, signifying selection pressure. Ironically, ARGs were higher in community sewage than in hospital sewage, indicating contamination with industrial wastewater. Hits against metallic compounds were exclusively high in industrial effluents. The number of hits for antimicrobial drugs increased significantly in industrial soil sludge and river sediment samples taken from locations with high anthropogenic activity. A high level of aminoglycoside, beta-lactam, and tetracycline resistance was observed.
The higher prevalence of ARGs in water samples indicated the dissemination of antibiotic-resistance genes in water bodies and untreated wastewater, and industrial sludge was becoming part of agricultural soil. Conclusions: A multipronged approach is required to mitigate the effects of industrial pollution. Better sewage treatment practices are needed to reduce the microbial load and limit AMR transmission.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Shoreline Dynamics and Trends along the Kerala Coast, India: Observations from Multi-Temporal Satellite Data

Authors: Dr Ranith Rajamohanan Pillai, Ms. Swathy Krishna. M. C, Dr Bjorn Nyberg, Dr Nandini Menon. M, Dr Roshin P. Raj
Affiliations: Nansen Environmental Research Centre (India), 7 analytics, Nansen Environmental and Remote Sensing Center, Bjerknes Center for Climate Research
Climate change and associated extreme events have affected the coastline of India, resulting in the loss of coastal habitats. Detailed information on the rate of change of the coastline is important in identifying the intensity of loss and planning mitigation measures. This study assessed the variability of the shoreline along the Kerala coast, situated on the southwest coast of India, for the period 2015 to 2023. The shoreline datasets for the study were obtained from Landsat 5, 7 and 8 and Sentinel-2. A normalized difference water index served to delineate land and water bodies, as a first step in digitizing the shorelines for the Kerala coast. The analysis employed the Digital Shoreline Analysis System (DSAS) integrated with ArcGIS 10.2 to calculate shoreline dynamics using five distinct metrics: Shoreline Change Envelope (SCE), Net Shoreline Movement (NSM), End Point Rate (EPR), Linear Regression Rate (LRR), and Weighted Linear Regression (WLR). A total of 5,819 transects were analyzed to quantify the spatial and temporal variability in shoreline dynamics. Results showed that about 61% of the transects exhibited negative movement, indicating erosion. The maximum erosion measured was -827.83 meters, while the highest accretion measured was 437.18 meters. Based on SCE, the average shoreline change along the Kerala coast was 26.04 meters. The total distance of shoreline movement represented by NSM indicated significant erosion across the region. The average annual rate of shoreline movement between the oldest and most recent shorelines according to EPR showed that erosion is widespread, at an average of -1.06 meters/year. About 61% of the transects exhibited erosional trends, with a maximum erosion rate of 101.4 meters/year and an accretion rate of 53.55 meters/year. LRR, derived from linear regression applied to all shoreline positions over time, showed similar magnitudes and patterns of erosion.
The highest erosional rate recorded was -95.83 meters/year, while the highest accretion rate was 49.05 meters/year. Similarly, the WLR method, which applies weighted regression to account for uncertainty in shoreline positions, showed that about 61% of transects were eroding at an average rate of -1.06 meters/year. The peak erosion rate was -95.23 meters/year, and the maximum accretion rate was 48.56 meters/year. It is hence evident from this study that the Kerala shoreline is predominantly erosional, with significant shoreline retreat observed along the Kannur, Ernakulam and Trivandrum districts, owing to the frequent heavy rainfall, hydrodynamics and coastal anthropogenic activities. However, accretion was also identified at localized levels. The uncertainty in the rate calculations, evaluated at a 90% confidence interval, ranged between 0.47 and 2.51 meters/year, indicating the need for further refined analyses. This study serves as an interim assessment of the shoreline variability along the Kerala coast and provides critical insights into the spatial extent of coastal changes. Incorporation of high-resolution data, extending the temporal scale in the analysis could enhance the precision of these findings and better inform coastal management strategies to support sustainable development along the Kerala coastline.
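The DSAS metrics named in this abstract follow standard definitions from dated shoreline positions along each transect; a minimal sketch on invented data (not the authors' transects) illustrates how SCE, NSM, EPR, and LRR relate:

```python
# Illustrative sketch of DSAS-style shoreline-change metrics for one transect.
# Positions are signed distances (m) from a fixed baseline; data are invented.
import numpy as np

years = np.array([2015.0, 2017.0, 2019.0, 2021.0, 2023.0])
dist  = np.array([120.0, 116.5, 113.0, 110.5, 108.0])  # retreating shoreline

sce = dist.max() - dist.min()        # Shoreline Change Envelope (m)
nsm = dist[-1] - dist[0]             # Net Shoreline Movement (m, negative = erosion)
epr = nsm / (years[-1] - years[0])   # End Point Rate (m/yr, oldest vs newest only)
lrr = np.polyfit(years, dist, 1)[0]  # Linear Regression Rate (m/yr, all positions)

print(sce, nsm, epr, lrr)  # → 12.0 -12.0 -1.5 (slope ≈ -1.5)
```

WLR follows the same regression idea but weights each position by its positional uncertainty, which is why the abstract reports it alongside LRR with slightly different extremes.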
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: The Zanzemap Project: Artificial Intelligence Models and Satellite Data to Forecast Vector Dynamics in Northern Italy

Authors: Giovanni Marini, Daniele Da Re, Francesca Dagostin, Marharyta Blaha, Annapaola Rizzoli
Affiliations: Fondazione Edmund Mach, University of Trento
The project "ZanZeMap" aims to enhance public health in the Autonomous Province of Trento (Northern Italy) by developing user-friendly maps that indicate the risk of tick and mosquito presence and activity, addressing significant public health challenges posed by vector-borne diseases. Utilizing advanced artificial intelligence (AI) and machine learning techniques, this initiative analyzes detailed climatic and environmental data to predict where and when these arthropods are most active. Key to this project is the integration of high-resolution climate data, including satellite observations, providing insights into temperature, humidity, and vegetation cover—critical factors for understanding vector habitats and behaviors. The project can forecast changes in mosquito and tick populations up to two weeks in advance under various climate scenarios, allowing for proactive vector management. Additionally, field-based vector monitoring will be incorporated to validate the model’s forecasts, enhancing the accuracy of vector activity assessments and enabling timely interventions. The resulting online maps will empower the local population and stakeholders by providing real-time information on vector phenology and activity, facilitating personal protective measures against bites such as using repellents and fostering a collaborative environment in public health initiatives. Ultimately, this project not only aims to improve local vector surveillance but also has the potential for application in diverse geographical contexts facing similar public health challenges exacerbated by climate change. By establishing a robust framework for ongoing data analysis and community involvement, the initiative seeks to enhance public health outcomes and quality of life in the Autonomous Province of Trento and the Alpine area in the future.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Vibrio-phytoplankton relationships in Vembanad Lake and their potential use in Earth observation

Authors: Kiran Krishna, Dr Anas Abdulaziz, S Sangeetha, Shard Chander, Ashwin Gujrati, Dr Nandini Menon, Grinson George, Dr Shubha Sathyendranath
Affiliations: CSIR-National Institute of Oceanography, Academy of Scientific and Industrial Research (AcSIR), Space Applications Centre (ISRO), Nansen Environmental Research Centre India, ICAR-Central Marine Fisheries Research Institute, Earth Observation Science and Applications, Plymouth Marine Laboratory, National Centre for Earth Observation, Plymouth Marine Laboratory
Diseases associated with Vibrio species are a growing global concern, particularly in coastal regions where the incidence of such diseases is escalating due to various factors related to climate change and other anthropogenic influences. There are ongoing efforts to employ Earth observation tools to develop risk maps of microbially-contaminated coastal areas, which necessitate the identification of reliable proxies for the presence and abundance of Vibrio species in the water. Vibrio species are found in association with diverse organisms, among which phytoplankton warrant further investigation due to their potential role as proxies detectable via satellites. This study seeks to assess statistically the relationship between the distribution of autotrophic picoplankton and Vibrio species in Vembanad Lake, situated along the southwest coast of India. The environmental variability in this region is primarily influenced by rainfall during the monsoon season (June to September), which often results in flash floods. Water samples were collected from 13 stations along the lake during three seasons: pre-monsoon (March), monsoon (June), and post-monsoon (December). These samples were analysed for total chlorophyll concentration using spectrophotometric methods. Autotrophic picoplankton were sorted into picoeukaryotes and cyanobacteria (Synechococcus) using flow cytometry; and Synechococcus were further partitioned into two types based on their pigment complement: those containing phycocyanin and those containing phycoerythrin. The total of all Vibrio species, including V. cholerae, was quantified employing quantitative real-time PCR techniques, as was the abundance of Escherichia coli, a bacterium that is often taken as indicative of faecal contamination. Other environmental variables such as temperature and pH were also monitored at the same time. 
Total chlorophyll concentrations in the lake varied between 3 µg/L and 71 µg/L, with the peak concentration recorded at one station during the pre-monsoon. The abundance of Synechococcus cells containing phycocyanin ranged from 10¹ to 10⁵ cells/ml, whereas those containing phycoerythrin ranged from 10¹ to 10⁴ cells/ml, and picoeukaryotes from 10² to 10⁴ cells/ml. The total Vibrio counts within the lake varied temporally, with concentrations of 7 x 10⁵ ± 3 x 10⁵ copies/ml during the pre-monsoon period, 7.5 x 10² ± 8.8 x 10² copies/ml during the monsoon season, and 1.4 x 10⁴ ± 2.2 x 10⁴ copies/ml during the post-monsoon. We found that the log-transformed abundance of total Vibrio (copies/ml) had a positive linear relationship (r²=0.46) with the relative abundance of picoeukaryotes and a negative linear relationship (r²=0.57) with the relative abundance of phycocyanin-containing Synechococcus. Multiple linear regression indicated that the variability in the distribution of total Vibrio species in Vembanad Lake exhibited a significant relationship (p<0.05) with temperature, picoeukaryotes, and pH. Similarly, multiple linear regression showed that the distribution of V. cholerae in the lake was significantly related (p<0.05) to E. coli, picoeukaryotes, pH, turbidity, and silicate. The high covariance between V. cholerae and E. coli could potentially indicate a common source for the two types of bacteria. In conclusion, this study highlights the potential of using picoplankton and temperature as proxies for mapping the distribution of Vibrio species in Vembanad Lake using satellite data. Ongoing research will focus on integrating these findings with Earth observation from space, as well as with data from drones equipped with hyperspectral sensors.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: High-Resolution Spatio-Temporal Mapping of Air Temperature and Humidity in Padua (Italy) Using Satellite Data and Geographically-Temporally Weighted Regression

Authors: Naila, Dr. Jacopo Vivian, Prof. Michele De Carli
Affiliations: Department of Industrial Engineering, Università degli Studi di Padova, Padova, Italy
Due to rapid urbanization, cities worldwide are experiencing a unique climatic phenomenon known as the urban heat island (UHI) effect, where urban temperatures become significantly higher than those of surrounding rural areas. Consequently, air temperature (Ta) and relative humidity (RH) levels vary tremendously across space due to the uneven distribution of factors like vegetation cover, population density, urban morphology, and other local characteristics. UHIs can intensify heat-related health risks during heat waves; therefore, understanding the spatial and temporal distribution of Ta and RH is crucial to accurately quantify (and hence prevent) heat-associated mortality. This study aims to determine the spatio-temporal patterns of air temperature and relative humidity and their respective explanatory factors, such as land surface temperature (LST), normalized difference vegetation index (NDVI), digital elevation model (DEM), and solar zenith angle (SZA). To model the relationships among these variables, the study used two data-driven models: a geographically weighted regression (GWR) model and a geographically and temporally weighted regression (GTWR) model. GTWR is an extended version of the standard GWR model, in which the weighting matrices embody both spatial and temporal information about the independent (explanatory) variables. In most cities, only a few meteorological stations are available to measure Ta and RH, limiting the coverage of ground observations over vast areas. To overcome this problem, satellite-derived thermal (IR) images have been employed to estimate Ta and RH at high spatial and temporal resolutions. Both GWR and GTWR models were trained using Landsat-8/9 and MODIS satellite thermal imagery for the city of Padua, Italy. The models predict Ta and RH over 10 years (2015–2024) for the winter and summer months.
The predicted weather data were compared with observations from 110 meteorological stations recently installed by the Municipality in various parts of the city. Results demonstrated that GTWR effectively predicts the spatial distribution of air temperature during hot summer days. However, the comparison revealed that certain weather stations, particularly those influenced by local anthropogenic heat sources, exhibit discrepancies that cannot be captured by the models. In conclusion, the model investigated in this study (GTWR) provides a reliable approach to identify heat-vulnerable areas, where mortality rates may increase due to heat, contributing to targeted interventions to protect at-risk populations and mitigate heat-related health impacts.
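The GTWR idea described in this abstract, a local least-squares fit whose weights decay with a combined spatio-temporal distance from the prediction point, can be sketched as follows; the data, bandwidths, and kernel form are assumptions for illustration, not the authors' implementation:

```python
# Minimal GTWR-style sketch (assumed Gaussian kernel and synthetic data).
import numpy as np

rng = np.random.default_rng(1)
n = 200
xy = rng.uniform(0, 10, size=(n, 2))   # station coordinates (km)
t = rng.uniform(0, 365, size=n)        # observation day of year
X = rng.normal(size=(n, 3))            # e.g. LST, NDVI, DEM covariates
beta_true = np.array([0.8, -0.3, 0.1])
y = X @ beta_true + rng.normal(scale=0.05, size=n)  # observed Ta (toy)

def gtwr_coefficients(x0, y0, t0, hs=5.0, ht=60.0):
    """Local regression coefficients at (x0, y0, t0); hs and ht are the
    spatial and temporal bandwidths of the weighting kernel."""
    d_s2 = np.sum((xy - np.array([x0, y0])) ** 2, axis=1)
    d_t2 = (t - t0) ** 2
    # Gaussian weights over the combined spatio-temporal distance
    w = np.exp(-(d_s2 / hs**2 + d_t2 / ht**2))
    Xd = np.column_stack([np.ones(n), X])   # add intercept column
    W = Xd.T * w                            # weight each observation
    return np.linalg.solve(W @ Xd, W @ y)   # weighted least squares

beta_local = gtwr_coefficients(5.0, 5.0, 180.0)
```

Plain GWR is the special case with no temporal term; GTWR's extra time dimension is what lets one model cover a decade of winter and summer scenes.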
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Spatial Modelling of Mosquito Breeding Sites to Improve Larval Source Management

Authors: Fedra Trujillano, Zaida Quiroz, Najat Kahamba, Fredros Okumu, Emma Laurie, Brian Barrett, Kimberly Fornace
Affiliations: University Of Glasgow, Pontificia Universidad Catolica del Peru, Ifakara Health Institute, National University of Singapore
Rising temperatures, as a consequence of climate change, can lead to an expansion of the areas suitable for mosquito breeding habitats and increase the population at risk of infectious diseases transmitted by this vector. This expansion poses a threat to current progress in the elimination and control of mosquito-borne diseases such as malaria. Due to the mosquito’s high dependency on the environment, the use of Earth Observation (EO) data is crucial to inform vector surveillance. Despite the current efforts towards malaria elimination, additional strategies are needed. Among them, Larval Source Management (LSM) is recommended as a complementary intervention by the World Health Organisation (WHO). In recent decades, data-driven models using entomological and EO data have been developed to produce high-risk area maps at the local scale. However, further insight into the utility of EO data and spatial modelling is needed for an integrated operational framework for vector control. The availability of EO data and the advances in data-driven models could potentially improve breeding site identification and the development of cost-effective methods. The aim of this study is to explore the integration of EO data and ground-based survey information to build a predictive model for identifying potential breeding habitats. The study focuses on a case located in South-East Tanzania, a malaria-endemic region where water bodies were surveyed for larval presence during both the dry and rainy seasons from 2021 to 2023. The proposed methodology investigates the spatial correlation between larval-positive water bodies and environmental characteristics. This environmental information includes weather variables (temperature and rainfall extracted from ESA products) and high-resolution land cover characteristics. The land cover classes (water, buildings and forest) were extracted from high spatial resolution (4 m) PlanetScope imagery using the Segment Anything Model.
These datasets, combined with larval breeding site observations, are integrated into a comprehensive set of environmental covariates. The model is developed as a Bayesian spatial model, implemented using the Integrated Nested Laplace Approximation (INLA). The results provide insights into combining recent image-processing foundation models with Bayesian spatial modelling to predict fine-scale maps for larval source management. Improving traditional vector surveillance methods will help monitor the expansion of mosquito habitats in changing environments.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Rising Sea Surface Temperatures and Marine Heatwaves in the Adriatic Sea: Implications for Mussel Aquaculture along the Abruzzo Coast, Central Italy

Authors: Romolo Salini, Susanna Tora, Federico Filipponi, Annamaria Conte, Carla Ippoliti
Affiliations: Istituto Zooprofilattico Sperimentale "G. Caporale" - Teramo, National Research Council (CNR)
Sea temperature is a critical parameter in aquaculture, directly influencing the growth and survival of molluscs. In the central Adriatic Sea, along the Italian coast, mussel farms are located nearshore, approximately 1 to 3 km from the coastline, at an average depth of 10 meters. Given the significant role of the aquaculture sector in the Italian economy and its increasing relevance as a source of high-quality food essential for a healthy population, we characterised the evolution of Sea Surface Temperature (SST) in the mollusc production areas. Although SST refers to surface temperature, which is influenced by atmospheric temperature and its fluctuations, solar radiation, wind, and short-term weather conditions, in shallow coastal waters such as our study sites the variation of temperature along the water column is minimal. Therefore, the SST can be considered, with sufficient accuracy, representative of mollusc environmental conditions. This study analysed SST trends in the Adriatic coastal waters of Abruzzo from 2008 to 2024, focusing also on marine heatwaves (MHWs). A MHW is defined as an SST anomaly that exceeds the 90th percentile of the climatological baseline for at least five consecutive days. SST data were derived from high-resolution satellite products provided by the Copernicus Marine Service, namely the Level-4 “Mediterranean Sea High Resolution and Ultra High Resolution Sea Surface Temperature Analysis”, with a spatial resolution of 0.01° (approximately 1 km), a daily temporal frequency, and coverage of the entire Mediterranean Sea since 2008. Data were extracted from locations representative of mussel farms and pre-processed using QGIS and R software. Using time-series analysis tools, daily SST data were decomposed into trend, seasonal, and random components.
The Mann-Kendall test identified statistically significant warming trends, while linear regression on detrended data revealed an average annual SST increase of 0.027°C from 2008 to 2024. MHWs, increasingly frequent and intense, characterized the summers of 2023 and 2024, with maximum SSTs exceeding 30°C and persisting, in some cases, for over 60 consecutive days. The heatwaves of 2024 were notably more prolonged than previous events, such as the 34-day MHW in 2022 along the Abruzzo coastline. Autumn 2023 was characterised by persistent SST anomalies, with elevated temperatures extending through October and November, indicating significant thermal inertia. These prolonged warm conditions likely compounded stress on marine life. Although it covers a relatively short period, this study highlights a significant upward trend in SST in the central Adriatic Sea, aligning with similar findings reported across the Mediterranean region. In conclusion, prolonged increases in water temperature have become more evident in both frequency and duration, with the summers of 2023 and 2024 experiencing the most intense events recorded over the investigated period. This study highlights the importance of satellite-based observations, in particular their ability to provide long-term daily SST mapping products over wide geographic areas with sufficient spatial detail for coastal water monitoring. Such data are essential for monitoring environmental conditions and supporting adaptive aquaculture management under climate change.
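The MHW definition used in this abstract (SST exceeding the 90th-percentile climatological baseline for at least five consecutive days) can be sketched in plain Python; the series and threshold here are illustrative, not the Copernicus product itself:

```python
def detect_mhw(sst, threshold, min_days=5):
    """Return (start, end) index pairs of marine heatwave events, i.e.
    runs where daily SST exceeds the day-specific 90th-percentile
    climatological threshold for at least min_days consecutive days."""
    events, run_start = [], None
    for i, (t, thr) in enumerate(zip(sst, threshold)):
        if t > thr:
            if run_start is None:
                run_start = i  # a warm run begins
        else:
            if run_start is not None and i - run_start >= min_days:
                events.append((run_start, i - 1))
            run_start = None
    # close a run that extends to the end of the series
    if run_start is not None and len(sst) - run_start >= min_days:
        events.append((run_start, len(sst) - 1))
    return events

# A 6-day exceedance qualifies; a 3-day one does not.
print(detect_mhw([25, 25, 25, 29, 29, 29, 29, 29, 29, 25, 25], [28] * 11))
```

In practice the threshold would itself be a day-of-year climatology computed from the 2008-2024 baseline rather than a constant.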
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Understanding Leptospirosis in Rio Grande do Sul, Brazil: Climatic and Sociodemographic Insights

Authors: Dr. Andrea de Lima Oliveira, Dr. Ricardo Guimarães, Shubha Sathyendranath, Milton Kampel
Affiliations: Instituto Nacional De Pesquisas Espaciais, Instituto Evandro Chagas, Plymouth Marine Laboratory
Several infectious diseases have a seasonal pattern that may be related to climatic conditions, such as rainy seasons for water-associated diseases. For this reason, satellite Earth observations are useful for studying the main drivers of these outbreaks, especially when coupled with additional information on sociodemographic factors that are potentially useful for modeling and predicting outbreaks. Here, we study leptospirosis, a bacterial disease that can be transmitted from animals to humans, usually through the urine of infected rodents or other hosts, including livestock. The bacteria can survive in soil for many days. Infection occurs when the bacteria come into contact with the mucous membranes or exposed wounds of susceptible individuals. It is endemic in Brazil and its incidence is closely linked to humid climates that favor the survival of the pathogenic bacteria in the environment. Among the Brazilian regions, the South has the highest average annual incidence (3.89 per 100,000 inhabitants) and the highest number of confirmed cases over the last 17.5 years (2007 – June 2024), totaling 20,260 cases. This corresponds to 65.1 cases per 100,000 inhabitants in 2024. The state of Rio Grande do Sul alone accounts for 38% of cases in the southern region. This state has also experienced extreme weather events, such as heavy rainfall in September 2023 and May 2024, with devastating consequences for the population. This study analyzes monthly leptospirosis cases in municipalities in Rio Grande do Sul, using data from the Brazilian government (DATASUS). A time series analysis was performed to calculate the Shannon entropy index, which indicates the complexity of the case patterns. Correlations between leptospirosis incidence, cumulative rainfall (CHIRPS, Climate Hazards Group InfraRed Precipitation with Station), and mean land surface temperature (ERA5) were evaluated. 
Municipalities were classified according to the predictability of leptospirosis outbreaks, considering time series complexity and climatic correlations. In addition, sociodemographic factors, such as gender, age, education level, occupation, and exposure to specific risk factors (e.g., flooding, rodent contact, agricultural activities), were analyzed in relation to the predictability index. Spatial analysis showed that leptospirosis cases were clustered in certain regions. The Shannon entropy index of most municipalities was between 0.75 and 1, indicating complex, non-linear patterns in the time series. Rainfall correlations with incidence were consistently positive, while temperature correlations varied, being negative in some locations but predominantly positive. Of the 497 municipalities in the state, 267 (53.7%) were classified as having low incidence (≤ 5 cases per 10,000 inhabitants), while 43 municipalities (8.7%) had highly unpredictable patterns, classified as “difficult to predict”. The remaining 187 municipalities (37.6%) showed varying degrees of predictability (feasible prediction, FP), based on combinations of lower complexity (S), significant rainfall correlation (R), and significant temperature correlation (T). Sociodemographic analysis showed that leptospirosis cases occurred predominantly among men (about 80% in all classes) and adults aged 30-59 years (49%-61%), followed by young adults aged 19-29 years (10%-23%). Data on educational level were incomplete, but among the reported cases, those with a secondary school education represented 22% to 31% of the feasible prediction classes. Occupational data, although often not reported, suggested that agricultural workers were disproportionately affected in municipalities classified as feasible prediction with lower Shannon index and significant temperature correlation; these occupations were also frequent in the difficult-to-predict group.
This highlights the potential role of occupational exposure in shaping seasonal incidence patterns in these municipalities. On the other hand, other occupations were more frequent in the classes correlated with rainfall, indicating that in municipalities where accumulated rainfall correlated with incidence, cases were not concentrated among agricultural workers but spread across other occupations. The risk situations were categorized into four groups: flood-related (i.e., contact with flood water or mud, or proximity to water), rodent-related (i.e., direct contact or signs of rodents nearby), work-related (i.e., farming or grain storage), and other (i.e., proximity to waste disposal, water tank or septic tank maintenance, vacant lots, etc.). Exposure to rodents was the most commonly reported risk across all classes (> 50% of cases). Work-related risks were particularly high in municipalities where seasonality and temperature correlations were significant, while flood-related risks dominated in municipalities with strong rainfall correlations. This study highlights the spatial and temporal variability of leptospirosis incidence in Rio Grande do Sul, revealing the multiple drivers of outbreaks, including climatic factors, occupational risks, and flood-related exposures. By classifying municipalities based on outbreak predictability, we provide valuable insights for tailoring public health interventions. Municipalities with clear patterns of seasonality and climatic correlations offer opportunities for predictive modeling, enabling proactive interventions such as early warnings, targeted education, and vaccination campaigns. Conversely, regions with unpredictable patterns may require more extensive surveillance efforts and resource allocation. These findings underscore the importance of integrating environmental, occupational, and sociodemographic data into leptospirosis prevention strategies to effectively reduce its public health burden.
This study is a contribution to the Waterborne Infectious Diseases and Global Earth Observation in the Nearshore (WIDGEON) project.
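The Shannon entropy index used in this abstract to grade the complexity of each municipality's case series can be sketched as follows; the abstract does not give the authors' exact implementation, so treating each time step's share of total cases as a probability and normalizing by the log of the series length (so values fall in [0, 1]) are assumptions:

```python
import math

def shannon_entropy_index(counts):
    """Normalized Shannon entropy of a monthly case-count series.

    Each time step's share of total cases is treated as a probability;
    dividing by log(n) maps the result to [0, 1]. Values near 1 mean
    cases are spread evenly over time (complex, hard-to-predict
    patterns); values near 0 mean cases concentrate in few months."""
    total = sum(counts)
    if total == 0 or len(counts) < 2:
        return 0.0
    probs = [c / total for c in counts if c > 0]
    entropy = -sum(p * math.log(p) for p in probs)
    return entropy / math.log(len(counts))

print(shannon_entropy_index([5, 5, 5, 5]))   # perfectly even series
print(shannon_entropy_index([10, 0, 0, 0]))  # fully concentrated series
```

Under this sketch, a municipality with an index between 0.75 and 1, as reported for most municipalities, would show monthly cases spread almost uniformly across the series.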
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Role of invasive macrophytes in enhancing the antimicrobial resistant pathogenic load in Vembanad Lake, Kerala, India

Authors: Jasmin Chekidhenkuzhiyik, S Sangeetha, Nada Mahamood, Dr Anas Abdulaziz, Emma Sullivan, Dr Nandini Menon, Dr Shubha Sathyendranath
Affiliations: Nansen Environmental Research Centre India, Trevor Platt Science Foundation, CSIR-National Institute of Oceanography, Earth Observation Science and Applications, Plymouth Marine Laboratory, National Centre for Earth Observation, Plymouth Marine Laboratory
Eichhornia crassipes, a native of South America, has invaded almost all the freshwater bodies of the world and is generally regarded as the most troublesome aquatic plant. Wherever it has encountered suitable environmental conditions, it has spread with phenomenal rapidity to form extensive monotypic blanket cover in lakes, rivers and rice paddy fields. The proliferation of the weed stagnates the water and degrades its quality. The plant is variable in size, reaching up to 1 m in height under good nutrient supply. Roots develop at the base of each leaf and form a dense mass, varying in length from 20 cm to 300 cm. These dense root mats harbour pathogens and vectors. Waterborne microbial diseases are a major public health challenge, especially in low- and middle-income countries with drinking water shortages and poor sanitation. Faecal contamination from non-point sources into water bodies leads to an increase in microbial pathogens, especially the faecal indicator bacterium Escherichia coli. Pathogenic strains of E. coli can cause several diseases in humans. We found that E. coli in the water of Vembanad Lake forms complex associations with the roots of the invasive weed Eichhornia crassipes, often existing as a biofilm embedded within a matrix of extracellular polysaccharide on the roots of the weed, thereby protecting itself from toxic pollutants. E. coli isolated from the roots of water hyacinth were characterized and tested for resistance against antibiotics from different generations. The Multiple Antibiotic Resistance (MAR) index of each isolate was calculated. The results showed that most of the isolates exhibited multi-drug resistance, a globally challenging threat. The issue of antimicrobial resistance among opportunistic pathogens, combined with the wide geographical distribution of aquatic weeds, poses serious health concerns.
Integrated water quality monitoring, combined with mapping of floating weeds on water bodies, is required to mitigate the problem.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Advanced ecosystem restoration: Blending phytoremediation with satellite-based and non-imaging-based remote sensing in the Himalayas of PIN Valley National Park, India

Authors: Abhinav Galodha, Deepika Sharma, Dr. Jagdeep Verma
Affiliations: School of Interdisciplinary Research (SIRe), Indian Institute of Technology, IIT Delhi, Department of Botany, Himachal Pradesh University (HPU), Department of Botany, Sardar Patel University
Heavy metal pollution presents a formidable challenge to global ecosystems, threatening biodiversity, soil and water quality, and human health. In ecologically sensitive or hard-to-access regions, traditional remediation techniques often fall short due to their resource-intensive nature and potential environmental disturbance. In response, phytoremediation emerges as an innovative and sustainable solution. Advanced remote sensing techniques, spanning proximal, airborne, and spaceborne data collection, enhance the prediction accuracy of contamination levels by correlating spectral reflectance data with metal concentrations. Proximal sensing, involving laboratory and field-based spectroradiometers combined with drone and satellite insights, permits exhaustive coverage and detail, crucial for monitoring shifts in land use and surface cover. Despite challenges such as spectral complexity and atmospheric variability, spectral data delineate metal-induced stress markers in vegetation, underscoring phytoremediation's potential. Against this background, this study explores the viability of phytoremediation as an environmentally friendly alternative strategy, with a focus on Pin Valley National Park in Himachal Pradesh, India. Here, we target plant species that naturally accumulate heavy metals, effectively detoxifying the environment. To enhance the accuracy and efficiency of contamination assessment, we employed cutting-edge remote sensing technologies, integrating proximal, airborne, and spaceborne data collection systems. Proximal sensing was conducted using a spectroradiometer that provided high-resolution spectral data directly from the field. This was supplemented by data from drones, which offered flexibility in covering large and varied terrains, and from satellites such as Landsat-8, Landsat-9, and Sentinel-2, which offered extensive temporal and spatial coverage.
These tools enabled comprehensive monitoring of changes in land use and vegetation cover over an extended period from 2010 to 2023. The analysis used several indices to determine plant health and the extent of environmental degradation, including the Normalized Difference Vegetation Index (NDVI), Normalized Difference Red Edge (NDRE), and Soil-Adjusted Vegetation Index (SAVI). These indices were instrumental in evaluating vegetation vigour and health. Additionally, the Heavy Metal Index, Iron Oxide Index, and Hydrothermal Index were applied to directly measure contamination levels. Our results showed a significant correlation between heavy metal concentration and stress markers in vegetation. For instance, areas with high NDVI often coincided with low heavy metal presence, indicating healthier vegetation capable of successful metal uptake for phytoremediation. The species identified as most effective in this environment included Indian mustard (Brassica juncea) and hemp (Cannabis sativa), which demonstrated remarkable capacities to absorb lead (Pb) and cadmium (Cd), two of the most problematic contaminants in the area. Specifically, Brassica juncea achieved a biomass lead accumulation of up to 2,500 mg/kg, while Cannabis sativa showed cadmium uptake levels reaching 900 mg/kg, suggesting their efficacy for targeted phytoremediation practices. Small traces of heavy metals such as Yttrium (3-11 ppb), Strontium (20-32 ppb), Rubidium (0.050-0.155 ppb), and Cadmium (0.045-0.170 ppb) could be retrieved from the identified site locations. The application of remote sensing technology ensured precise mapping of these metal concentrations and plant health, thereby optimizing phytoremediation efforts. In addition to supporting immediate remediation efforts, the integration of remote sensing technology provided valuable longitudinal data, revealing trends in environmental recovery. 
Over the observed period, reclaimed lands showed a gradual increase in NDVI values, from an average of 0.35 to 0.65, indicating significant improvement in vegetation cover and health. These positive trends were corroborated by reductions in Heavy Metal Index values, confirming a decrease in soil and water contamination levels to safer thresholds over the decade-long study. Moreover, the study's findings underscore the critical role of remote sensing in ongoing environmental monitoring. By processing and analyzing the collected data, we were able to identify contamination hotspots rapidly, optimize plant selection and placement, and ensure efficient resource allocation. The proximal sensor data aligned closely with drone and satellite data, ensuring consistent and reliable results across the various scales and technologies applied. In conclusion, this research affirms the feasibility and effectiveness of combining phytoremediation with remote sensing technologies to manage and mitigate heavy metal contamination. Our approach offers a scalable framework for environmental monitoring, capable of being adapted to various ecological contexts and contaminant profiles. The successful application of this methodology in Pin Valley National Park illustrates its potential as a practical tool for supporting the restoration of similar polluted regions worldwide, ultimately promoting ecological resilience and sustainability. Continued refinement and adaptation of these technologies hold the promise of enhancing global efforts to combat heavy metal pollution and support sustainable land management practices. This study not only contributes to academic knowledge but also offers actionable insights for policymakers and environmental managers committed to preserving natural ecosystems.
Keywords: Environmental Monitoring, Metal Contamination, Phytoremediation, Pin Valley NP, Hyperspectral, Normalized Difference Red Edge Index (NDRE), Normalized Difference Vegetation Index (NDVI), Soil-adjusted Vegetation Index (SAVI), Strontium, Rubidium, Yttrium.
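The vegetation indices applied in this study (NDVI, NDRE, SAVI) are standard band-ratio formulas and can be sketched in a few lines of Python; band reflectances are shown as plain numbers, and the SAVI soil-adjustment factor L = 0.5 is the commonly used default, assumed here rather than taken from the abstract:

```python
def ndvi(nir, red):
    # Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)
    return (nir - red) / (nir + red)

def ndre(nir, red_edge):
    # Normalized Difference Red Edge index, using the red-edge band
    # instead of red; more sensitive to subtle chlorophyll stress
    return (nir - red_edge) / (nir + red_edge)

def savi(nir, red, L=0.5):
    # Soil-Adjusted Vegetation Index; L dampens soil-background
    # effects (L = 0.5 suits intermediate vegetation cover)
    return (1 + L) * (nir - red) / (nir + red + L)

# Illustrative reflectances for a moderately vegetated pixel
print(ndvi(0.6, 0.2), ndre(0.6, 0.4), savi(0.6, 0.2))
```

For Sentinel-2, for example, NDVI would typically use bands B8 (NIR) and B4 (red), and NDRE a red-edge band such as B5; metal-stressed vegetation tends to show depressed values in all three indices, which is what links these maps to the contamination hotspots described above.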
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Earth observation measurements and spatio-temporal deep learning modelling to predict infectious disease outbreaks in South Asia: a case study from 2000 to 2017.

Authors: Dr Usman Nazir, Talha Quddoos, Dr Momin Uppal, Dr Sara Khalid, Dr Rochelle Schneider
Affiliations: Lahore University of Management Sciences, Center for Statistics in Medicine, University of Oxford, ESA, London School of Hygiene & Tropical Medicine
Background: Malaria remains one of the leading communicable causes of death. Approximately half of the world’s population is considered at risk, predominantly in African and South Asian countries. Although malaria is preventable, heterogeneity in climatological, socio-demographic, and environmental risk factors over time and across geographical regions makes outbreak prediction challenging. Data-driven approaches accounting for spatio-temporal variability may offer potential for region-specific early warning tools for malaria.
Methods: We developed and validated a data fusion approach to predict malaria incidence in the South Asian belt spanning Pakistan, India, and Bangladesh using geo-referenced environmental factors. For 2000-2017, district-level malaria incidence rates for each country were obtained from the US Agency for International Development's Demographic and Health Survey (DHS) datasets. Environmental factors included temperature (Celsius), rainfall (millimetres), and average Normalized Difference Vegetation Index, obtained from the Advancing Research on Nutrition and Agriculture (AReNA) project conducted by the International Food Policy Research Institute (IFPRI) in 2020. Data on nighttime light pollution were derived from two satellites: NOAA DMSP-OLS Nighttime Lights Time Series Version 4 and VIIRS Nighttime Day/Night Band Composites Version 1. A multi-dimensional spatio-temporal LSTM model was developed using data from 2000-2016 and internally validated for the year 2017. Model performance was measured using accuracy and root mean squared error. Country-specific models were produced for Bangladesh, India, and Pakistan.
Results: Malaria incidence in districts across Pakistan, India, and Bangladesh was predicted with 80.6%, 76.7%, and 99.1% accuracy, respectively. In general, higher accuracy and lower error rates were attained with increased model complexity.
Interpretation: Malaria outbreaks may be forecasted using remotely measured environmental factors. Modelling techniques that enable forecasting simultaneously ahead in time and across large geographical areas may empower regional decision-makers to manage outbreaks earlier and more accurately.
Funding: NIHR Oxford Biomedical Research Centre Programme. We would also like to acknowledge funding support provided by the Higher Education Commission of Pakistan through grant GCF-521.
Contributions: The study was conceived and designed by SK, UN, and MU. Data curation and analysis were performed by UN and MTQ. Results were interpreted by all co-authors. The abstract was written by UN and SK and revised by all co-authors. SK is responsible for the overall study.
Declaration of Interests: SK is supported by the Innovative Medicines Initiative, Bill & Melinda Gates Foundation, Health Data Research UK, British Heart Foundation, and Medical Research Council and Natural Environment Research Council outside of this work.
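The train/validate split described in the Methods (fit on 2000-2016, hold out 2017) amounts to framing each district's yearly series as supervised sequence-to-one samples for the LSTM. A hypothetical windowing sketch follows; the function name, window length, and the per-district framing are illustrative assumptions, not the authors' code:

```python
def make_sequences(series, window):
    """Turn a yearly series (incidence plus any environmental
    covariates, here a flat list for simplicity) into
    (input window, next-year target) pairs for sequence-to-one
    forecasting with a recurrent model such as an LSTM."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])  # window of past years
        y.append(series[i + window])    # the year to predict
    return X, y

# Five years of district-level values framed with a 3-year window:
# the last pair is exactly the "train on history, predict the
# held-out final year" setup described in the abstract.
print(make_sequences([1, 2, 3, 4, 5], 3))
```

In the actual model each window element would be a vector (temperature, rainfall, NDVI, nighttime lights) rather than a scalar, and one such pair per district per year feeds the spatio-temporal LSTM.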
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Advancing Health Impact Assessment with Air Quality Data from IoT/Low-Cost Sensors

Authors: Dr Christian Borger, Julie Letertre, Thomas Hodson, Ulrike Falk, Dr Rochelle Schneider, Dr Vincent-Henri Peuch
Affiliations: European Centre for Medium-Range Weather Forecasts (ECMWF), European Space Agency (ESA)
Understanding and mitigating the health impacts of air pollution requires accurate and very high resolution data on air quality, particularly in urban environments where pollutant levels vary significantly over scales of 100m or less. Traditional monitoring networks, while reliable, are often limited in spatial coverage due to high operational costs and maintenance requirements. As a result, critical gaps remain in our ability to assess local air quality and its effects on public health. Measurements from low-cost sensors, for instance from citizen science projects, represent a promising solution to these challenges. These Internet of Things (IoT) based observations can provide deeper insights into local-scale air pollution, though they come with certain limitations. In this study, we demonstrate the capabilities and benefits of these novel sensor measurements for health impact assessments. In particular, we focus on their ability to provide hyper-local information on health indicators in comparison to traditional approaches, using selected use cases to highlight their value in advancing public health assessments.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Using satellite observations to improve air quality through policy relevant research

Authors: Rosa Gierens, Hubert Thieriot, Lauri
Affiliations: Centre For Research On Energy And Clean Air
Air pollution is harmful to human health, causing illness and premature death. The Centre for Research on Energy and Clean Air (CREA) is an independent research organisation that uses scientific data, research, and evidence to support the efforts of governments, companies, and campaigning organisations worldwide to move towards clean energy and clean air. Our purpose is to provide data and analysis relevant to the decisions currently being made. Satellite observations are a great asset, as they provide continuous monitoring of key air pollutants on a global scale, independently of the existence of, or access to, ground-level monitoring. However, some technical skills are required to work with Earth observation datasets, especially for more advanced analyses. CREA is therefore building its capacity to provide data and analysis derived from satellite data, to bring the right results to the right people at the right time. In many countries, accurate air pollution emission monitoring is missing, making it difficult to develop effective policies or to enforce air pollution regulations. Furthermore, emission data are required for conducting health and environmental impact assessments. Various approaches for estimating emissions from satellite data can be found in the scientific literature. At CREA, we have implemented the flux divergence method following Beirle et al. (2023, https://doi.org/10.5194/essd-15-3051-2023) to estimate NOx emissions using TROPOMI NO2 and horizontal wind fields from ERA5. We chose the flux divergence method because it provides good accuracy for point sources and is computationally relatively inexpensive. In some regions of interest in Southeast Asia, the low availability of TROPOMI NO2 data makes emission estimation challenging, and we are currently working on ways to mitigate this issue. Furthermore, we are evaluating our NOx point-source emissions against other data sources from the USA, EU, Taiwan, and Australia.
Preliminary results suggest that the agreement between the top-down and bottom-up emission estimates is within the expected range. Datasets for NOx emissions derived from satellite data already exist in the public domain, but they are often made available with considerable delay. In rapidly developing regions, the timeliness of the data is particularly important. For example, in Southeast Asia, we can identify several point sources in 2023 at locations where no NOx emissions could be detected in previous years. Our main goal is to make timely NOx emission data available to the public for the health impact assessments conducted at CREA. Furthermore, we aim to identify facilities that have (or have not) installed pollution abatement technologies, to detect possible breaches of regulations. In the future, we plan to also use the flux divergence method to estimate point-source SO2 emissions. In other work, we also utilise satellite observations to understand air pollution on a regional scale. We use readily available L3 data products for SO2, NO2, and aerosol optical depth to describe the spatial distribution of pollutants and changes over time. For a recent example of how satellite data can provide valuable context for other analyses, see Manojkumar (2024, https://tinyurl.com/creafdg).
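In the flux divergence method of Beirle et al. (2023) cited above, the local NOx emission rate is estimated as the divergence of the horizontal flux (wind times NO2 column) plus a chemical sink term proportional to the column divided by the effective NOx lifetime. A schematic one-dimensional sketch, assuming a uniform grid and a central-difference scheme (both illustrative choices, not CREA's implementation, which works on 2-D TROPOMI/ERA5 fields):

```python
def flux_divergence_emissions(columns, winds, dx, lifetime):
    """Estimate emissions E = d(u*c)/dx + c/tau on a 1-D grid.

    columns  -- NO2 vertical column densities per grid cell
    winds    -- horizontal wind speed per grid cell
    dx       -- grid spacing
    lifetime -- effective NOx lifetime tau (sink term c / tau)

    Returns one estimate per interior cell (edges lack neighbours
    for the central difference)."""
    flux = [u * c for u, c in zip(winds, columns)]
    emissions = []
    for i in range(1, len(flux) - 1):
        divergence = (flux[i + 1] - flux[i - 1]) / (2 * dx)
        emissions.append(divergence + columns[i] / lifetime)
    return emissions

# Calm winds: the divergence vanishes and only the sink term
# (column / lifetime) remains, which must be balanced by emissions
# in steady state.
print(flux_divergence_emissions([1.0, 1.0, 1.0], [0.0, 0.0, 0.0], 1.0, 2.0))
```

The point-source character of the method comes from the divergence term: a localized emitter produces a sharp positive divergence peak at the source cell, so summing the field over a small region around the peak yields the source strength.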
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: AIR4health: Leveraging Earth Observation for Compound Climate and Air Quality Extremes Early Warning

Authors: Dr. Ana Oliveira, Paulo Nogueira, Vital Teresa, Luís Figueiredo, Élio Pereira, Rita Cunha, Fabíola Silva, Inês Girão, Amaya Atencia Yepez, Ana Margarida Alho, Maria Miguel
Affiliations: CoLAB +Atlantic, ISAMB, Faculty of Medicine, University of Lisbon, GMV
Climate resiliency is a key challenge of the 21st century, as the impacts of climate change and weather extremes become pressing concerns. Nonetheless, public health stakeholders still struggle to take advantage of state-of-the-art scientific knowledge in geospatial data science, considering that many dose-response uncertainties remain regarding (i) the best practices in mapping exposure to environmental and climate-induced hazards, (ii) the attribution and measurement of correlated impacts, and (iii) our ability to produce meaningful future impact assessment scenarios. This is the case for Compound Climate and Air Quality Extremes. As extreme temperatures have become significantly more frequent and severe in Portugal over recent decades (and across Europe overall), their occurrence has translated into significant excess human mortality and morbidity, often in tandem with simultaneous air quality deterioration, with corresponding human and societal impacts. However, such impacts have only been documented in a case-specific manner, i.e., describing the consequences of Cold and Heat Waves (CW and HW, respectively) and low Air Quality (AQ) events separately, or by focusing on very specific events and locations, limiting the ability of the public health sector to derive meaningful and generalisable policy guidelines for early warning and response actions. To tackle these challenges, the ESA-funded AIR4health project, part of the Early Digital Twin Components initiative, aims to develop innovative Earth Observation (EO) data-driven algorithms. These AIR4health Risk Algorithms will predict human mortality and morbidity using Machine Learning (ML) and Artificial Intelligence (AI) models. The project will also create a prototype Digital Twin Component (DTC) for early warning of compound extreme climate and air quality events.
The main goal of AIR4health is to create two algorithms, called AIR4health Risk Algorithms, that can predict the risk of increased mortality and illness due to extreme climate and air quality events. These algorithms will use data from mainland Portugal and will help in developing a European system to monitor heatwaves, cold waves, and air quality. The AIR4health Risk Algorithms will consider the following: ● Use Case 1: Heat and Ozone: During a heatwave, high temperatures increase the rate of photochemical reactions in the atmosphere. These reactions involve pollutants emitted from vehicles, industrial processes, and other sources reacting with sunlight. One significant consequence is the formation of ground-level ozone (O3), a major component of smog. Ozone formation is enhanced during heatwaves due to increased emissions of precursor pollutants like nitrogen oxides (NOx) and volatile organic compounds (VOCs). These precursors undergo reactions facilitated by sunlight and heat, leading to the production of O3. Health effects of the concurrence of heatwaves and excessive O3 concentrations include respiratory (e.g., asthma, bronchitis, inflammation/irritation of airways, shortness of breath), cardiovascular (e.g., strokes, oxidative stress) and heatstroke issues. ● Use Case 2: Cold and Nitrogen Dioxide: During a coldwave, combustion processes, such as those in vehicles and heating systems, increase to meet the greater demand for warmth. This leads to higher emissions of nitrogen oxides (NOx), primarily nitrogen dioxide (NO2). Cold temperatures can enhance the stability of the atmosphere, trapping pollutants close to the ground and prolonging their presence. Additionally, calm wind conditions during a coldwave can further exacerbate air pollution by limiting the dispersion of pollutants.
Health effects of concurrent coldwaves and excess NO2 concentrations include respiratory (e.g., asthma, bronchitis, respiratory infections) and cardiovascular (e.g., heart attacks, arrhythmias) illnesses, as well as hypothermia. To develop the two AIR4health Risk Algorithms, the AIR4health consortium will use a highly detailed, two-decades-long healthcare database for mainland Portugal, which is already available to it. It will combine these data with Earth Observation (EO), modelled and in-situ data to create two AIR4health Use Cases of dose-response indicators for Compound Climate and Air Quality Extremes. Building on the currently operational country-level Ícaro warning system, the project will focus on creating a detailed daily time series of Compound Climate and Air Quality Extremes (heat- and cold-related, separately). This will be achieved by downscaling satellite data products (e.g., from the Sentinel-5P mission), using ancillary in-situ measurements from the European Environment Agency (EEA) as well as modelled data from the Copernicus Atmosphere Monitoring and Climate Change Services (CAMS, C3S). Machine Learning (ML) models, similar to those used for air temperature in Lisbon, will be employed for this downscaling. The 'dynamic' and 'continuous' aspects distinguish the AIR4health approach from the state of the art. AIR4health will introduce two novel algorithms to the Portuguese public health and weather sectors, demonstrating the effectiveness of EO data in enhancing the spatial level of detail and the ability to predict health outcomes from heatwaves, cold waves and poor air quality events. It will also transition from a non-spatial, single-time-series approach to a spatiotemporal one, down to the municipal level. Furthermore, results will be benchmarked against European-level data to pave the way towards broader adoption. 
This highlights the consortium’s commitment to using these cases in Portugal as a model for the international community on how to address Planetary Health and Climate Change Preparedness. On a larger scale, the European Union (EU) “Destination Earth” (DestinE) and European Green Deal initiatives will benefit from the AIR4health outcomes, as these stressors directly impact our communities, supporting the evolution towards a federation of locally specific services integrated into the European Digital Twin ecosystem.
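The downscaling step described above can be illustrated with a minimal sketch: a regression model learns the relationship between a coarse-gridded pollutant field, local covariates and in-situ station measurements, then predicts at fine-grid locations. All data, variable names and parameters below are synthetic illustrations, not the AIR4health implementation.

```python
# Sketch of ML downscaling: map coarse NO2 + local covariates to station
# values, then predict at fine-grid points. Synthetic data throughout.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n_stations = 200

# Synthetic predictors: coarse-cell NO2, elevation, urban fraction
coarse_no2 = rng.uniform(5, 40, n_stations)      # µg/m³ from a coarse grid
elevation = rng.uniform(0, 1000, n_stations)     # m above sea level
urban_frac = rng.uniform(0, 1, n_stations)       # fraction of urban cover

# Synthetic "truth": station NO2 depends on the coarse field + local effects
station_no2 = (coarse_no2 * (0.8 + 0.4 * urban_frac)
               - 0.005 * elevation
               + rng.normal(0, 1.5, n_stations))

X = np.column_stack([coarse_no2, elevation, urban_frac])
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, station_no2)

# Predict at two fine-grid points: urban lowland vs. rural highland
X_fine = np.array([[30.0, 50.0, 0.9],
                   [10.0, 800.0, 0.1]])
downscaled = model.predict(X_fine)
print(downscaled)
```

The same pattern extends to any coarse product (CAMS fields, Sentinel-5P columns) by swapping in real rasters and station tables for the synthetic arrays.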
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Remote Sensing of Mental Health: the Effects of Heat Stress on Mental Health in Switzerland

Authors: Ella Schubiger, Jennifer Susan Adams, Susanne Fischer, Maria J. Santos, Kathrin Naegeli
Affiliations: Department of Geography, University of Zurich, Department of Psychology, University of Zurich
Recent summers have been marked by record-breaking heat across many regions of the world. The Intergovernmental Panel on Climate Change (IPCC) reports that the increase in the number of hot days and nights, as well as the length, frequency, and intensity of warm spells or heatwaves over most land areas, is virtually certain. Furthermore, the frequency and severity of extreme heat events are very likely to rise nonlinearly with increasing global warming. While the physiological risks of heat stress on humans are well-known, its impacts on mental health are not yet widely recognised. Establishing connections between temperature observations and mental health datasets is therefore essential to improve our understanding and to develop strategies to address the challenges posed by climate change, rising global temperatures, and more frequent extreme events. Meteorological station data and gridded products provide valuable resources for examining heat stress occurrences at local and regional scales. Additionally, Earth Observation (EO) data on Land Surface Temperature (LST) enable spatial and temporal analyses of heat stress events across varying scales. Surveys assessing mental health problems and stress, alongside demographic and socioeconomic data, provide insight into the state and evolution of individuals’ mental health. Integrating these distinct datasets presents significant challenges, yet it holds the potential to shed light on the societal impacts of heat stress on mental health. In this contribution, we focus on available LST products based on EO data from, e.g., MODIS, Landsat or ECOSTRESS. Additionally, spatial climate analysis datasets from MeteoSwiss provide daily temperature, precipitation and other climate variables at a 1 × 1 km spatial resolution, spanning multiple decades. 
These datasets integrate observations from nearly 160 automatic weather stations, as well as radar and satellite data, to deliver a robust climate monitoring system. Data on mental health are obtained through the Swiss Health Surveys (SHS), conducted every five years and comprising seven survey rounds over a 30-year period. Each survey includes approximately 18,000 individuals aged 15 and older and increasingly adheres to the European Health Interview System (E-HIS) framework to ensure international comparability. This wealth of data serves as the foundation for our analysis. The contribution will present preliminary results on the two main objectives of this project. First, we investigate the link between mental health in Switzerland and extreme heat events (derived from EO and weather station data, and human heat stress indices such as the Heat Vulnerability Index) over the past 30 years. Second, we perform spatial analyses over the datasets and show that links between mental health and extreme events/heat stress also depend on urban-rural differences (attributed to urban heat island effects) and urban planning (e.g., access to green spaces). To our knowledge, such a combined temporal and spatial analysis has not been conducted in Switzerland before. This work aims to provide new insights into the potential of EO data to assess spatial and temporal patterns of mental health issues by integrating diverse data sources. An improved understanding of the impacts of environmental heat extremes on human mental health is urgently needed to develop adaptation and mitigation strategies for our societies facing a warming and increasingly extreme climate.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: From data to action: a machine learning model to support tick-borne encephalitis surveillance and prevention in Europe

Authors: Giovanni Marini, Francesca Dagostin, Diana Erazo, Giovanni Marini, Daniele Da Re, Valentina Tagliapietra, Maria Avdicova, Tatjana Avšič – Županc, Timothée Dub, Nahuel Fiorito, Nataša Knap, Céline M. Gossner, Jana Kerlik, Henna Mäkelä, Mateusz Markowicz, Roya Olyazadeh, Lukas Richter, William Wint, Maria Grazia Zuccali, Milda Žygutienė, Simon Dellicour, Annapaol Rizzoli
Affiliations: Research and Innovation Centre, Fondazione Edmund Mach, Spatial Epidemiology Lab, Université Libre de Bruxelles, Center for Agriculture Food Environment, University of Trento, Regional Authority of Public Health in Banská Bystrica, Institute of Microbiology and Immunology, Faculty of Medicine, University of Ljubljana, Department of Health Security, Finnish Institute for Health and Welfare, Unità Locale Socio Sanitaria Dolomiti, European Centre for Disease Prevention and Control (ECDC), Austrian Agency for Health and Food Safety, Environmental Research Group Oxford Ltd, c/o Dept Biology, Azienda Provinciale Servizi Sanitari, Dipartimento di prevenzione, National Public Health Center under the Ministry of Health
Background: Tick-borne encephalitis (TBE) is a severe zoonotic neurological infection caused by the TBE virus (a member of the Flaviviridae family) and is one of the most important tick-borne viral diseases in Europe and Asia. The infection is mostly acquired after a tick bite, but alimentary infection is also possible. Despite the availability of a vaccine, TBE incidence is increasing, with new foci of virus circulation appearing in newly endemic areas. The increase in TBE cases across Europe (from 2412 in 2012 to 3514 in 2022) has highlighted the need for predictive tools capable of identifying areas where human TBE infections are likely to occur. In response, this study presents a novel spatio-temporal modelling framework that provides annual predictions of the occurrence of human TBE infections across Europe, at both regional and municipal levels. Methods: We used data on confirmed and probable TBE cases provided by the European Surveillance System (TESSy, ECDC) to infer the distribution of human TBE cases at the regional (NUTS-3) level during the period 2017-2022. We trained the model on data from countries with sufficient reporting, i.e., those that provided the location of infection at the NUTS-3 level for at least 75% of cases notified during the selected period. To account for the natural hazard of viral circulation, we included variables related to temperature (derived from satellite images acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS) and supplied by NASA at a resolution of 5.6 km), precipitation (derived from the ECMWF ERA5-Land dataset at 30 arc-second resolution), land cover (extracted from the 2018 Corine Land Cover (CLC) inventory (class “3.1”) at a resolution of 0.25 × 0.25 km) and tick host presence (originally produced using random forest and boosted regression tree approaches). 
We also used indices of recorded human outdoor activity in forests (based on the OpenStreetMap database) and population density (obtained from WorldPop) as proxies for human exposure to tick bites. We identified the yearly probability of TBE occurrence using a spatio-temporal boosted regression tree modelling framework. Results: Our results highlight a statistically significant rising trend in the probability of human TBE infections not only in north-western but also in south-western European countries. Areas with the highest probability of human TBE infections are primarily located in central-eastern Europe, the Baltic states, and along the coastline of the Nordic countries up to the Bothnian Bay. Such areas are characterised by the presence of key tick host species, forested areas, intense human recreational activity in forests, steep drops in late-summer temperatures and high precipitation during the driest months. The model showed good predictive performance, with a mean AUC of 0.85, sensitivity of 0.82 and specificity of 0.80 at the regional level, and a mean AUC of 0.82, sensitivity of 0.80 and specificity of 0.69 at the municipal level. Discussion: With ongoing climate and land use changes, the burden of human TBE infections on European public health is likely to increase, as current trends already indicate. This underscores the need for predictive models that can help prioritise intervention efforts. Hence, the development of a modelling framework that predicts the probability of human TBE infections at the finest administrative scale from easily accessible covariates represents a step towards comprehensive TBE risk estimation in Europe.
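The performance metrics reported above (AUC, sensitivity, specificity) can be computed from any model's predicted probabilities and observed occurrence labels. The sketch below uses synthetic values and scikit-learn; it is not the authors' code.

```python
# Compute AUC, sensitivity and specificity from synthetic labels/scores.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true  = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 0])            # TBE occurrence
y_score = np.array([0.1, 0.6, 0.8, 0.45, 0.9, 0.2, 0.7, 0.3, 0.55, 0.35])

auc = roc_auc_score(y_true, y_score)          # threshold-free ranking skill
y_pred = (y_score >= 0.5).astype(int)         # binarise at 0.5
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                  # true positive rate
specificity = tn / (tn + fp)                  # true negative rate
print(auc, sensitivity, specificity)
```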
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Synergy of extreme weather and socio-economic factors in improved understanding and prediction of water associated diseases in India: A machine learning and Bayesian statistics approach

Authors: Ranith Rajamohananpillai, Farzana Harris, Dr Nandini Menon, Dr Anas Abdulaziz, Grinson George, Gemma Kulk, Dr Shubha Sathyendranath
Affiliations: Nansen Environmental Research Centre India, CSIR-National Institute of Oceanography, ICAR-Central Marine Fisheries Research Institute, Earth Observation Science and Applications, Plymouth Marine Laboratory, National Centre of Earth Observation, Plymouth Marine Laboratory
The increasing frequency and intensity of extreme weather events such as flooding, heavy precipitation and coastal inundation significantly affect public health, as they worsen the spread of water-associated diseases (cholera, leptospirosis and acute diarrhoeal diseases). Extreme weather events, when combined with socioeconomic factors, can create complex and spatially heterogeneous patterns of disease incidence that demand advanced analytical frameworks for effective prediction and mitigation. This study combines machine learning, Bayesian spatial statistics and advanced computational models to predict the relationships between extreme weather events, socioeconomic factors and the associated waterborne disease occurrences in India. Data used in this study were obtained from models, government open data portals and Earth observation. Weekly records of water-associated diseases over a period of 15 years (2009-2023) were taken from the Integrated Disease Surveillance Programme (IDSP). Daily precipitation was obtained from the Climate Hazards Center InfraRed Precipitation with Station (CHIRPS) data, version 2.0. The duration and frequency of flooding, as well as the area inundated, were estimated from the Global Flood Database and from Sentinel-1 SAR products. Socioeconomic data on population density, sanitation conditions and income levels were collected from NASA’s Socioeconomic Data and Applications Center (SEDAC). The vulnerability of a particular district or area was assessed using a multi-layered approach coupling Bayesian hierarchical models for spatial risk mapping with machine learning methods such as neural networks and random forests. Machine learning algorithms were also used to evaluate feature importance, optimise predictive accuracy and identify emerging hotspots at risk of increased water-associated diseases. The integrated model identified several disease hotspot clusters across India. 
In addition, the study helped to link climate-change-induced extreme weather events in one place with outbreaks of water-associated diseases in another, where the former is connected to the latter through water circulation. A noteworthy finding from this study is the inferred link between the high prevalence of cholera in the coastal districts of West Bengal and heavy rainfall in the upstream districts with flooding in the watershed. Similarly, in Punjab, a surge in cholera cases during 2016 appeared to be linked to flooding in Lahore and the dynamics of the Lahore-Punjab watershed. In both states, the affected regions were characterised by high rural population density, low income levels, poor sanitation and poor primary healthcare facilities. Preliminary results showed that this integrated framework effectively predicted disease vulnerability and hotspots with high accuracy (~80%), which could potentially support targeted public health interventions and resource allocation. Moreover, the Bayesian framework provided quantification of uncertainty within predictions, yielding robust, interpretable risk assessments important for policymaking and water-associated disease management. By integrating advanced computational techniques with regional weather and socioeconomic datasets, this study highlights the importance of interdisciplinary approaches in addressing climate-sensitive public health challenges.
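The machine learning side of such a framework, a classifier trained on weather and socioeconomic features with feature importances used to rank drivers, can be sketched as follows. The features, labels and parameters are synthetic stand-ins, not the study's data.

```python
# Random forest on synthetic weather/socioeconomic features; feature
# importances indicate which drivers dominate the synthetic outbreak label.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
rainfall   = rng.uniform(0, 300, n)     # mm/week
flood_days = rng.integers(0, 10, n)     # flooded days in the period
sanitation = rng.uniform(0, 1, n)       # 1 = full sanitation coverage
pop_dens   = rng.uniform(10, 5000, n)   # people/km² (pure noise here)

# Synthetic label: outbreaks more likely with rain, floods, poor sanitation
risk = 0.01 * rainfall + 0.3 * flood_days + 2.0 * (1 - sanitation)
outbreak = (risk + rng.normal(0, 0.5, n) > np.median(risk)).astype(int)

X = np.column_stack([rainfall, flood_days, sanitation, pop_dens])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, outbreak)

for name, imp in zip(["rainfall", "flood_days", "sanitation", "pop_dens"],
                     clf.feature_importances_):
    print(f"{name}: {imp:.3f}")
```

Because `pop_dens` is uncorrelated with the synthetic label, its importance comes out near zero, which is the kind of signal used to rank drivers in practice.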
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Analyzing Cholera Outbreaks: Dynamics, Risks, and Response Measures

Authors: Aswin Sachidanandan, Dr Neelam Taneja, Shubha Sathyendranath, Dhritiraj Sengupta
Affiliations: Post Graduate Institute of Medical Education and Research, Plymouth Marine Laboratory (PML), National Centre for Earth Observation, Plymouth Marine Laboratory
Cholera is a contagious illness caused by Vibrio cholerae and spread through the faecal-oral route. The primary vehicles of V. cholerae infection in humans are contaminated water and food, and cholera toxin has been identified as a factor in its virulence. The spread of the disease is facilitated by overcrowding, insufficient sanitation, poor hygiene and the absence of safe drinking water. As a result, it remains a significant health concern in developing nations, where it is endemic and can also lead to outbreaks. In this study we investigated outbreaks of cholera in Kumbra village of Sector 46, Mohali, from 2nd July to 3rd August 2024 and in Morinda, Rupnagar, Punjab, India, from 19th to 23rd August 2024. The total number of persons hospitalised with acute watery diarrhoea was 89 for Kumbra village and 26 for Morinda. We obtained data from local hospitals and the concerned health authorities, and also by implementing a citizen science programme. Water samples were collected from suspected contaminated water sources. We observed that the areas were overcrowded and lacked a proper water supply. Water samples were filtered and tested for coliforms and V. cholerae. Strains were identified by MALDI-TOF, RT-PCR and serotyping. A total nutrient analysis was done for both locations. Cultures were positive in all 12 stool samples from suspected patients. Affected locations were visited to collect information on water sources, sanitation, and leakage or breaks in latrines/sewage pipelines; pictures were taken and results were recorded separately. Of the 14 water samples collected from these houses, 60% had a brown colour and bad odour. 
From the citizen science data, we observed that 70% of the houses used municipal pipeline water for drinking and basic household activities, while the remaining 30% depended on other sources such as their own wells, handpumps and tanker water supply. According to citizens, the group most affected by diarrhoea was women between the ages of 19 and 60. In the total nutrient analysis for nitrite, silicate, phosphate and sulphate, all 14 samples collected from Kumbra showed high levels of silicate (246.56 µM), followed by sulphate (221.62 µM), nitrite (18.89 µM) and phosphate (1.96 µM), whereas water samples from Morinda had lower concentrations (less than 20 µM) of all the above nutrients. RT-PCR was positive for V. cholerae in water samples collected from both regions. Cholera remains a significant problem in communities which lack adequate sanitation and proper access to drinking water facilities, or where municipal pipelines or private wells exist but are not properly installed. The citizen science survey also supported the observation that these houses lack proper sanitation. We plan to study the effect of nutrients on the growth of V. cholerae and the effect of various climatic factors on the seasonality of cholera by remote sensing. Keywords: Cholera, North India, Environmental Vibrio, Acute Diarrhoeal Disease, Epidemics
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Spatial and temporal detection of gold panning sites by remote sensing

Authors: Poullo Baidy Ba, Paul Passy, Emilie Lavie, Laurent Bruckmann
Affiliations: Phd Student, Senior Lecturer, Senior Lecturer with habilitation, Research Scientist
Gold mining, particularly artisanal gold mining, has been practiced for several decades in West Africa. Senegal and Mali are among the countries that have experienced significant gold rushes, particularly during the droughts of the 1970s and 1980s. In Mali, gold panning dates back to the 13th century and developed under the Manding Empire (Boukaré, n.d.). Today, Mali is Africa's third-largest gold producer, after South Africa and Ghana, with annual production estimated at 60 tonnes in 2018 (Boukaré, n.d.) and 65 tonnes in 2023. Mining activities are particularly concentrated in the gold-rich regions of Kayes, Koulikoro and Sikasso. In Senegal, gold panning is mainly carried out in the southern regions, notably in Kédougou, and extends towards Bakel along the main tributary of the Senegal River. This activity involves local, foreign and multinational gold miners. Both Mali and Senegal face numerous challenges associated with mining, particularly artisanal, semi-mechanized and industrial operations. Gold-panning sites are often located along the banks of the Faleme River, the main tributary of the Senegal River. The Faleme River is a highly coveted resource with diverse uses: agriculture, fishing, livestock, domestic needs, energy production and, of course, gold mining. An estimated 387,895 people, or 55,414 households, depend directly or indirectly on the income generated by its waters and sub-tributaries ("Appel-de-Keniéba.pdf", n.d.). In response to these challenges, remote sensing offers a promising solution for mapping and monitoring the evolution of both legal and illegal gold-panning sites within this watershed. High-resolution satellite imagery, such as Sentinel-2, makes it possible to monitor the spatial and temporal evolution of these sites. Remote sensing provides a comprehensive view of site conditions using vegetation, water and bare-soil indices, combined with machine learning techniques. 
This approach reduces the cost and time required for exhaustive mapping while enabling dynamic analysis of the affected areas. The aim of our study is to demonstrate the feasibility of using remote sensing imagery to accurately map gold panning areas, and to illustrate the evolution of these areas over time and space within the transboundary watershed of the Faleme River. A gold-panning site is often characterized by the presence of turbid water bodies and bare ground, so water and bare-soil indices are particularly useful for detecting such sites. By analyzing Sentinel-2 images, we can track the temporal and spatial distribution of these areas along the Faleme River, observing changes in these indices over time. This method provides a detailed overview of the extent of gold panning, its evolution and its environmental impacts.
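The water and bare-soil indices mentioned above can be computed directly from Sentinel-2 band reflectances. The sketch below uses McFeeters' NDWI (green vs. NIR) and, as an assumed illustration, a simple normalized SWIR/NIR contrast for bare ground; the reflectance values are synthetic, with Sentinel-2 bands B3 (green), B8 (NIR) and B11 (SWIR) in mind.

```python
# Spectral indices for detecting turbid water and bare ground, computed
# on tiny synthetic reflectance patches with NumPy.
import numpy as np

def ndwi(green, nir):
    """McFeeters NDWI: positive over open water."""
    return (green - nir) / (green + nir)

def bare_soil_contrast(swir, nir):
    """Assumed normalized SWIR/NIR contrast, high over bare ground."""
    return (swir - nir) / (swir + nir)

# Synthetic 2x2 patches: turbid pond in column 0, bare mining pit in column 1
green = np.array([[0.10, 0.08], [0.11, 0.09]])
nir   = np.array([[0.04, 0.30], [0.05, 0.28]])
swir  = np.array([[0.02, 0.40], [0.03, 0.38]])

water_mask = ndwi(green, nir) > 0           # flags the water pixels
bare_mask  = bare_soil_contrast(swir, nir) > 0
print(water_mask)
print(bare_mask)
```

On real Sentinel-2 L2A rasters the same functions apply band-by-band; thresholds would be tuned per scene rather than fixed at zero.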
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: From Contamination to Clarity: An Assessment of Water Quality and Public Health Risks in Lake Vembanad, India

Authors: Ms Ancy C Stoy, Dr. Grinson George, Dr Nandini Menon N, Dr Ranith R, Dr Jasmin C, Dr Anas Abdulaziz, Dr Shubha
Affiliations:
The availability of clean water and sanitation for all has been identified by the United Nations General Assembly as one of the Sustainable Development Goals (SDG 6) to be achieved by 2030. Clean and clear water reflects the health of aquatic ecosystems and their interconnected impact on human well-being. Climate change has redefined the perspective on water transparency: Secchi depth (ZSD), previously regarded solely as an optical property of water, is now recognised as a critical ecological indicator, serving as a simple and reliable proxy for water transparency and the overall water quality of aquatic systems. This study investigates the relationship of Secchi depth and the Forel-Ule (FU) colour index with the prevalence of pathogens causing waterborne diseases in Vembanad Lake, Kerala, India. Since 2019, in-situ measurements of Secchi depth have been collected across multiple sites in the lake using a 3D-printed Mini Secchi Disk fitted with an FU scale to assess variations in water colour and clarity. This ongoing data collection, continuing through 2024, is part of a citizen science programme in which local people and university students are involved in water quality monitoring. Preliminary analyses revealed significant spatial and temporal variations in ZSD, suggesting potential links to pollution, algal blooms, sewage discharge, resuspension of sediments and churning of the water column. ZSD measurements as low as 0.01 m and as high as 4.4 m have been recorded. Likewise, FU values ranged from 12 to 18 across Vembanad Lake, with occasional high values of 21. These extreme ZSD and FU values indicate fluctuations beyond the normal seasonal variations in water clarity and colour, highlighting potential shifts in water quality over time. The mixing of septic sewage with natural waters is a common occurrence in Vembanad Lake. 
Our microbiological studies have estimated the abundance of pathogenic bacteria such as Vibrio cholerae, Leptospira and Escherichia coli in Vembanad Lake year round. Reduced water clarity, indicated by a lower Secchi depth, together with high FU values, can be considered an indication of water contamination during periods of septic sewage mixing and extreme climate conditions, pointing to the potential health risks posed by pathogens such as Escherichia coli. The abundance of Vibrio spp. was also significantly higher during the southwest monsoon of 2018, coinciding with a once-in-a-century flood event in the lake. The study underscores the importance of regular water quality monitoring. Secchi depths and FU values are easier for non-scientists and managers to understand than sophisticated water quality measurements, supporting those who routinely monitor an aquatic area and make decisions to promote its aesthetic value and its public health and economic importance, by limiting public exposure to contaminated waters and reducing the risk of disease outbreaks.
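As an illustration only (the thresholds are assumed for demonstration, not the study's criteria), a simple rule combining low Secchi depth with a high FU index could flag samples as potential contamination signals:

```python
# Hypothetical screening rule: flag a sample when clarity is low (small
# Secchi depth) AND the Forel-Ule colour index is high. Thresholds assumed.
def contamination_flag(zsd_m, fu_index, zsd_threshold=0.5, fu_threshold=18):
    """Return True when both clarity and colour suggest contamination."""
    return zsd_m < zsd_threshold and fu_index > fu_threshold

# (Secchi depth in m, FU index) pairs spanning the ranges reported above
samples = [(0.01, 21), (4.4, 12), (0.3, 19), (1.2, 21)]
flags = [contamination_flag(z, f) for z, f in samples]
print(flags)  # [True, False, True, False]
```

In an operational citizen-science workflow, such flags would only trigger follow-up microbiological sampling, not replace it.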
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Operational surveillance of environmental factors associated with Dengue transmission at country level in Argentina: Can a few parameters alert about dengue outbreaks?

Authors: PhD Elisabet Benitez, MsD. Pablo Zadder, Lic. Julieta Motter, Ximena Porcasi
Affiliations: CONAE, Instituto Gulich
In Argentina, a Dengue environmental risk model has provided information at the country level for 10 years through the CONAE geoserver website. It works on daily temperature data from LST (MODIS) to estimate the possible cycles of Aedes aegypti development and subsequent mosquito infection with the Dengue virus. The model is based on simple arguments about immature-stage development, hatching and possible female infection, considering the incubation cycles of the virus (Porcasi et al., 2012). The output can be considered the environmental threat component of Dengue transmission risk. It is updated annually with the previous year's temperature observations for localities with more than 5,000 inhabitants. Here we analysed the changes over the last 3 years alongside the worst registered Dengue case numbers in the country. For 2022, the model recorded a lower number of complete cycles, with a maximum of 32 and an average of 13; in 2023 it recorded values higher than 33 complete cycles in some localities, with an average of 14.7. These increases are not homogeneous across the country's localities: 67.4% of localities increased their environmental threat by one or more cycles with respect to the previous year, and only 11.6% decreased it. Depending on the temperature, each cycle can be considered an exposure of approximately 13 days (13 more days per year than the previous year), reaching up to 3 additional cycles (45 more days per year) in some places. Consequently, since 2009, Argentina has shown intermittent and increasing outbreaks of the disease covering more and more localities/cities and more cases: specifically, the incidence went from 2 cases per 100,000 inhabitants in 2022 to 123 cases per 100,000 inhabitants in 2023, while by October 2024 the incidence had already accumulated 1262 cases per 100,000 inhabitants. Spatially, during 2022 cases were reported in 35 departments, while in 2023 there were 246 administrative units reporting cases. 
The south-western geographic expansion of notifications coincides with the increase in the number of possible cycles in the same direction in the environmental threat maps. Here we show the association of the latest country-level Dengue outbreaks with a simple model based on the influence of temperature (LST) on mosquito and virus cycling. This demonstrates the usefulness of EO-derived products at national and regional levels for surveillance of Dengue and other arboviral diseases. In addition, improved, alternative EO products from this system are proposed.
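A highly simplified sketch of the temperature-driven cycle counting idea (not the Porcasi et al. model itself): accumulate daily thermal time above a development threshold and count one completed cycle per fixed amount. The base temperature and degree-day total below are illustrative assumptions, and the temperature series is synthetic.

```python
# Count completed development cycles from a daily temperature series by
# accumulating degree-days above an assumed development threshold.
import numpy as np

def count_cycles(daily_temp_c, base_temp=18.0, degree_days_per_cycle=150.0):
    """Completed cycles implied by daily mean temperatures (°C)."""
    dd = np.clip(np.asarray(daily_temp_c) - base_temp, 0, None)  # daily degree-days
    return int(dd.sum() // degree_days_per_cycle)

# Synthetic year: Southern-Hemisphere phasing, summer peak ~30 °C on day 0
days = np.arange(365)
temps = 22 + 8 * np.cos(2 * np.pi * days / 365)

print(count_cycles(temps))
```

Warming the synthetic series by 1 °C adds roughly one extra cycle per year, which mirrors the year-on-year increases described in the abstract.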
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: ENgaging Researchers and coastal population In Communicating ocean’s role on human Health (ENRICH)

Authors: Dr Roshin P. Raj, Dr Ranith Rajamohanan Pillai, Dr Pratheesh C Mammen, Mr Ramesh Krishnan, Dr Nandini Menon, Mr Lasse Pettersson
Affiliations: Nansen Environmental and Remote Sensing Center, Bjerknes Center for Climate Research, Nansen Environmental Research Centre (India), Foundation for Development Action
The coastal areas of developing and under-developed countries with high population density are the most vulnerable to climate change, even though climate-related changes in the ocean and their impacts are global. Climate-related changes in the ocean, such as sea level rise and tidal flooding, drive these coastal communities to live in unhygienic and unsanitary conditions, making them vulnerable to water-associated diseases. Another indirect influence of climate-related changes in the ocean on human health is the increased occurrence of Harmful Algal Blooms (HABs). Cyanobacterial HABs, in addition to their adverse impact on fisheries and thus on the economic situation of coastal communities, also have an established affinity to Vibrio cholerae, thereby increasing the vulnerability of the region to water-borne diseases. Making sustainable changes to the socio-economic status and health conditions of vulnerable coastal populations demands bringing together environmental and social science as well as public health organisations to scan the horizon for emerging climate-associated disease threats and their impacts on the local population. In addition to research-based initiatives, there is also a pressing need to improve methods to engage with vulnerable communities and to communicate and disseminate research-based knowledge among them, which in turn is expected to improve awareness, enhance preparedness and reduce health issues (water-associated diseases) due to climate-related changes in the ocean and coastal regions. ENRICH is an Indo-Norwegian transdisciplinary project funded by the Research Council of Norway that aims to engage researchers and the coastal population to effectively communicate and disseminate research-based knowledge on climate-related changes in the ocean and their implications for human health vulnerability. 
ENRICH focusses on the state of Kerala, the most densely populated coastal zone in South India (coastline length: 593 km; population: 36 million). ENRICH will use methodologies and digital tools such as surveys, data collection and analysis, and awareness-raising and capacity building of the targeted population towards the development of a sustainable citizen science programme, and will disseminate knowledge and scientific information to local communities and stakeholders through digital tools such as web/mobile applications to improve community engagement. ENRICH will invest in the next generation by providing educational activities for school children, specific courses for school teachers, and training courses for college lecturers and undergraduate and postgraduate students. By addressing the vulnerable coastal population and investing in the next generation, ENRICH directly addresses the stated goal of the UN Decade of Ocean Science for Sustainable Development (2021-2030) to provide a greater understanding of the importance of the ocean for all segments of the population. ENRICH activities will indirectly address 5 of the 17 UN Sustainable Development Goals.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A.08.12 - POSTER - Advances and applications of sea surface temperature and the Group for High Resolution Sea Surface Temperature

Sea surface temperature (SST) is a fundamental physical variable for understanding, quantifying and predicting complex interactions between the ocean and the atmosphere. SST measurements have been performed operationally from satellites since the early 1980s and benefit a wide spectrum of applications, including ocean, weather, climate and seasonal monitoring/forecasting, military defense operations, validation of atmospheric models, sea turtle tracking, evaluation of coral bleaching, tourism, and commercial fisheries management. International science and operational activities are coordinated within the Group for High Resolution Sea Surface Temperature (GHRSST) and the CEOS SST Virtual Constellation (CEOS SST-VC) to provide daily global SST maps for operational systems, climate modeling, and scientific research. GHRSST promotes the development of new products and the application of satellites for monitoring SST by enabling SST data producers, users and scientists to collaborate within an agreed framework of best practices.

New satellites with a surface temperature observing capability are currently being planned for launch and operation by ESA and EUMETSAT, such as CIMR, Sentinel-3C/D, and Sentinel-3 Next Generation Optical. In addition, new ultra-high-resolution missions such as TRISHNA and LSTM are in planning. These satellite missions will continue the provision of high-quality SST observations and open up opportunities for further applications. However, this will also require new developments and innovations in retrievals, validation and related areas. It is therefore important that developments in high-resolution SST products are presented and coordinated with the ongoing international SST activities. Research and development continue to tackle problems such as instrument calibration, algorithm development, diurnal variability, derivation of high-quality skin and depth temperature, the relation with sea-ice surface temperature (IST) in the marginal ice zone, and areas of specific interest such as the high latitudes and coastal areas.

This session is dedicated to the presentation of applications and advances in SST and IST observations from satellites, including the calibration and validation of existing L2, L3 and L4 SST products in the GHRSST Data Specification (GDS) and preparation activities for future missions. We also invite submissions on investigations into the harmonization and combination of products from multiple satellite missions.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: SST and Combined SST/IST Products Overview: The Danish Meteorological Institute's Contribution to Copernicus Marine and Climate Change Services

Authors: Ioanna Karagali, Pia Englyst, Ida Lundtorp Olsen, Guisella Gacitúa, Alexander Hayward, Wiebke Kolbe, Jacob Høyer
Affiliations: DMI
The Copernicus Marine Service (CMS) and Copernicus Climate Change Service (C3S) are responsible for complementary reprocessing activities using satellite ocean observations. CMS encompasses reprocessing at global and regional scales of all satellite observations, including all observations available at a given time (reprocessing of Essential Ocean Variables, EOVs). C3S fosters climate reprocessing, typically at global scale, with special focus on the most accurate observations and homogeneous time series (reprocessing of Essential Climate Variables, ECVs). The Danish Meteorological Institute (DMI) serves as a Production Unit (PU) for the Sea Surface Temperature (SST) and Sea Ice (SI) Thematic Assembly Centers (TAC) of CMS and the SST ECV of C3S. Within both frameworks, a suite of GHRSST-compliant L3S and L4 SST and combined SST/IST products for the Baltic and North Sea (CMS), Pan-Arctic (CMS) and Global Ocean (C3S) is produced. At the end of 2024, the new C3S SST/IST global L4 Climate Data Record (1982-2024) was released, providing a unique opportunity for the assessment of temperature changes over the global ocean, including regions with sea-ice cover. In early 2025, a reprocessed version of the Baltic Sea and North Sea Reanalysis product (1982-2024) will be released using the latest version of the ESA SST_cci L2P data as input. The aim of this presentation is to provide an overview of the existing and new products and their quality, a summary of the improvements implemented during the period 2022-2024 and those foreseen for the period 2025-2028.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Evaluation of NOAA ACSPO SST Products against Independent Saildrone Data

Authors: Dr. Irina Gladkova, Dr. Olafur Jonasson, Dr. Veronica Lance, Dr. Yuri Kihai
Affiliations: City College Of New York, National Oceanic and Atmospheric Administration, Global Science and Technology Inc.
Information about satellite-derived sea surface temperature (SST), including its diurnal cycle and derived ocean thermal fronts, is important for a wide variety of users, studies and applications. To address these users’ needs, NOAA has developed an enterprise SST system, the Advanced Clear Sky Processor for Ocean (ACSPO). Currently, ACSPO processes data from multiple high-resolution (~1 km) infrared satellite sensors flown in low Earth (LEO: JPSS VIIRS, EOS MODIS, Metop AVHRR FRAC) and geostationary (GEO: GOES-R ABI, Japan Himawari, and European Meteosat) orbits. ACSPO produces a wide range of L2P (swath) and 0.02° (~2 km) gridded Level 3 products, including uncollated (L3U), collated (L3C), and super-collated (L3S). All ACSPO data follow Group for High Resolution SST (GHRSST) guidance and standards and are available to users in GHRSST Data Specification version 2 (GDS) NetCDF format via various services (NOAA STAR CoastWatch https://coastwatch.noaa.gov/cwn/index.html, OSPO Product and Data Access, EUMETSAT EumetCast) and archives (NOAA NCEI https://www.ncei.noaa.gov/, NASA PO.DAAC https://podaac.jpl.nasa.gov/). Two features of the ACSPO products are relevant to this study: 1) Resolving the SST diurnal cycle. All ACSPO GEO products are reported hourly, 24 Full Disks per day for each processed GEO satellite. The LEO satellites report SST at least twice a day, during the daytime and nighttime overpasses. There are two types of LEO platforms. Following the NOAA-EUMETSAT interagency agreement, NOAA launches its POES/JPSS LEO satellites in the afternoon “PM” orbit around ~1:30am/pm, whereas EUMETSAT operates its Metop satellites in a mid-morning “AM” orbit at ~9:30am/pm. In principle, a combination of the day and night (D/N) data in the AM/PM orbits would contain some information about the diurnal cycle.
This consideration motivated the ACSPO L3S-LEO design, which first combines the LEO data into four separate L3S-LEO-AM/PM-D/N files, and only then aggregates them into one daily L3S-LEO-DY file. However, unlike the GEO processing, where one regression equation, trained against in situ data, is applied to all data of one satellite, the LEO regressions for day and night are different (the daytime regression uses only longwave IR bands, whereas the nighttime regression additionally employs the 3.7 µm band) and are trained against in situ SSTs independently. 2) Reporting thermal fronts. Along with SST, two additional variables are included in all ACSPO files (LEO/GEO, L2P/L3U/L3C/L3S): a binary mask indicating the presence of a thermal front, and its intensity in K/km. ACSPO products undergo extensive validation against in situ SST measurements. The conventional in situ data cover the ocean near-globally and span various time scales, from over a century for ships to only several decades for drifting and moored buoys (~1980 onward) and Argo floats (late 1990s onward). These observations have been incorporated in the NOAA in situ SST Quality Monitor system (iQuam; https://www.star.nesdis.noaa.gov/socd/sst/iquam/) for the satellite era from 1981 onward. iQuam ingests data from multiple NOAA, national and international data centers and archives, performs consistent quality control, and serves the data to NOAA, national and international users online, in near-real time. The ACSPO team uses iQuam for a variety of purposes, including validation of ACSPO SST products, which is reported in another NOAA online system, the SST Quality Monitor (SQUAM; https://www.star.nesdis.noaa.gov/socd/sst/squam/). Note that no specific validation is provisioned in SQUAM for the diurnal cycle or fronts. As a result, their quantitative assessment remains limited to off-line ad hoc analyses and initial semi-qualitative estimates.
Frontal locations are visualized in the NOAA ACSPO Regional Monitor for SST (ARMS; https://www.star.nesdis.noaa.gov/socd/sst/arms/) online system to facilitate routine, qualitative evaluation of their performance. More recently, in the mid-2010s, a new type of in situ data from automated unmanned vehicles, Saildrones, was introduced (Gentemann et al., 2020). Saildrones sample the ocean surface every minute and provide multiple measurements of the ocean surface, including the skin SST (from an IR instrument) and the bulk SST (from a CTD sensor at 0.6 m below the sea surface). Initial analysis of the data suggests that Saildrone SSTs are of sufficient accuracy to allow validation of satellite SSTs and gradients, and although the coverage remains limited to selected regions of the ocean, independent evaluation of satellite SSTs is possible (e.g., Koutantou et al., 2023). At the time of writing, we have compared two ACSPO gridded SST products (L3C GOES-18 and L3S-LEO) with Saildrone data from the Tropical Pacific Observing System (TPOS) 2023 mission, using data from https://www.pmel.noaa.gov/ocs/saildrone/data-access, with a focus on the diurnal cycle and thermal front information. The gridded (0.02°, ~2 km) Level 3 products were selected for the initial analysis because they provide a convenient regular latitude/longitude grid for matching the Saildrone geo-locations with satellite observations. The selected gridded GEO collated (GOES-18 L3C) and LEO super-collated (L3S-LEO) products have larger clear-sky coverage and less noise than individual satellite observations, and have the potential to resolve the diurnal cycle and the movement of ocean features. The GEO L3C has an hourly temporal resolution, and the four L3S-LEO files per day capture the evolution of SSTs retrieved from LEO platforms at approximately 9:30am/pm and 1:30am/pm.
The 1-minute temporal resolution of the Saildrones allows nearly instantaneous matching with the satellite time record. Our current analyses suggest that the diurnal signal in GOES-18 and Saildrone SSTs is largely consistent in shape and amplitude. The four diurnal points in the L3S-LEO-AM/PM-D/N show less agreement with Saildrone SST. This is expected, as the D/N regression equations are different and are trained against in situ data independently. More analyses are needed to better quantify and understand the ACSPO/Saildrone consistency. Thermal fronts occur in only a small fraction of the data, and finding a statistically representative match-up dataset for comparison has proved challenging to date. The traditional drifting and moored buoys, which are routinely ingested by iQuam from multiple NOAA, national and international data centers and archives, will also be included in the analyses. Although traditional buoys and drifters measure SST at a depth below the surface, rather than skin or sub-skin temperature, they have significantly wider spatial coverage. In combination with the Saildrones’ capability of measuring SST at different levels, conventional in situ SSTs provide a good foundation for statistical analyses and additional consistency checks. More details and further results will be presented at the Symposium.
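The matching of 1-minute Saildrone records to the 0.02° gridded products described above can be sketched as follows (an illustrative computation only, not the ACSPO implementation; the function name and the assumed grid origin and rounding conventions are ours):

```python
def grid_index(lat, lon, res=0.02):
    """Map an observation to the nearest cell of a regular res-degree
    latitude/longitude grid (assumed origin at 90N, 180W)."""
    i = int(round((90.0 - lat) / res))   # row index, counted from the north
    j = int(round((lon + 180.0) / res))  # column index, counted from the west
    return i, j

# A 1-minute Saildrone record is matched to a grid cell, and its timestamp
# is rounded to the nearest hour to select the corresponding GEO L3C file.
i, j = grid_index(14.237, -125.481)
```

The same indexing applies to both the hourly GEO L3C files and the four daily L3S-LEO files; only the choice of which file to open differs.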
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Preliminary Assessment of the Copernicus Imaging Microwave Radiometer (CIMR) Impact on Mediterranean Sea Surface Temperature L4 Analyses

Authors: Mr. Mattia Sabatini, Andrea Pisano, Claudia Fanelli, Bruno Buongiorno Nardelli, Dr. Gian Luigi Liberti, Dr. Rosalia Santoleri, Daniele Ciani
Affiliations: CNR-ISMAR, CNR-ISMAR, University of Naples "Parthenope"
Regular and continuous satellite-based mapping of sea surface temperature (SST) is essential for developing global and regional SST datasets, which support both near-real-time operational applications and the creation of long-term climate time series. SST is a crucial variable for studying ocean dynamics, ocean-atmosphere interactions, and climate variability, serving as a key indicator for tracking global warming and assessing the health of marine ecosystems. Currently, spaceborne infrared radiometers provide highly accurate, high-resolution SST measurements, from 1 km down to 100 m. However, they are limited by their inability to penetrate cloud cover. In contrast, microwave radiometers offer near all-weather observation capabilities but typically operate at lower spatial resolutions due to instrumental constraints. The upcoming Copernicus Imaging Microwave Radiometer (CIMR) mission marks a significant advancement in microwave SST remote sensing, promising a spatial resolution of up to 15 km, a substantial improvement over existing passive microwave systems. Our preliminary study explores the potential impact of the CIMR mission on Mediterranean Sea SST products, which are currently produced and distributed by the European Copernicus Marine Service using only infrared SST data. Through an Observing System Simulation Experiment (OSSE), we evaluate the effect of integrating synthetic CIMR observations into the existing Copernicus Mediterranean SST analysis system, comparing results with and without CIMR data. The findings reveal that incorporating CIMR observations reduces the uncertainty of the Mediterranean SST product, as measured by the root mean square difference (RMSD), by 26%. Additionally, CIMR improves the reconstruction of Level-4 fields and demonstrates that its enhanced spatial resolution enables effective microwave SST retrieval even in semi-enclosed basins like the Mediterranean Sea.
Ongoing studies leveraging oceanographic satellite data focus on integrating AMSR-2 passive microwave (PMW) observations into the Mediterranean Sea L4 processing, serving as an additional preparatory study for incorporating CIMR PMW SSTs in the future. Although the AMSR-2 footprint and the resolution of the operational observations are not ideal for applications in semi-enclosed basins and coastal areas, this approach can be valuable offshore, especially under cloud cover conditions typically associated with large-scale low-pressure disturbances.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Exploring new cloud detection algorithms for remote sensing SST observations using a data-driven approach and the multifractal theory of turbulence

Authors: Aina Garcia-Espriu, Dr. Cristina González-Haro, Dr. Jordi Isern-Fontanet, Dr. Antonio Turiel
Affiliations: Institute Of Marine Sciences (ICM, CSIC)
Cloud detection is crucial for Sea Surface Temperature (SST) measurements from satellite data. If clouds are present but not properly detected and masked, the measurements can contain the much colder cloud-top temperature rather than the actual sea surface temperature, affecting statistics and larger-scale analyses. The presence of clouds can cause a cold bias in SST measurements since cloud tops are generally much colder than the ocean surface. Conversely, if the cloud-detection algorithm is based on local SST differences and masks strong cold gradients, SST measurements in upwelling regions may be masked out. This can lead to a systematic underestimation of SST values or a decrease in coverage in affected regions. Current cloud mask algorithms still present some limitations in detecting pixels with only partial cloud coverage. This leads to the temperature measurement being an incorrect mixture of cloud-top and sea surface temperatures, creating unreliable data points that accurately represent neither the cloud nor the sea temperature. Inaccurate SST measurements can mask or artificially create temperature trends, potentially leading to incorrect conclusions about climate change patterns or ocean warming rates. They also have an impact on ocean dynamics studies, as incorrect cloud masking can lead to misinterpretation of circulation patterns. Finally, they have a high impact on weather forecasting, as incorrect values due to cloud contamination can degrade forecast accuracy for coastal and marine weather predictions. Here, we present a new machine-learning approach to improve cloud detection algorithms for SST remote sensing observations. This approach is based on the Microcanonical Multifractal Formalism of turbulence (Turiel et al. 2008). The characterization of remotely sensed ocean variables by means of the Multifractal Formalism has been established for some 20 years (Lovejoy et al. 2001; Turiel et al. 2005).
Owing to ocean turbulence, singularity analysis allows information about marine hydrography to be retrieved from sea surface temperature or other remotely sensed variables (Turiel et al. 2005). This has mainly been used to develop data fusion techniques, and also for the characterization of ocean currents (Umbert et al. 2014; Olmedo et al. 2016). In recent years, we have gained a deeper understanding of the connection between intermittency and dissipation in ocean turbulence (Isern-Fontanet and Turiel 2021). The characteristics of the energy cascade determine the functional dependency of the singularity spectrum and, thus, the geometrical properties of the flow (fractal dimensions). These geometrical properties should be preserved whether we are analyzing brightness temperature (TB) or SST observations. However, we observe a different dynamic response between different kinds of clouds, ocean, and land. By combining the TB and the SST along with their associated singularity exponents, we can train machine learning algorithms to obtain a more accurate cloud mask. Preliminary results using a clustering approach with Sentinel-3 L2P data show a more accurate segmentation between cloud and ocean pixels.
References: Isern-Fontanet, J., & Turiel, A. (2021). On the connection between intermittency and dissipation in ocean turbulence: A multifractal approach. Journal of Physical Oceanography, 51(8), 2639-2653. https://doi.org/10.1175/JPO-D-20-0256.1. Lovejoy, S., Currie, W., Tessier, Y., Claereboudt, M., Bourget, E., Roff, J., & Schertzer, E. (2001). Universal multifractals and ocean patchiness: phytoplankton, physical fields and coastal heterogeneity. Journal of Plankton Research, 23(2), 117-141. Olmedo, E., et al. (2016). Improving time and space resolution of SMOS salinity maps using multifractal fusion. Remote Sensing of Environment, 180, 246-263. Turiel, A., Isern-Fontanet, J., García-Ladona, E., & Font, J. (2005). A multifractal method for the instantaneous evaluation of the stream function in geophysical flows. Physical Review Letters, 95, 104502. https://doi.org/10.1103/PhysRevLett.95.104502. Turiel, A., Yahia, H., & Pérez-Vicente, C. J. (2008). Microcanonical multifractal formalism: a geometrical approach to multifractal systems. Part I. Singularity analysis. Journal of Physics A: Mathematical and Theoretical, 41(1), 015501. Umbert, M., et al. (2014). Multifractal synergy among ocean scalars: applications to the blending of remote sensing data. Remote Sensing of Environment, 146, 188-200.
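The clustering step described in the abstract can be sketched minimally as follows (a plain-NumPy two-means on synthetic TB/SST features; this is not the authors' algorithm, and the singularity-exponent features are omitted for brevity):

```python
import numpy as np

def two_means(features, iters=20):
    """Tiny 2-cluster k-means separating 'cloud' from 'ocean' pixels in a
    per-pixel feature space; a sketch only, not the authors' method."""
    feats = features.reshape(-1, features.shape[-1]).astype(float)
    centres = feats[[0, -1]].copy()  # crude initialisation from two extremes
    for _ in range(iters):
        d = np.linalg.norm(feats[:, None, :] - centres[None], axis=-1)
        labels = d.argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                centres[k] = feats[labels == k].mean(axis=0)
    return labels.reshape(features.shape[:-1])

# Per-pixel features: brightness temperature (TB) and SST; singularity
# exponents would enter as additional feature columns in the same way.
tb = np.r_[np.full(50, 220.0), np.full(50, 285.0)]   # cold cloud tops vs ocean
sst = np.r_[np.full(50, 215.0), np.full(50, 290.0)]
labels = two_means(np.stack([tb, sst], axis=-1))
```

With real Sentinel-3 L2P inputs the feature space is higher-dimensional and the cluster count larger, but the assignment/update loop is the same.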
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A 45-Year Sea Surface Temperature Climate Data Record From the ESA Climate Change Initiative

Authors: Owen Embury, Christopher Merchant, Simon Good, Jacob Høyer, Nick Rayner, Thomas Block, PhD Sarah Connors
Affiliations: University Of Reading, National Centre for Earth Observation, Met Office, Danish Meteorological institute, Brockmann consult GmbH, European Space Agency
Understanding the state of the climate requires long-term, stable observational records of Essential Climate Variables (ECVs) such as sea surface temperature (SST). ESA’s Climate Change Initiative (CCI) was set up to exploit the potential of satellite data to produce climate data records (CDRs). The initiative now includes projects for 27 different ECVs – including SST, which released the third major version of the SST CCI CDR last year. Complementary to the CDR is an Interim CDR (ICDR) providing an ongoing extension in time of the SST CCI CDR at short delay (approx. 2-3 weeks behind present). The ICDR was funded by the Copernicus Climate Change Service (C3S) for 2022 and is now funded by the UK Earth Observation Climate Information Service (EOCIS) and UK Marine and Climate Advisory Service (UKMCAS) for 2023 onwards. Version 3 of the SST CCI CDR now covers 45 years, from 1980 to present, using data from twenty infrared and two microwave radiometers. These include reference observations from the dual-view Along Track Scanning Radiometer (ATSR) and Sea and Land Surface Temperature Radiometer (SLSTR) instruments, meteorological observations from the Advanced Very High Resolution Radiometer (AVHRR), and observations from the Advanced Microwave Scanning Radiometer (AMSR)-E and AMSR2, which are less affected by clouds. The dataset includes both single-sensor products (native-resolution Level 2, and averaged on a global 0.05° grid at Level 3) plus a merged, gap-free, Level 4 SST analysis generated using the Met Office Operational Sea Surface Temperature and Ice Analysis (OSTIA) system. All products follow the GHRSST Data Specification (GDS) and CCI Data Standards. The SSTs are harmonised at the sensor level to ensure that multiple satellites can be used together as a single CDR.
Changes in the satellite overpass time (due to different and drifting orbits) are accounted for by providing both the direct satellite observation and an estimate adjusted to a standardised time and depth, equivalent to the daily average SST at 20 cm. This avoids aliasing the diurnal cycle into the long-term record and allows comparison with the historical in situ record.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Bias Correction Methods for L4 Satellite Sea Surface Temperature Analyses

Authors: Andrew Harris, Dr Seonju Lee, Dr Tom Smith
Affiliations: University Of Maryland, NOAA/NESDIS/STAR
Sea surface temperature (SST) is one of the most important geophysical parameters in the Earth system. It is listed as an Essential Climate Variable by the World Meteorological Organization. The majority of incoming energy is stored as heat in the upper layers of the ocean. This heat can be transported via ocean currents and affects numerous oceanic and atmospheric processes. Knowledge of the ever-changing spatial pattern of SST is valuable for informing many geophysical and environmental processes. The only realistic way to obtain this information on a global basis is to utilize satellite observations. Spaceborne thermal infrared sensors collect billions of observations per day, reducing to hundreds of millions after screening for cloud. Given the challenge presented by these vast data volumes, along with gaps due to cloud and the swath-oriented nature of the data, there is considerable benefit to combining observations from different sensors in a single daily gap-free analysis. Such Level-4 analysis products are widely utilized in numerous applications by users who implicitly rely on the fidelity of the methodology. The two most important aspects of constructing an accurate Level-4 SST analysis are the quality control of the input data and the subsequent bias correction of the observations that pass the screening process. Here, we explore the nature of the bias correction issue, including the underlying causes, and examine ways of addressing the problem. Although the basic physical causes can be reasonably elucidated, their precise effect may be harder to estimate, especially in SST products that utilize empirical regression. In such cases, the actual effects are embedded in the retrieval coefficients. We show that machine learning, in combination with suitable ancillary data, is a viable approach to unpicking the various physical dependencies and providing reliable spatiotemporal estimates of the bias correction.
Another crucial aspect of such work is the impact of training on the stability of the result. This is particularly important when constructing a long-term L4 record of SST, because the in situ record has varied considerably in terms of quality and, particularly, geographic coverage. We employ historical data masks to demonstrate the stability of machine learning bias corrections. We go on to explore the usefulness of a combined physics (primarily radiative transfer) and machine learning approach in order to improve the spatiotemporal stability of the estimated bias correction. Finally, we consider exploitation of aerosol-robust dual-view SST data as a bias correction reference. Although such data are not available for roughly the first decade of satellite SST observations, they can play an important part both in the training of machine learning and as a baseline for providing residual corrections, not least due to the regular global sampling they afford. This will be particularly important for periods influenced by major volcanic eruptions, since it is feasible to develop and demonstrate methods on the Mt Pinatubo period that can then be employed in the earlier period affected by the El Chichón eruption.
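The regression-based bias estimation discussed above can be illustrated with a toy example (synthetic match-ups, and ordinary least squares standing in for the machine-learning regressors; every number and predictor here is invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic match-ups: satellite-minus-in-situ SST differences driven by
# two ancillary predictors (e.g. a water-vapour and an aerosol proxy).
n = 1000
X = rng.normal(size=(n, 2))
true_coef = np.array([0.15, -0.05])          # assumed sensitivities, K per unit
bias = 0.02 + X @ true_coef                  # constant offset plus dependencies
obs = bias + rng.normal(scale=0.01, size=n)  # add measurement noise

# Fit the bias model by ordinary least squares and remove it.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, obs, rcond=None)
corrected = obs - A @ coef  # residuals after the estimated bias correction
```

In practice the ancillary predictors are physically motivated (water vapour, aerosol, view angle), and the linear model is replaced by a machine-learning regressor; the principle of fitting satellite-minus-reference differences against ancillary data is the same.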
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Comparing Super-Resolution Techniques for High-Resolution SST Reconstruction in the Tropical Oceans

Authors: Yueli Chen, Oleg Emelianov, Melanie Maria Maier, Simon Sing Hee Tong, Dr. Yawei Wang, Prof. Dr. Xiao Xiang Zhu
Affiliations: Technical University of Munich, Guangzhou Institute of Geography, Guangdong Academy of Sciences
Abnormal variations in sea surface temperature (SST) pose significant threats to coastal ecosystems, fisheries, and weather stability, particularly in tropical regions. These changes exacerbate rapid erosion, increase the frequency of extreme weather events, and accelerate ecosystem degradation. Accurate monitoring of SST is essential for understanding and mitigating phenomena like coral bleaching and marine heatwaves, and their cascading impacts on marine biodiversity and human livelihoods. However, existing SST datasets often suffer from a trade-off between spatial resolution and temporal coverage, limiting their ability to capture fine-scale dynamics essential for both scientific understanding and practical applications. To address these challenges, we explore the application of deep learning-based super-resolution (SR) techniques to reconstruct high-resolution, full-coverage SST data from multi-source remote sensing inputs. This study compares several state-of-the-art SR methods, including SRCNN, U-Net, SRGAN, and diffusion models, with the aim of identifying the most effective approach for generating high-spatiotemporal-resolution SST datasets. Using Himawari-8 data as the high-resolution (HR) reference with gaps and OSTIA data as the low-resolution (LR) full-coverage input, the models were trained on data from the tropical oceans around Australia. The transferability of the trained models is further evaluated by applying them to datasets from high-latitude cold regions. Quantitative evaluations of model performance are conducted using standard metrics, such as peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM), to assess each method’s ability to reconstruct fine-scale SST patterns and preserve critical features. Additionally, we collected in situ SST measurements in the training region and one of the transfer target areas, enabling real-world validation of the reconstructed datasets.
This ensures the physical validity and reliability of the super-resolved SST products for practical applications. Furthermore, we investigate the potential benefits of incorporating auxiliary conditional data into simpler architectures, such as SRCNN, to overcome limitations inherent to such designs. Comparisons between methods allow us to understand the trade-offs in complexity, accuracy, and transferability, providing actionable insights into the relative merits of each approach. This work contributes to the development of advanced SST reconstruction techniques, offering a pathway to generate accurate, high-resolution SST datasets for continuous monitoring. By enabling precise SST observation, these advancements promote sustainable management of marine resources and ecosystems while enhancing our understanding of SST dynamics in both tropical and high-latitude regions.
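Of the two evaluation metrics named above, PSNR has a simple closed form that can be computed directly (a generic definition, not the authors' evaluation code; SSIM additionally compares local means, variances and covariances of the two fields):

```python
import numpy as np

def psnr(ref, test, data_range):
    """Peak signal-to-noise ratio (dB) between a reference field and a
    reconstruction, for the stated dynamic range of the data."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

# Toy SST fields (kelvin) with a 10 K dynamic range: a uniform 0.1 K
# reconstruction error gives MSE = 0.01 and hence PSNR = 40 dB.
ref = np.linspace(290.0, 300.0, 100).reshape(10, 10)
test = ref + 0.1
```

Higher PSNR means smaller pixel-wise error relative to the data range; it says nothing about whether fine-scale structure is preserved, which is why SSIM is reported alongside it.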
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: TRUSTED: In situ FRM Data for SST & IST

Authors: Marc Lucas, Dr Marc Lemenn, Gorm Dybkjær, Anne OCarroll
Affiliations: CLS, SHOM, DMI, EUMETSAT
The improvement in the precision of sea surface temperature and ice surface temperature retrievals by satellite-borne instruments over the last decades has brought about the need for improved in situ reference data for calibration and validation purposes. These in situ data need to be properly characterized, which means working out precisely the uncertainty budget and ensuring that any data collected are fully traceable. Over the past six years the TRUSTED project, funded by Copernicus, has been doing just that, deploying over 350 drifters with high-resolution sea surface temperature sensors and setting up calibration and metadata processes to ensure full traceability, including a full uncertainty diagram, in order to achieve Fiducial Reference Measurement status. More recently, the TRUSTED consortium has started working on a new instrument to retrieve high-quality and fully traceable IST data. In this paper, we will present the achievements of the TRUSTED project as well as the latest developments for IST Fiducial Reference data.
Add to Google Calendar

Tuesday 24 June 17:45 - 18:30 (ESA Agora)

Session: E.01.04 Co-creating EO-driven solutions with stakeholders: the ESA Green Transition Information Factories (GTIF)

The ESA GTIF (Green Transition Information Factories) initiative addresses the information needs of the Green Transition, developing analytical capabilities and decision support tools that meet the operational needs of users and stakeholders and support their processes.

This Agora session will dive into the GTIF co-creation approach, in which Green Transition users and stakeholders are engaged to bring forward information needs and requirements from their operational working context. These requirements are then analysed by the contributing GTIF industry teams and ESA experts to develop initial versions of dedicated capabilities that combine value-adding algorithms and user-interface embeddings with cloud computational scaling and quality assurance. Subsequently, these capabilities are further evolved to meet specific stakeholder requirements. The ultimate goal of this co-creation process is the operationalisation of such capabilities and their uptake in user and stakeholder operational processes.

This Agora will reflect on experiences, lessons learned and success stories of stakeholder engagement and co-creation in the different GTIF projects. It will feature speakers from across the different GTIF projects and currently covered countries (i.e., Baltics, UK, Ireland, France, North Atlantic, Danube region).
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: D.03.04 - POSTER - Innovative technologies, tools and strategies for scientific visualisation and outreach

In recent years there has been an advancement in communicating science-based information and facts to citizens through open-access and open-source web-based solutions and mobile applications. In Earth observation, these solutions use innovative ways of presenting EO-based indicators and data, often coupled with storytelling elements to increase accessibility and outreach. Additionally, such innovations, coupled with data access and computation on cloud-based EO platforms, are very effective tools for scientific data dissemination as well as for education in EO and Earth science. In this session we welcome contributions on innovative web-based solutions, dashboards, advanced visualisation tools and other new technologies and use cases for scientific communication, dissemination and education. In particular, we seek to explore how such solutions help to increase the impact of science, create and grow communities, and stimulate the adoption of EO. We also look towards the future, exploring trends and opportunities to connect with non-EO communities and adopt new technologies (e.g. immersive tools, AR, VR, gaming engines, etc.). The session will be an opportunity to exchange experiences and lessons learned, and to explore opportunities for further collaboration.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A GAMIFIED MOBILE APPLICATION FOR IMPROVING PUBLIC ENGAGEMENT WITH RECREATIONAL WATER QUALITY THROUGH AR/VR SIMULATIONS

Authors: Rūta Tiškuvienė, Dr. Diana Vaičiūtė, Dr. Marija Kataržytė, Dr. Martynas Bučas
Affiliations: Klaipeda University
Water quality in recreational water bodies, such as lakes, rivers, and coastal ecosystems, is a growing concern due to pollution, climate change, and urbanization. Despite the advancements in water quality monitoring technologies, public awareness and engagement still need to improve, particularly among non-professional users who may find ecological data complex and difficult to interpret. To address this, our research focuses on creating a mobile application that integrates gamification and augmented/virtual reality (AR/VR) simulations to visualize water quality dynamics. The tool aims to connect environmental science and public awareness by making scientific data accessible and more engaging. The primary aim of this research is to develop an interactive digital tool that uses remote sensing and in situ water quality data with AR/VR simulations and evaluate the tool for its effectiveness in improving user engagement, understanding of water quality dynamics, and behavioral changes regarding water resource protection. This tool will offer user-friendly engaging situational simulations where users can explore how their actions influence water quality, learning through interactive scenarios based on scientific principles of ecology. At the core of the application are situational simulations designed to immerse users in real-world environmental challenges. These simulations allow users to interact with virtual environments that mirror actual water bodies, such as lakes, rivers, or coastal ecosystems. In each scenario, users are presented with environmental dilemmas, such as pollution from agricultural runoff, overfishing, or climate-driven algae blooms. Through a series of interactive decisions, users must manage these challenges while maintaining water quality. 
The simulation provides real-time feedback on the ecological consequences of the user's decisions, leveraging both remote sensing data (e.g., satellite data on chlorophyll-a, temperature, and water quality metrics like E. coli) and in situ data to create dynamic, realistic outcomes. For example, when users choose to implement certain pollution prevention measures, they will observe changes in water quality parameters, such as a reduction in harmful algal blooms or improved oxygen levels, demonstrating the cause-and-effect relationship between human actions and aquatic health. Gamification elements, such as challenges, points, and rewards, further enhance the simulation experience. Users are rewarded for making sustainable decisions, such as reducing nutrient runoff or implementing eco-friendly farming practices, reinforcing positive behavior. The more informed the user becomes about water quality, the higher they score in the game, providing a tangible incentive for continued engagement. Additionally, the situational simulations are adaptable to different geographic locations, allowing users from various regions to experience local water quality challenges and solutions. The integration of AR/VR technology adds a novel layer of immersion, allowing users to visually experience the impact of their actions in a 3D space. For instance, users can "dive" into a virtual water body to observe the effects of their decisions on aquatic flora and fauna or walk through a watershed to see how land-use practices affect water flow and quality. The ability to switch between different perspectives—both above and below the waterline—gives users a comprehensive understanding of water ecosystems. Through AR/VR, users can interact directly with water quality data, making it less abstract and more relatable. For instance, they can observe how excess nutrients from fertilizers contribute to algal blooms, or how urban runoff affects microbial pollution levels. 
The tool not only engages users in decision-making processes but also teaches the underlying scientific concepts driving these water quality issues, turning data into a visually engaging narrative. This research hypothesizes that gamification, combined with AR/VR simulations, will significantly enhance engagement with and understanding of water quality issues among the general public compared to semi-professional and professional users. Members of the public often lack the scientific background to process complex environmental data, making them more likely to benefit from gamified, interactive learning environments. Although the research is in its early stages, we anticipate that the app will lead to significant improvements in user engagement, particularly among the general public. The interactive, visually rich format is expected to increase the retention of water quality information and promote a better understanding of the actions necessary to protect water resources. Members of the public are expected to show a greater positive change in perception and knowledge compared to semi-professional and professional users. This gamified mobile application has the potential to become a scalable solution for raising public awareness about water quality. By employing AR/VR technologies and gamification, the tool will bridge the gap between complex ecological data and public understanding, promoting greater community involvement in environmental sustainability. Future development could extend its use to other environmental challenges, such as air quality or climate change, offering a versatile platform for engaging the public in sustainability efforts.
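The feedback loop described above (a user decision shifts simulated water-quality parameters, and sustainable choices earn points) can be sketched in a few lines. All action names, parameter names, and effect sizes below are illustrative assumptions for this sketch, not values from the application described in the abstract.

```python
# Hypothetical sketch of a gamified simulation step: a decision changes the
# water-quality state and awards (or deducts) points. Effect sizes are invented.

def apply_decision(state, decision):
    """Return (new_state, points) after a user decision.

    state: dict with 'chlorophyll_a' (ug/L) and 'dissolved_oxygen' (mg/L).
    decision: one of the illustrative actions below.
    """
    effects = {
        # action: (delta chlorophyll-a, delta dissolved oxygen, points)
        "reduce_fertilizer_runoff": (-2.0, +0.5, 10),
        "plant_riparian_buffer":    (-1.0, +0.3, 8),
        "increase_fertilizer_use":  (+3.0, -0.8, -5),
    }
    d_chl, d_oxy, points = effects[decision]
    new_state = {
        "chlorophyll_a": max(0.0, state["chlorophyll_a"] + d_chl),
        "dissolved_oxygen": max(0.0, state["dissolved_oxygen"] + d_oxy),
    }
    return new_state, points

state = {"chlorophyll_a": 12.0, "dissolved_oxygen": 6.0}
state, score = apply_decision(state, "reduce_fertilizer_runoff")
```

In a real implementation the effect table would be replaced by model output driven by the remote sensing and in situ data the abstract describes.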

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Water Health Indicator System (WHIS): A Global Water Quality Monitoring Web App through Advanced Earth Observation Technologies

Authors: Daniel Wiesmann, Jonas Sølvsteen, Olaf Veerman, Emmanuel Mathot, Daniel Da Silva, Ricardo Mestre, Pr Vanda Brotas, PhD Ana Brito, Giulia Sent, João Pádua, Gabriel Silva
Affiliations: Development Seed, MARE Centre, Labelec
The Water Health Indicator System (WHIS) serves as a robust platform for monitoring water quality, showcasing the capabilities of Earth observation technologies and environmental data analysis that are accessible to everyone. Developed through a collaboration between Development Seed, MARE (Marine and Environmental Sciences Centre), and LABELEC, WHIS addresses common challenges in existing water monitoring by offering a scalable solution designed for assessing aquatic ecosystem health. At the heart of WHIS is a powerful integration of geospatial cloud technologies, built on the eoAPI (Earth Observation API). This allows users to leverage tools such as the SpatioTemporal Asset Catalog (STAC) and Cloud-Optimized GeoTIFF (COG) for dynamic data access. A platform like this enables seamless integration of remote sensing datasets, particularly from the Sentinel-2 mission, ensuring precision and adaptability in water quality assessment. The application utilizes specialized atmospheric processing algorithms, such as Acolite, to analyze water quality, tackling issues related to atmospheric interference and spectral interpretation. By focusing on key indicators like chlorophyll content and turbidity, WHIS allows for localized calibration and insights into ecosystem health, demonstrating that these advancements in monitoring are achievable with the right tools. WHIS is tailored for inland and coastal water bodies. Its cloud-optimized infrastructure provides an interactive interface where users can select specific water bodies, explore geographical data, conduct statistical analyses, and inspect pixel-level information, all of which can be replicated by other users with eoAPI. Furthermore, the innovative product-services business model links technological capabilities with environmental monitoring needs, showing how any organization can leverage these advancements.
As global challenges related to water availability and quality persist, the Water Health Indicator System stands as a testament to what can be achieved with eoAPI technology, making it an essential tool for environmental monitoring and ecosystem management.
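To make the chlorophyll indicator concrete, here is a minimal sketch of one common way a per-pixel chlorophyll proxy is derived from Sentinel-2: the Normalised Difference Chlorophyll Index (NDCI) from the red (B4) and red-edge (B5) bands. The abstract does not specify WHIS's exact algorithm (it cites Acolite atmospheric processing), so treat this as an illustrative assumption, not the platform's implementation.

```python
# Sketch: NDCI = (B5 - B4) / (B5 + B4) over atmospherically corrected
# surface reflectance; higher values suggest more chlorophyll-a.
import numpy as np

def ndci(red_b4: np.ndarray, red_edge_b5: np.ndarray) -> np.ndarray:
    num = red_edge_b5 - red_b4
    den = red_edge_b5 + red_b4
    # Guard against division by zero over masked/no-data pixels.
    return np.where(den != 0, num / den, np.nan)

# Tiny 2x2 example of dimensionless surface reflectance values.
b4 = np.array([[0.02, 0.04], [0.03, 0.05]])
b5 = np.array([[0.03, 0.04], [0.06, 0.05]])
index = ndci(b4, b5)
```

In a cloud-native setup such as the one described, the two band arrays would typically be windowed reads from COGs discovered via a STAC search.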

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Satellite data for the UN Ocean Decade: Innovative Approaches to Story-telling for Diverse Marine Stakeholders

Authors: Dr Hayley Evers-King, Dr Benjamin Loveday, Danaële Puechmaille, Michael Schick, Miruna Stoicescu, Sally Wannop
Affiliations: EUMETSAT, Innoflair
EUMETSAT, along with partners from the European marine data provider ecosystem, produces regular case studies showing how our data can be used to support marine operations, science and applications. As an endorsed activity, a series of these cases have been developed to show how marine Earth observation data can be used to help address the 10 challenges facing the ocean as defined by the United Nations Decade of Ocean Science for Sustainable Development (UNOD) programme. Multiple case studies are being produced for each challenge, showcasing specific topics, appropriate selection of relevant products, and varying approaches to data synergy, analysis, and visualisation techniques. A number of formats have been developed to facilitate the promotion and use of these case studies by different stakeholder communities. The traditional approach uses web-based articles, adopting a format that is recognisable to the EUMETSAT user base, but introducing new, and perhaps unfamiliar, ocean data streams, as well as the UNOD challenges themselves. These articles are accompanied by Jupyter Notebooks, which allow and encourage the reader to recreate, and expand upon, some of the analyses presented. These notebooks offer flexible deployment and are designed to run locally and on remote cloud services, including Binder, DestinE and the Copernicus WEkEO DIAS, where they are featured in the notebook catalogue and hosted on the JupyterLab. The notebooks, which are open source and freely shareable, are made available as part of EUMETSAT's regular training courses, and form a key part of the current EUMETSAT special webinar series dedicated to the UNOD. New workflows are in development to exploit DestinE and, in particular, the DEA Interactive Storytelling service, bringing together cloud-based data provision, data visualisation and contextual narratives.
These tools help to broaden the scope and appeal of the stories, opening discussion of UNOD challenges to new audiences in code-free contexts. Topics covered in the case studies so far include deoxygenation events associated with risks to fisheries and aquaculture, the role of altimetry in storm monitoring and global mean sea level quantification, and the assessment of marine heatwaves from sea surface temperature. Example case studies will be presented, as well as lessons learned from the use of different approaches to storytelling.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: The Timeline Viewer: a web application for intuitive interactive visualisation of time-based data

Authors: Eelco Doornbos, Mark Ter Linden, Eelco Verduijn, Kasper Van Dam, Jos De Kloe, Gerd-Jan Van Zadelhoff
Affiliations: KNMI, Royal Netherlands Meteorological Institute
The Timeline Viewer is an interactive web application for visualisation of time-dependent data, which adopts an easy-to-use user interface for zooming and panning using the mouse, trackpad or touch interfaces (phones and tablets). Users will be familiar with this mode of interaction from its use in popular web applications like Google Maps applied to geospatial dimensions. However, in the Timeline Viewer, these interactions are applied to the time dimension, allowing users to quickly zoom in to details at time scales of hours, minutes or seconds, then zoom out for context over days, months or years. By applying a combination of panning and zooming, users can move the view between different dates, times and events of interest, with similar ease as moving between different countries, cities and streets in Google Maps. The Timeline Viewer was originally designed for real-time monitoring and interactive visual exploration of space weather time series data, in which processes on the Sun, in the heliosphere, the Earth's magnetosphere and upper atmosphere are closely connected in time. Spatial scales in this domain are often so large that they become of secondary importance. Users find great value in being able to simultaneously display information from multiple related data sources on a single time-axis, and to have a replication of common plot types from scientific publications on space weather case studies available in a flexible interactive interface, both for all available historical data as well as for current events. The tool was extended as part of the activities of the ESA Swarm Data Innovation and Science Cluster to improve the utilisation of Swarm observations of space-weather-related variability in Earth's thermosphere-ionosphere and magnetic field. As part of this project, a public version of the application has been deployed at https://spaceweather.knmi.nl/viewer/. 
Besides the ability to display time series data as bar, line, and ridgeline charts, the tool gained the ability to browse through sequences of quick-look images to create time lapses as the user moves around the interface by panning, as well as to use heat-map-type images that resize as the user zooms in and out on the timeline. This allows for comparison of Swarm observations with remote sensing images of ionospheric emissions from NASA's GOLD satellite, and with remote sensing of aurora in the polar regions from JPSS VIIRS-DNB and DMSP SSUSI instruments. A capability to view 3D satellite orbit geometry was also added for the Swarm project. These facilities allow the geospatial dimensions in the data to be reintroduced in the visualisations. Because of its flexibility, the application has also quickly proved its worth for continuous model validation, education and training, and data quality monitoring. For this latter purpose, it was adopted and separately deployed during the commissioning phase of the EarthCARE mission, demonstrating a first use case outside of the space weather domain. For EarthCARE it was especially used for the ATLID lidar instrument, to display both instrument and calibration data, and this has proven very useful for detecting features and anomalies in the early phase of the mission. For example, strong noise spikes due to energetic particles and hot pixels on the detector could be identified early in the mission, allowing the L1 processing software to be adapted to handle them, which should significantly improve the data quality. The tool has been built around a web server and database back-end created in Python and a front-end that uses the Svelte framework for reactive web applications. The Heliophysics API (HAPI) is used for the delivery of data between back-end and front-end.
The adoption of this standard enables the tool to be used with other back-ends, such as the INTERMAGNET global network of magnetic observatories, NASA's Space Physics Data Facility (SPDF) and ESA's Cluster mission data archive. The HAPI standard also proved to be easy to use and beneficial outside of the domain of heliophysics, as demonstrated by its adoption in the EarthCARE project.
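The HAPI data exchange described above can be illustrated with a short parser. Per the HAPI specification, the `/data` endpoint returns CSV rows beginning with an ISO 8601 timestamp, followed by parameter values; the parameter name below (`dst_index`) is invented for illustration and is not taken from the Timeline Viewer back-end.

```python
# Sketch: parse a HAPI-style CSV data response into a list of records.
import csv
import io
from datetime import datetime

def parse_hapi_csv(text: str, parameter_names: list) -> list:
    """Each row: ISO 8601 time first, then one value per named parameter."""
    records = []
    for row in csv.reader(io.StringIO(text)):
        if not row:
            continue
        # Python < 3.11 fromisoformat does not accept a trailing 'Z'.
        record = {"time": datetime.fromisoformat(row[0].replace("Z", "+00:00"))}
        record.update(zip(parameter_names, (float(v) for v in row[1:])))
        records.append(record)
    return records

sample = "2024-01-01T00:00:00Z,3.2\n2024-01-01T00:01:00Z,3.5\n"
data = parse_hapi_csv(sample, ["dst_index"])
```

Because the format is server-agnostic, the same parsing works against any compliant back-end, which is what lets the viewer point at INTERMAGNET, SPDF, or mission-specific archives without code changes.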

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Alplakes: Monitoring and forecasting European alpine lakes

Authors: James Runnalls
Affiliations: Eawag
Alplakes (www.alplakes.eawag.ch) is an ESA-funded research project that provides predictions of the main physical and biological parameters of more than 80 lakes throughout the European Alpine region. We integrate models and remote sensing products developed by the research community to provide up-to-date and accurate information. These products are made available to the public in a visualisation-focused, user-friendly web application, meaning hydrodynamic modelling and remote sensing data are no longer confined to domain experts. This accelerates and empowers evidence-based water management across a broad range of stakeholders. Alplakes utilizes the open-source Python toolbox Sencast (https://github.com/eawag-surface-waters-research/sencast) to access Sentinel-2 and Sentinel-3 data from DIAS providers, and to perform the computation of essential water quality parameters such as chlorophyll concentration, turbidity, and water clarity. From these detailed water quality maps, lake-specific statistics are generated, providing users with a comprehensive view of changes in lake conditions over time. This approach not only enhances accessibility to up-to-date water quality data but also supports long-term monitoring and analysis, empowering lake managers and researchers to make informed decisions for sustainable lake management. Earth observation data is visualised in tandem with other data sources, such as hydrodynamic models, to provide dynamic context for the static snapshots available from satellite imagery. The platform's built-in particle tracking functionality enables users to predict the development of lake events, such as algal blooms identified through satellite imagery, allowing lake managers to take proactive measures to protect water quality at drinking water intakes. All models, data processing pipelines, and products that power the Alplakes platform are fully open source, with results made accessible as open data for ease of use.
This transparency allows other scientists not only to access and validate our methodologies but also to directly integrate Alplakes models and products into their own research. By providing unrestricted access to both the tools and data, Alplakes fosters collaborative opportunities across disciplines, supporting reproducibility, enabling new research insights, and expanding the impact of Earth observation science on freshwater ecosystem studies. This open framework encourages a community-driven approach to advancing environmental research, developing innovative monitoring applications, and managing lake health. To enhance Alplakes as a comprehensive digital twin of alpine lake ecosystems, we are working to expand the platform geographically and broaden the range of available products. By incorporating additional lakes from diverse regions and integrating data from various satellite missions, we aim to provide a more complete picture of alpine lakes. This expansion will not only increase the platform's utility for scientists and lake managers but also support Alpine-based collaboration and improve the predictive capabilities of the platform, allowing for more effective, data-driven decision-making in lake conservation and management.
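The particle tracking mentioned above rests on a simple core idea: advecting particle positions through a surface-current field. Here is a minimal forward-Euler sketch of that step; the uniform current field and step sizes are illustrative assumptions, whereas Alplakes drives its tracking with full hydrodynamic model output.

```python
# Sketch: advance particle positions (N, 2 array of x, y in metres)
# through a 2D velocity field with forward-Euler integration.
import numpy as np

def advect(positions: np.ndarray, velocity_fn, dt: float, steps: int) -> np.ndarray:
    pos = positions.astype(float).copy()
    for _ in range(steps):
        pos += dt * velocity_fn(pos)  # explicit Euler step
    return pos

# Illustrative velocity field: a constant 0.1 m/s eastward surface current.
def uniform_current(pos: np.ndarray) -> np.ndarray:
    return np.tile([0.1, 0.0], (pos.shape[0], 1))

start = np.array([[0.0, 0.0], [50.0, 20.0]])
end = advect(start, uniform_current, dt=60.0, steps=10)  # ten one-minute steps
```

A production tracker would interpolate gridded, time-varying currents at each particle position and typically use a higher-order integrator, but the structure of the loop is the same.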

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Towards cloud-based EO platform in support of indicator development for society and environment

Authors: Alexandra Bojor, Tyna Dolezalova, Fabien Castel, Stefano Natali, Maximilien Houël, Leon Stärker, Lubomír Doležal, Daniel Santillan, Camille Lainé, Adrien Gavazzi
Affiliations: Sistema Gmbh, EOX IT Services GmbH, Murmuration SAS
Earth Observation (EO) data provide a complete description of the Earth system every few days, enabling near-real-time applications from local to global scale. There is a need for dedicated tools able to manage both the large variety and the large volume of data while remaining as application-domain agnostic as possible. Machine/Deep Learning (ML/DL) and Artificial Intelligence (AI) approaches, together with cloud-based platform technologies, are gaining ever more ground in the EO domain. This moves the paradigm of data exploitation from physically based to geo-statistically based applications, facilitating efficient access, computation and handling of various data sources, and helping to solve complex societal challenges. The main goal of the “Indicator Development For Economy And Society (IDEAS)” project is to explore the value of cross-cutting technologies (such as the Overpass API, Citizen Contributed Data and gamification) and to develop innovative and interdisciplinary indicators from EO and geospatial data. The new indicators shall provide new perspectives and relevant information on pressing societal challenges, by taking advantage of cloud-based EO platform capabilities, accessible data, computational resources, and analytical capabilities. Societal challenges that require innovative solutions where EO and geospatial technologies can play a role include the global climate crisis and ambitions related to the green transformation. More recently, the COVID-19 pandemic has posed numerous challenges to societies globally. Energy shortages and the geopolitical repercussions of the Russian invasion of Ukraine likewise provide numerous additional challenges for which novel perspectives and solutions are required. Within this work, five indicators were developed at European level and integrated in different ESA-supported environments, such as RACE, GTIF and the trilateral dashboard.
Each of the five developed indicators corresponds to one of the following societal challenges, and for each of them cross-cutting technologies were implemented. 1. Indicator #1: Pollution and urban heat islands allows the creation of population health vulnerability maps. The indicator couples the information observed by satellite remote sensing for air quality and land surface temperature and combines it with population characteristics (age, gender, location), in addition to the locations of medical infrastructure, to provide a first level of analysis to decision makers. This indicator makes use of gamification technology and corresponds to the information needs related to “Green Transition and the European Green Deal” and “COVID-19 pandemic and economic recovery”. 2. Indicator #2: Wildlife and biodiversity aims at developing a powerful, impactful, visual indicator to help build the general public’s knowledge and raise awareness of the current status of biodiversity and the importance of conservation efforts. It makes use of Citizen Contributed Data and gamification (Minesweeper) technology, using the crowdsourced fauna and flora observation data available from GBIF (the Global Biodiversity Information Facility) combined with EO data on land use and vegetation health. It corresponds to the information needs related to “Green Transition and the European Green Deal” and “Climate Crisis & adaptation”. 3. Indicator #3: Food security. Its scope is to monitor desert locust pests, supporting two types of crisis response: early warning and situational awareness. It makes use of Citizen Contributed Data technology, using FAO data, and corresponds to the information needs related to “Climate Crisis & adaptation”. 4. Indicator #4: Flood risk aims at assessing the risk of inundation in coastal areas due to sea level rise. It makes use of OSM Overpass API technology and corresponds to the information needs related to “Climate Crisis & adaptation”. 5.
Indicator #5: Real estate builds on the urban heat data generated for the health indicator (Indicator #1) to produce a new value-added product. It combines urban heat measurements during winter periods with socio-economic information (population density, age, real estate price…) and with information on buildings' construction material, history of changes and civil works from the French National Building Database (BDNB). It enables mapping of the areas whose populations face high energy vulnerability. It makes use of Citizen Contributed Data technology and corresponds to the information needs related to “Emerging energy crisis” and “Green Transition and the European Green Deal”. In conclusion, the integration of EO, geospatial and citizen-contributed data with innovative cross-cutting technologies and cloud platforms to address information needs in the context of the presented societal challenges has shown, given the versatility of the presented results, its potential to help authorities and decision makers across many thematic areas.
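A composite indicator like Indicator #1 is typically built by normalising each input layer to a common scale and combining them with weights. The layers, weights, and values below are illustrative assumptions for this sketch; the abstract does not publish the IDEAS formula.

```python
# Sketch: a weighted, normalised vulnerability index from three raster layers.
import numpy as np

def normalise(layer: np.ndarray) -> np.ndarray:
    """Min-max scale a layer to [0, 1]; constant layers map to zeros."""
    lo, hi = layer.min(), layer.max()
    return (layer - lo) / (hi - lo) if hi > lo else np.zeros_like(layer, dtype=float)

def vulnerability(no2, lst, elderly_share, weights=(0.4, 0.4, 0.2)):
    """Weighted sum of normalised air-quality, heat, and demographic layers."""
    layers = [normalise(np.asarray(l, dtype=float)) for l in (no2, lst, elderly_share)]
    return sum(w * l for w, l in zip(weights, layers))

no2 = np.array([[10.0, 40.0], [20.0, 30.0]])      # e.g. tropospheric NO2 column
lst = np.array([[300.0, 310.0], [305.0, 308.0]])  # land surface temperature, K
age = np.array([[0.10, 0.30], [0.15, 0.25]])      # share of population aged 65+
v = vulnerability(no2, lst, age)
```

Layer choice and weighting are the substantive decisions here; min-max scaling keeps each input dimensionless so heterogeneous sources (satellite, demographic) can be combined.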

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: StacLine: a new QGIS Plugin for diving into STAC Catalogs

Authors: Fanny Vignolles, Florian Gaychet, Vincent Gaudissart, Mélanie Prugniaux
Affiliations: CS Group
Geographic Information Systems (GIS) have become fundamental for analyzing and visualizing geospatial and temporal data across diverse domains, including environmental monitoring, disaster response, hydrology, urban planning, and agriculture. The availability of Earth Observation (EO) data has significantly increased in recent years, thanks to open-access data initiatives and advancements in satellite missions such as Sentinel, Landsat, and SWOT. However, while the datasets have become more accessible, the tools required to process and integrate them efficiently remain a challenge. The introduction of SpatioTemporal Asset Catalogs (STAC) as a data standard has revolutionized how datasets are organized and distributed. STAC provides a unified framework for describing, managing, and sharing spatiotemporal data through catalogs linked to geospatial servers. When combined with Open Geospatial Consortium (OGC) standards like Web Map Service (WMS), STAC enables seamless geospatial data management and interoperability. This project focuses on bridging the gap between STAC-based data catalogs and GIS workflows by developing a QGIS plugin that integrates STAC with the open-source GIS environment. The plugin simplifies data search, filtering, and visualization while adhering to both STAC and OGC standards, providing professionals and researchers with an efficient tool for managing EO data. Despite the increasing adoption of STAC-based data catalogs, their integration with GIS platforms remains a significant challenge. Existing plugins in QGIS for handling STAC data are limited, offering only basic functionalities and lacking the advanced capabilities required for sophisticated workflows. Current solutions often restrict users to viewing dataset footprints without allowing interactive visualization or the ability to style data layers dynamically. 
Additionally, these tools frequently require manual downloads and subsequent imports into QGIS, making the process inefficient and prone to user errors. Beyond these technical limitations, ensuring that a tool remains accessible and intuitive for a diverse audience is equally critical. Furthermore, achieving seamless interoperability between STAC and OGC protocols, particularly in the context of integrating WMS for real-time visualization, adds another layer of complexity. To address these challenges, we have developed a QGIS plugin that brings significant innovations to enhance filtering capabilities, simplify data import, and ensure interoperability. Designed with an intuitive interface, it strikes a careful balance between user-friendly simplicity for non-experts and the advanced functionality required by researchers and field practitioners. By incorporating ontological approaches, the plugin enables more precise and efficient dataset discovery. The integration of WMS protocols facilitates automatic data import, allowing users to preview datasets and dynamically apply visualization styles directly within QGIS. These styles, derived from metadata and cartographic servers adhering to OGC standards, provide tailored renderings suited to specific analytical needs. The plugin's strict adherence to STAC standards is intended to ensure compatibility with any STAC-compliant catalogue, enhancing its ability to integrate seamlessly into diverse geospatial platforms and workflows. The user interface has been designed to accommodate both novice and expert users, offering advanced configuration options for customized workflows without sacrificing simplicity. This combination of advanced functionality and ease of use positions the plugin as an essential tool for professionals relying on Earth Observation data, reducing the barriers to integrating STAC data into GIS projects.
The current version has been implemented for the HYSOPE II project (CNES), the dissemination platform dedicated to SWOT products and, more generally, to all kinds of hydrological datasets, and is intended to be extended to other initiatives. As the STAC ecosystem evolves, the plugin is designed to adapt and grow, incorporating new features and responding to user needs. One planned enhancement is the addition of a dynamic timeline feature, allowing users to explore temporal patterns in datasets interactively. This timeline will enable quick identification of dense data availability periods and improve usability for time-series analysis by rendering layers adaptively based on the selected temporal range. Additionally, we envision the development of an adaptive form system that dynamically configures itself based on search parameters, which may be specific to each dataset. This automatic configuration will leverage the filtering extension and the queryables of the STAC API. This plugin, named QGIS StacLine, represents a significant advancement in democratizing access to STAC-based geospatial data. By addressing the limitations of existing tools and focusing on usability, interoperability, and scalability, it bridges the gap between complex EO data catalogs and practical GIS applications. Looking ahead, the development of the plugin involves a key decision: whether to focus on niche, closed use cases for tailored solutions or to expand its scope for broader application across diverse projects. While an open approach offers versatility, it risks diluting the specificity and focus of the tool. Regardless of its future direction, the plugin stands as a vital resource for the geospatial community, enabling seamless access to and utilization of the growing wealth of spatiotemporal data.
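The STAC-to-WMS glue the plugin automates can be sketched simply: given a WMS endpoint and layer name (as might be carried in a STAC item's assets or links), assemble the key=value datasource string that QGIS-style WMS providers consume. The endpoint and layer name below are invented for illustration; the real plugin derives them from catalog metadata, and the exact parameter set QGIS expects is an assumption of this sketch.

```python
# Sketch: build a QGIS-style WMS datasource URI from STAC-derived metadata.
from urllib.parse import quote

def qgis_wms_uri(endpoint: str, layer: str, fmt: str = "image/png",
                 crs: str = "EPSG:4326") -> str:
    parts = {
        "url": quote(endpoint, safe=""),  # the endpoint URL must be escaped
        "layers": layer,
        "format": fmt,
        "crs": crs,
        "styles": "",
    }
    return "&".join(f"{k}={v}" for k, v in parts.items())

uri = qgis_wms_uri("https://example.org/wms", "swot_water_mask")
```

Automating this assembly is what removes the manual download-and-import step criticised above: the user picks a catalog entry, and the layer is added to the map directly.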

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A.08.07 - POSTER - Ocean Health including marine and coastal biodiversity

Ocean Health, defined as the Ocean's condition allowing it to continuously provide services for humans in a sustainable way, while preserving its intrinsic well-being and its biodiversity, is under considerable threat. Decades of pollution, overexploitation of resources and damaging use of the coastal environment have severely degraded the condition of both coastal and offshore marine ecosystems, compromising the Ocean's capacity to provide these services. This degradation is being further exacerbated by climate change, whose effects on the Ocean are numerous. The many sensors on board currently operating satellites (altimeters, radiometers, scatterometers, synthetic aperture radars, spectrometers) have high relevance for Ocean Health and biodiversity studies, providing continuous, global and repetitive measurements of many key parameters of the physical (temperature, salinity, sea level, currents, wind, waves) and biogeochemical (Ocean Colour related variables) marine environment, including high-resolution mapping of key marine habitats (coral reefs, kelp forests, seagrass, …). In this context, this session welcomes contributions demonstrating how satellite data can be used to better monitor Ocean Health, including the retrieval of Essential Biodiversity Variables and the estimation of the many different stressors, including marine litter, impacting Ocean Health and marine and coastal biodiversity. Single-sensor capability is amplified even further when used in synergy with other space and in-situ measurements, or together with numerical modelling of the physical, biogeochemical and ecological ocean state, so the session encourages multi-sensor and multi-disciplinary studies. The session is also open to contributions demonstrating how EO-derived products can be used to support management actions to restore and preserve Ocean Health and marine and coastal biodiversity.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: High Sensitivity Fluorescence Sensor For The Detection Of Dissolved Organic Matter In Coastal Environments

Authors: Giancarlo Bachi, Valter Evangelista, Bruno Tiribilli, Paolo Facci, Marco Carloni, Mirco Guerrazzi, Simone Marini, Paolo Povero, Francesco Massa, Michela Castellano, Federico Falcini, Gian Marco Scarpa, Emmanuel Boss, Patrick Gray, Vittorio Ernesto Brando, Chiara Santinelli
Affiliations: Biophysics Institute, National Research Council (CNR-IBF), Institute for Complex Systems (CNR-ISC), Institute of Marine Sciences, National Research Council (CNR-ISMAR), Earth, Environment and Life Sciences Department (DISTAV), University of Genova, School of Marine Sciences, University of Maine
Dissolved Organic Matter (DOM) in the oceans is a crucial component of the Earth's biogeochemical cycles and a key ocean water quality parameter. DOM can be qualitatively studied through the optical properties (absorption and fluorescence) of its chromophoric (CDOM) and fluorescent (FDOM) fractions. FDOM has been used to gain information on DOM composition and origin as well as to trace riverine and pollutant inputs. Laboratory FDOM measurements on discrete samples provide insight into DOM characteristics but lack spatial and temporal resolution. Recent developments in portable fluorescence sensors have enabled cost-effective, high-frequency in-situ measurements, crucial for monitoring dynamic environments such as estuaries and coastal areas. However, sensors currently on the market exhibit low sensitivity, low versatility and low signal-to-noise ratio; furthermore, most of them only detect fluorescence at one pair of excitation/emission wavelengths. Here we present the first data from the prototype of a fluorescence sensor developed within the framework of RAISE ("Robotics and AI for Socio-economic Empowerment"), Spoke 3. The high sensitivity and flexibility of the sensor make it ideal for manual or continuous use on small and large boats, in laboratories with seawater intake, and in field activities. Preliminary tests showed good signal linearity, baseline stability, and good correlation with fluorescence from benchtop and portable fluorimeters. The sensor has been tested on coastal marine samples collected at high spatial resolution onboard the schooner Tara (Tara Ocean Foundation) during the TREC (Traversing European Coastlines) expedition and the R/V Gaia Blu (CNR) during the BioTREC cruise, as well as on samples collected seasonally from selected contrasting environments such as large ports, estuaries, and marine protected areas (e.g., the Portofino LTER site).
The comprehensive dataset obtained was combined with physical and biogeochemical parameters from discrete samples and with satellite data to retrieve information on DOM-rich coastal filaments, chlorophyll distribution, anthropogenic inputs and DOM dynamics. The excellent correlation between the sensor signal and satellite chlorophyll retrievals illustrates the difficulty of differentiating chlorophyll from CDOM using ocean colour alone; it also shows that our sensor can help characterize DOM dynamics within coastal filaments and phytoplankton blooms, and could contribute to improving existing satellite chlorophyll and CDOM algorithms. Future uses of the sensor range from monitoring the impacts of coastal pollution in real time to supporting long-term studies of DOM variability.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Blending PlanetScope and Sentinel-2 satellites to assess subtidal seagrass meadows threatened by water quality

Authors: Mar Roca Mora, Carlos Eduardo Peixoto-Dias, Manuel Vivanco-Bercovich, Chengfa Benjamin Lee, Sergio Heredia, Isabel Caballero, Gabriel Navarro, Paulo Horta, Alessandra Fonseca
Affiliations: Instituto de Ciencias Marinas de Andalucía (ICMAN-CSIC), Universidade Federal de Santa Catarina (UFSC), Universidad Autónoma de Baja California (UABC), German Aerospace Center (DLR)
Seagrass meadows provide important ecosystem services, acting as nutrient and sediment traps, burying carbon into the soil, improving water quality and attracting biodiversity: essential indicators of Ocean Health. These ecosystems play a crucial role in buffering impacts, especially in coastal lagoons, which are increasingly threatened by eutrophication and anoxia worldwide. Their high sensitivity to environmental changes enables their use as bioindicators of environmental status, both in the water column and in sediments. This case study focuses on a subtropical coastal lagoon in Brazil, which entered a dystrophic state after a massive wastewater plant explosion in 2021, impacting the seagrass species Ruppia maritima and Halodule wrightii. Earth Observation techniques and computational capacity have evolved to allow better monitoring of marine macrophytes and water quality variables. However, mapping subtidal seagrass in turbid waters and sparse meadows through Earth Observation remains a challenge. This study blends the advantages of Sentinel-2 L1C imagery, with its 13-band spectral resolution, and PlanetScope 3B imagery, both Classic (4-band) and SuperDove (8-band), at 3-m spatial resolution: a multi-sensor approach with high spectral and spatial resolution to better detect small seagrass patches and their changes in shallow, turbid coastal waters. In parallel, to synoptically assess the water quality impact in the coastal lagoon, we processed the Sentinel-2 time series (2016-2024) through ACOLITE to obtain ocean-colour-related variables such as chlorophyll-a, Suspended Particulate Matter (SPM) and the diffuse attenuation coefficient (Kd490), identifying the wastewater disruption that moved the lagoon from a eutrophic to a dystrophic state, as well as its spatial patterns.
The HydroLight radiative transfer model was run for Case 2 waters using the biogeochemical conditions of the lagoon to mask optically deep waters, with high backscattering limiting light penetration to 1 m depth (Kd = 0.4). To obtain the benthic habitat mapping, we first processed Sentinel-2 and PlanetScope imagery with ACOLITE, obtaining corrected water-leaving reflectance for both sensors. To assess the changes and the associated uncertainty in seagrass extent, we used three Sentinel-2 images for each year (2018 and 2024), co-registered to the 3-m PlanetScope grid, combining both sensors' advantages into six multi-band rasters. In the field, we performed two in situ campaigns in the summers of 2018 and 2024, whose GPS locations of seagrass presence and absence were used to train a machine-learning Random Forest classifier and to validate the results for each multi-sensor raster. The three classifications for each year were combined to obtain the seagrass extent map. From the optical perspective, the red-edge band of both sensors showed the highest feature importance for the model, as did the Depth Invariant Index (DII) produced. The resulting seagrass change map showed a 65% decline in total seagrass extent three years after the disaster; the seagrass species were spectrally too similar to be differentiated, owing to the complex inherent optical properties of the water column. This cost-efficient and synoptic multi-sensor approach makes it possible to understand the severity of the impact and provides a tool to monitor the recovery capacity of this marine environment, preserving its intrinsic well-being and its benefits for the coastal community.
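The Depth Invariant Index used in workflows like the one above is commonly computed with Lyzenga's band-pair formulation, in which the ratio of the two bands' diffuse attenuation coefficients is estimated from the variance/covariance of log-transformed reflectances over a uniform bottom. A minimal numpy sketch with synthetic reflectances (illustrative only, not the authors' code or data):

```python
import numpy as np

def depth_invariant_index(band_i, band_j):
    """Lyzenga-style depth-invariant index for a pair of water-corrected
    reflectance bands sampled over a uniform bottom at varying depth."""
    xi, xj = np.log(band_i), np.log(band_j)
    # Ratio of diffuse attenuation coefficients k_i/k_j, estimated from
    # the variance/covariance of the log-transformed bands.
    cov = np.cov(xi, xj, ddof=0)
    a = (cov[0, 0] - cov[1, 1]) / (2.0 * cov[0, 1])
    k_ratio = a + np.sqrt(a**2 + 1.0)
    return xi - k_ratio * xj

# Synthetic "sand" pixels following R = R_bottom * exp(-2 * k * z):
z = np.linspace(0.5, 3.0, 50)          # depth, m
ri = 0.30 * np.exp(-2 * 0.30 * z)      # band i, k_i = 0.30 m^-1
rj = 0.25 * np.exp(-2 * 0.45 * z)      # band j, k_j = 0.45 m^-1
dii = depth_invariant_index(ri, rj)    # near-constant despite depth
```

Because the index removes the depth dependence, pixels of the same bottom type collapse to one value, which is why the DII is a useful feature for a classifier in shallow water.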
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Estimating uncertainty while detecting marine litter from Sentinel-2 imagery

Authors: Samuel Darmon, Emanuele Dalsasso, Devis Tuia
Affiliations: ECEO, EPFL
Marine plastic pollution is a major global crisis with profound environmental and ecological implications for society. As floating material aggregates under the effect of oceanic processes, so-called windrows reach sizes visible from space and can be used as proxies of marine litter. Within this context, large-scale mapping of marine litter can be enabled by leveraging medium-resolution remote sensing satellite data such as Sentinel-2. Thanks to the recent creation of labeled datasets of optical images containing marine litter, deep-learning-based detection of floating debris has emerged as a promising tool to monitor and mitigate marine litter. However, the performance of existing models is hampered by the visual ambiguity of objects (often related to spatial and spectral resolution) and by the presence of clouds and other artifacts. This limits the use of current machine learning approaches in critical scenarios, such as the planning of cleanup operations. To address this gap, we explore the use of uncertainty estimation techniques to provide insights into the model's decisions and potential failures by measuring the epistemic uncertainty of the model. In particular, we compare two uncertainty estimation methods: deep ensembles and ZigZag. Deep ensembles consist of training several independent networks with different initializations: variation among the models' predictions on the same input sample indicates low confidence. While this approach requires several training and inference steps to produce an uncertainty measure, ZigZag is a general framework that reduces the computational load by performing a single training with minor modifications to the model architecture. It builds on the following strategy: a deep learning model is trained to produce the exact same prediction in two cases, whether or not the true label is provided as additional input to the network.
At inference, a model that is confident in its prediction will produce an output close to the true label which, once provided as additional input to the same model, will lead to a similar prediction. The distance between the two predictions can then be used as an uncertainty measure. We apply deep ensembles and ZigZag to a U-Net semantic segmentation model trained for marine debris detection. We train our models on several annotated Sentinel-2 satellite imagery datasets, including the FloatingObjects, Marine Debris Archive (MARIDA), and S2Ships datasets, and evaluate the performance of the models on a subset of MARIDA. The evaluation framework includes both a visual comparison of the predicted uncertainty maps and a quantitative assessment, mainly studying how the uncertainty correlates with the classification error. Our results indicate that uncertainty maps exhibit salient, characteristic patterns. Not only does the model struggle to precisely delineate the borders of windrows, as expected, but it also assigns high uncertainty to cloud patches characterized by thin linear shapes, as well as to wakes, the patterns of waves created by boats moving through the water. Interestingly, the boats themselves are associated with low uncertainty: we argue that the correct classification of boats is due to the use of the S2Ships dataset during training, where static ships serve as hard negatives for the model. This study investigates the insights brought by uncertainty estimation methods applied to the problem of deep-learning-based segmentation of floating debris in multispectral Sentinel-2 data. Our results indicate that estimating the model's uncertainty helps to assess the reliability of predictions and leads to a better understanding of the model's limitations.
Uncertainty estimation methods enhance the interpretability of the outputs of marine debris detection models without loss of performance, aiding the identification of error-prone predictions and providing a measure of trust in the detector's outputs. Future work will study how the uncertainty correlates with the different spectral bands, especially when only visible and near-infrared bands are available, as is the case for some optical sensors such as PlanetScope. This will provide insights into the uncertainty to be expected when integrating PlanetScope data into the detection model.
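The deep-ensembles idea described in this abstract can be sketched in a few lines. The toy "models" below are random stand-ins for independently initialised segmentation networks, purely to illustrate how ensemble disagreement is turned into a per-pixel epistemic uncertainty map; all names and thresholds are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_model(rng):
    """Toy stand-in for one independently initialised network: a random
    logistic map from pixel features to a debris probability."""
    w, b = rng.normal(size=4), rng.normal()
    return lambda x: 1.0 / (1.0 + np.exp(-(x @ w + b)))

models = [make_model(rng) for _ in range(5)]   # the "ensemble"

x = rng.normal(size=(100, 4))                  # 100 pixels, 4 spectral features
probs = np.stack([m(x) for m in models])       # (5, 100) per-model probabilities

mean_prob = probs.mean(axis=0)                 # ensemble prediction
epistemic = probs.std(axis=0)                  # disagreement = epistemic uncertainty

# Pixels with high disagreement could be flagged for manual review
# before, e.g., planning a cleanup operation.
flagged = np.where(epistemic > 0.2)[0]
```

ZigZag replaces the M trained networks with a single one queried twice (with and without the label channel), so the uncertainty map costs one training run instead of M.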
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Phytoplankton biodiversity from spaceborne radiometry in coastal regions

Authors: Héloïse Lavigne, Lumi Haraguchi, Dimitry van der Zande, Joppe Massant, Véronique Creach, Jukka Seppala, Sanjina Upadhyay, Hans Jakobsen, Therese Harvey, Felipe Artigas, Maialen Palazot, Yolanda Sagarminaga, Ioannis Tsakalakis, Natalia Stamataki, Laura Boicenco, Oana Vlas
Affiliations: Royal Belgian Institute Of Natural Sciences, SYKE, CEFAS, Aarhus University, NIVA, CNRS, AZTI, HCMR, NIMRD
Coastal ecosystems are often impacted by human activities, and it is fundamental to assess rapid shifts in their water quality and biodiversity status. Remote sensing observations provide rapid and synoptic data that have proved extremely useful for assessing parameters such as chlorophyll-a concentration or turbidity. Regarding biodiversity, remote sensing has also been used, especially in open ocean waters, to retrieve certain phytoplankton groups or pigments. Indeed, the main phytoplankton groups can be retrieved from space in open ocean waters (optical Case I waters) thanks to multispectral or hyperspectral ocean colour sensors. The main types of groups explored are size groups and functional groups: phytoplankton size groups provide information on the trophic status of the whole ecosystem, and functional groups are particularly relevant for modellers. Most algorithms for retrieving phytoplankton types from ocean colour data investigate anomalies in water reflectance that could be explained by a particular pigment signature. Although still challenging, this exercise is easier in Case I waters, where the water reflectance spectrum results only from the phytoplankton community (the phytoplankton itself and the related organic matter). In optically complex (Case II) waters, which include most coastal waters, retrieving water constituents from ocean colour observations is much more challenging, as the water reflectance signal is also affected by external inputs such as river runoff, dissolved substances and resuspended sediments; even retrieving the chlorophyll-a concentration can then be extremely complex. To explore the capabilities of remote sensing in coastal waters, eight study areas in European waters are investigated, covering the Baltic Sea, the Mediterranean Sea, the North Sea, the English Channel, the Atlantic coast and the Norwegian coast.
Different algorithms derived from machine-learning methods are being tested to retrieve phytoplankton size classes (pico-, nano- and micro-phytoplankton) and four phytoplankton colour groups (red, brown, green and blue-green phytoplankton). These colour groups were chosen because they are expected to be more easily detectable from space: two phytoplankton species with close pigment signatures can be very difficult to differentiate. The objective is to determine whether phytoplankton groups as defined above can be retrieved in coastal waters and, if so, whether a single algorithm could be used across these different regions. This work is supported by the OBAMA-NEXT project (H2020 project 101081642).
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Unveiling Suspended Particulate Matter Dynamics and Environmental Drivers in European Coastal Waters Using Machine Learning and Satellite Data

Authors: Corentin Subirade, Cédric Jamet, Roy El Hourany
Affiliations: Université Littoral Côte d’Opale
Remote sensing of Suspended Particulate Matter (SPM) is essential for water-quality monitoring as it influences turbidity, light availability, and nutrient transport. Coastal ecosystems act as important interfaces between land and ocean, exhibiting high spatial and temporal SPM variability. Their ecological, societal, and economic value makes them very sensitive to natural and human-induced environmental changes. This study provides a comprehensive assessment of the mechanisms driving SPM spatio-temporal variability in European coastal waters, for the period 2016-2025, utilizing the Ocean and Land Color Instrument (OLCI) Copernicus Level-3 Remote Sensing Reflectance (Rrs) product. The semi-analytical algorithm of Han et al. (2016) was applied to the OLCI Rrs data to estimate SPM concentrations. The generated product was validated through a matchup exercise in the diverse French coastal waters (n = 71, Bias = -27%, Error = 63%, Slope = 0.85). Across European coastal waters, SPM concentrations are influenced by dynamic ocean circulation patterns and interactions between the atmosphere, ocean, and land. To investigate the drivers of SPM in this vast marine domain, we implemented a machine-learning-based two-step procedure. First, European coastal waters were classified into regions based on SPM seasonal cycles using a Self-Organizing Map combined with a Hierarchical Ascending Clustering method. This classification resulted in 10 distinct regions, ranging from clear offshore waters with relatively low SPM values throughout the year (< 0.5 g.m-3), to turbid estuarine areas with higher SPM concentrations peaking on average in winter (> 5 g.m-3). SPM seasonal cycles per class presented substantial differences, both in magnitude and shape. Second, SPM concentrations within each class were modeled using a random forest approach with reanalysis environmental variables including wind, waves, currents, and sea surface density (SSD). 
The contributions of these variables to SPM variability were evaluated using a feature-permutation method, enabling an analysis of the spatial and temporal variability of their influence. Contributions were scaled by the percentage of variance explained by the random forests in each class. Wind and waves emerged as the dominant drivers in shallow-bathymetry regions, accounting for 24% and 19% of SPM variability, respectively, at the European scale. In contrast, SSD significantly influenced areas impacted by river plumes, explaining 23% of SPM variability within these regions. Current speed showed a relatively minor contribution, not exceeding 4% at the continental scale. This clustering-based approach offers a valuable framework for assessing future changes in water quality and SPM dynamics, providing an objective foundation for the management of marine ecosystems across Europe.
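The feature-permutation step described above (shuffle one driver, measure how much the model's error grows) can be illustrated with a toy example. For self-containment a least-squares fit stands in for the random forest, and all data and variable names are invented:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: "SPM" driven strongly by wind, weakly by current speed.
n = 500
X = rng.normal(size=(n, 3))  # columns: wind, waves, current speed
y = 2.0 * X[:, 0] + 0.8 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(scale=0.3, size=n)

# Fit a simple linear model as a stand-in for the random forest.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
base_mse = np.mean((X @ coef - y) ** 2)

def permutation_importance(X, y, coef, col, rng):
    """Increase in MSE when one feature column is shuffled,
    destroying its link to the target."""
    Xp = X.copy()
    rng.shuffle(Xp[:, col])
    return np.mean((Xp @ coef - y) ** 2) - base_mse

importances = [permutation_importance(X, y, coef, c, rng) for c in range(3)]
# Wind should dominate and current speed should matter least,
# mirroring the ranking of drivers reported in the abstract.
```

Scaling such importances by the model's explained variance, as the authors do per class, prevents a poorly fitting regional model from reporting inflated driver contributions.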
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Advancing Oceanic Primary Production Estimates: Integrating Satellite Data, Vertical Dynamics, and BGC-Argo Observation

Authors: Marine Bretagnon, Quentin Jutard, Philippe Bryère, Julien Demaria, Antoine Mangin
Affiliations: ACRI-ST, ACRI-ST, Site de Brest, quai de la douane
Oceanic primary production (PP) converts sunlight, carbon dioxide, and nutrients into organic matter through photosynthesis. This process forms the foundation of the marine food web, supporting fish stocks and other marine life. It also plays a critical role in regulating the Earth's climate by absorbing large amounts of atmospheric carbon dioxide, a key greenhouse gas. Healthy primary production is essential for maintaining ecosystem balance, sustaining biodiversity, and providing resources for fisheries, on which millions of people globally depend for food and livelihood. However, despite primary production being a critical parameter, in situ measurements remain limited, primarily because measurements require incubation and there is no standardized protocol for performing them. Satellites equipped with ocean colour sensors measure the light reflected by the ocean surface to estimate the concentration of chlorophyll-a, a pigment in phytoplankton. By combining chlorophyll-a data with environmental factors like light availability and sea surface temperature, models can estimate primary production at a global scale. These satellite-derived estimates provide a comprehensive and continuous view of primary production patterns, offering valuable insights into ecosystem dynamics, climate interactions, and the sustainability of marine resources. In this study, we will investigate the vertical component of primary production, which is key for understanding the full depth-integrated dynamics of oceanic productivity. We will compare the vertically integrated PP results obtained with a range of data sources, including in situ bottle measurements, data from Biogeochemical-Argo (BGC-Argo) floats, ocean colour data (without direct access to vertical information), and outputs of the SOCA machine learning model (https://data.marine.copernicus.eu/product/MULTIOBS_GLO_BIO_BGC_3D_REP_015_010/description).
This analysis will be conducted using established algorithms, combining observational data and model outputs. This cross-comparison of approaches will open new opportunities for improving the accuracy of primary production estimates from ocean colour remote sensing. These advancements will deepen our understanding of primary production, its role in global carbon cycling, its support of marine ecosystems, and its influence on climate change predictions.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Impact of Marine and Atmospheric Heatwaves on Intertidal Seagrass: Experimental Spectroradiometry and Satellite-Based Insights

Authors: Simon Oiry, Bede Davies, Philippe Rosa, Laura Zoffoli, Anne-Laure Barillé, Nicolas Harin, Pierre Gernez, Laurent Barillé
Affiliations: Institut des Substances et Organismes de la Mer, ISOMer, Nantes Université, UR 2160, F-44000, Consiglio Nazionale delle Ricerche, Istituto di Scienze Marine (CNR-ISMAR), 00133, Bio-littoral, Immeuble Le Nevada, 2 Rue du Château de l’Eraudière
Seagrass meadows are important coastal ecosystems, serving as crucial habitats for marine biodiversity, stabilizing sediments to mitigate erosion, and acting as significant carbon sinks in global climate regulation. However, these vital ecosystems face mounting threats from climate change, with the intensification and increased frequency of marine and atmospheric heatwaves posing profound risks to their health and functionality. This study investigates the impact of these extreme thermal events on intertidal seagrass, employing a combination of laboratory-controlled experiments and satellite-based remote sensing to capture changes in spectral reflectance and assess the broader ecological implications. In the laboratory experiments, seagrass of the species Zostera noltei was exposed to controlled, simulated heatwave conditions to assess the physiological and structural impacts of extreme thermal stress. The experimental design involved placing seagrass samples in intertidal chambers that simulated natural tidal cycles, allowing temperature conditions to be closely regulated during both high and low tides. One chamber served as a control, maintaining typical seasonal temperatures, while the other simulated heatwave conditions, with progressively increasing air and water temperatures mimicking an actual heatwave event observed in the field. Hyperspectral reflectance measurements were recorded at regular intervals during low tide to monitor changes in the seagrass over time. Heatwave exposure resulted in a significant reduction in reflectance, particularly in the green (around 560 nm) and near-infrared (NIR) regions of the spectrum. This decline in reflectance was closely linked to visible leaf browning, suggesting alterations in pigment content, in the internal structure of the seagrass leaves, and in their overall vitality.
The decline in green reflectance indicates a reduction in plant health, while changes in NIR reflectance often relate to the internal arrangement of cells and air spaces, which are sensitive to heat-induced damage. Vegetation indices such as the Normalized Difference Vegetation Index (NDVI) and the Green Leaf Index (GLI), which are indicative of the overall health and structural integrity of vegetation, showed marked decreases under heatwave conditions, with NDVI dropping by up to 34% and GLI by 57%. This significant reduction highlights the adverse effects of heat stress on the seagrass's ability to maintain its normal physiological processes. To quantify these changes more effectively, a novel Seagrass Heat Shock Index (SHSI) was developed. The SHSI, applicable to emerged seagrass, was particularly effective in detecting the transition from green leaves to darkened, stressed leaves, providing a sensitive and reliable tool for assessing thermal stress effects on seagrass. By focusing on specific changes in reflectance across certain spectral bands, the SHSI allowed a clear differentiation between unimpacted and impacted vegetation, capturing the onset of heat-induced stress with high accuracy. This sensitivity makes the SHSI valuable for early intervention, enabling managers and researchers to identify vulnerable seagrass meadows before substantial damage occurs, thereby facilitating more timely conservation measures. Complementing the experimental data, Sentinel-2 observations provided clear evidence of the effects of a documented heatwave event in South Brittany, France, on natural seagrass meadows. These intertidal zones were exposed to extreme air temperatures of up to 32°C for more than 13.5 hours per day, leading to significant leaf darkening, which affected up to 24% of the meadow's area.
The satellite-derived SHSI indicated a strong spatial correlation between prolonged heat exposure and areas experiencing spectral darkening, highlighting the susceptibility of intertidal seagrasses to extended thermal stress. Spatial analysis showed that darkening was especially pronounced in the higher intertidal regions, where seagrasses were exposed to air for longer durations during low tide, underlining the interaction between tidal exposure and thermal stress. However, Sentinel-2 data acquired one month after the heatwave showed partial recovery in some areas, suggesting a certain level of resilience in Zostera noltei. Despite this recovery, the seagrasses that had experienced the most severe darkening did not fully return to their original state, indicating that while seagrass meadows have some capacity for resilience, prolonged or repeated thermal stress can have lasting impacts, particularly in the more exposed intertidal regions. This highlights the need for focused conservation efforts to support their recovery and enhance their resilience in the face of increasing climate-driven thermal extremes. The combination of laboratory-controlled experiments and satellite-based remote sensing provided a comprehensive understanding of the impacts of heatwaves on seagrass meadows at multiple scales, from individual leaf-level responses to meadow-wide effects. This integrated approach highlights the potential of leveraging both detailed local observations and large-scale satellite data to effectively monitor ecosystem changes, offering valuable insights for the management and conservation of these vulnerable habitats. The study underscores the critical role of spectral reflectance as an early warning indicator of heatwave-induced stress, laying the foundation for remote-sensing-based monitoring and conservation efforts.
By employing innovative indices like the Seagrass Heat Shock Index (SHSI), we capture subtle yet ecologically significant changes, advancing the precision of habitat assessments in intertidal zones. As climate scenarios predict more frequent and intense heatwaves, the need for continuous monitoring of intertidal seagrass meadows becomes increasingly urgent. This research demonstrates the efficacy of remote sensing in capturing rapid environmental changes, providing a framework for mitigating the impacts of climate-driven stressors and calling for adaptive conservation strategies. These strategies should integrate advancements in remote sensing technologies with targeted field-based interventions to preserve the resilience of intertidal seagrass meadows, thereby addressing the escalating challenges posed by climate change and ensuring the continued health of these indispensable coastal habitats.
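The two vegetation indices cited in this abstract have standard definitions; the SHSI itself is the authors' novel index and is not reproduced here. A short sketch with made-up reflectance values (not the study's data), chosen only to show the reported direction of change when browning lowers green and NIR reflectance:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - R) / (NIR + R)."""
    return (nir - red) / (nir + red)

def gli(green, red, blue):
    """Green Leaf Index: (2G - R - B) / (2G + R + B)."""
    return (2 * green - red - blue) / (2 * green + red + blue)

# Illustrative leaf reflectances (hypothetical values).
healthy  = {"blue": 0.04, "green": 0.10, "red": 0.05, "nir": 0.45}
stressed = {"blue": 0.04, "green": 0.06, "red": 0.05, "nir": 0.30}

ndvi_drop = (ndvi(healthy["nir"], healthy["red"])
             - ndvi(stressed["nir"], stressed["red"]))
gli_drop = (gli(healthy["green"], healthy["red"], healthy["blue"])
            - gli(stressed["green"], stressed["red"], stressed["blue"]))
# Both drops are positive: heat-induced browning lowers both indices,
# consistent with the NDVI and GLI decreases reported above.
```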
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Remote Sensing of the German North Sea Coast: A Review

Authors: Karina Alvarez, Dr. Felix Bachofer, Dr. Claudia Kuenzer
Affiliations: University of Wuerzburg, German Aerospace Center
The German North Sea coast is of immense economic, cultural, and environmental importance. It includes a portion of the Wadden Sea World Heritage Site, which extends into the Netherlands and Denmark and represents the largest system of tidal flats in the world. Monitoring of such important and sensitive habitats is critical for their informed management, even more so in the face of a changing climate. Remote sensing (RS) provides an opportunity for consistent, low-cost monitoring, especially of difficult-to-access portions of the Wadden and North Sea. To date, however, no comprehensive review of RS applications for this area has been conducted, limiting its application for effective monitoring. This study summarizes RS efforts and findings in this region and identifies gaps and opportunities. We conducted a literature review covering 2000 to August 2024, which yielded 102 papers. These papers ranged from measuring individual physical and biogeochemical metrics to habitat classifications and ecosystem integrity assessments, and could be grouped into four main research categories: coastal morphology (32%), water quality (31%), ecology (28%), and sediment (8%). Studies on intertidal topography were the most numerous, making up nearly 20% of papers. In the water quality, ecology, and sediment categories, the main focuses were SST and chlorophyll, bivalves, and sediment transport, respectively. Over half of the papers (64%) use satellite remote sensing, whereas about a third use airborne remote sensing. Multispectral data was by far the most commonly used data type in these studies, followed by SAR. The studies considered in this review span a wide range of spatial scales and resolutions, revealing that the two are generally inversely correlated.
Further, coastal morphology and ecology studies clearly cluster at high spatial resolutions and small extents, in contrast to water quality and sediment studies, which generally use lower spatial resolutions over larger study areas. Gaps identified in this review include coastal morphology and ecology studies at larger spatial scales, especially at scales that align with management areas such as the German Wadden Sea National Parks. Additionally, higher-spatial-resolution water quality studies, which are especially important in highly variable areas such as coastal zones, would help better characterize the highly dynamic nature of water quality in this area. Studies beyond this study area suggest that, with novel machine learning methods and advances in processing power, satellite RS has high potential to fill these gaps. This review finds that RS, and especially satellite-based RS, already plays a notable role in monitoring the German North Sea coast and will likely continue to provide critical information for coastal managers.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Information content analysis of hyperspectral data for identification of microalgae and cyanobacteria species: from laboratory experiments to PRISMA and EnMAP satellite applications for super blooms monitoring

Authors: Pierre Gernez, Dr. Tristan Harmel, Victor Pochic, Dr. Martin Hieronymi, Dr. Maria-Laura Zoffoli, Dr. Thomas Lacour, Dr. Amalia Sacilotto Detoni, Dr. Antonio Ruiz-Verdù, Dr. Ruediger Roettgers
Affiliations: Nantes University, Magellium, Helmholtz-Center Hereon, Consiglio Nazionale delle Ricerche, Ifremer, University of Valencia
Optical sensing of the aquatic environment relies on the radiation exiting the water body, which is measurable by radiometers above the water surface or on board satellites. Light propagation within the water column is controlled by the balance between absorption and scattering, which ultimately determines the water-leaving radiance. It is common practice to approximate the radiative transfer equation by non-linear equations relating the water-leaving radiance (or the remote sensing reflectance) to a ratio of the bulk absorption and backscattering coefficients. In the case of phytoplankton "super blooms" (i.e. highly concentrated blooms typically dominated by one or two microalgal species), the absorbing pigments of the microalgae or cyanobacteria greatly alter the water-leaving radiance, with conspicuous discoloration visible from space. Over decades, numerous efforts have been made to collate and document the species-specific absorption properties of phytoplankton in relation to their absorbing pigment concentrations. In contrast, far less data have been collected and analysed concerning phytoplankton scattering properties. The objectives of this study were twofold: first, to analyse how the taxonomical identity of the bloom-dominating taxon could be retrieved from the absorption spectrum, and second, to assess the importance of the scattering properties and their impact on the detectable water-leaving radiance for identifying dominant species in super blooms or very productive waters from space. To achieve the absorption-related objective, a dataset of 164 hyperspectral absorption measurements of bloom-forming species was obtained from monospecific culture data, compiling published and new measurements. The level of taxonomic information amenable to absorption-based analysis was assessed and compared with pigment-based classification. In particular, the ability to distinguish dinoflagellates from diatoms, prymnesiophytes, and raphidophytes was demonstrated. 
This is an important result because Dinophyceae are known for their ability to form super blooms and are notoriously challenging to discriminate from other phytoplankton classes. To address the scattering-related objective, the analysis was based on innovative measurements of the volume scattering function (from the LISST-VSF) and the hyperspectral backscattering coefficient (from the HiFi-bb, a newly developed instrument measuring the backscattering coefficient at hyperspectral resolution) obtained for several species in a new laboratory experiment. The impact of absorption on scattering (anomalous dispersion) was first investigated. The dataset was then used within radiative transfer simulations and their backscattering-absorption ratio approximation to study the impact of species-specific scattering on the potential identification of dominant species. Case studies and validation are discussed based on applications to in situ and PRISMA/EnMAP satellite hyperspectral data, as a demonstration of potential global applications to the next operational hyperspectral missions such as CHIME (ESA) and SBG (NASA). Altogether, this study contributes to quantifying the potential of hyperspectral remote sensing to identify super bloom events in the absence of field information.
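The ratio approximation referred to above is commonly written in the first-order form Rrs(λ) ≈ g · bb(λ) / (a(λ) + bb(λ)). As a purely illustrative sketch (all spectra, coefficients, and the pigment band below are invented, not values from this study), one can see how a species-specific absorption peak depresses the water-leaving reflectance:

```python
import numpy as np

# Sketch of the backscattering-absorption ratio approximation:
# Rrs(lambda) ~ g * bb / (a + bb). All numbers below are invented.
g = 0.0949  # sr^-1, illustrative first-order reflectance coefficient

wavelengths = np.arange(400, 701, 10)  # nm

# Invented bulk absorption: smooth background plus a pigment band at 675 nm
a_background = 0.02 + 0.004 * (wavelengths - 400) / 300
a_pigment = 0.05 * np.exp(-0.5 * ((wavelengths - 675.0) / 10.0) ** 2)
a_total = a_background + a_pigment

# Invented backscattering with a mild spectral slope
bb = 0.01 * (550.0 / wavelengths)

rrs = g * bb / (a_total + bb)

# The pigment absorption band depresses Rrs near 675 nm relative to 600 nm
i_675 = np.argmin(np.abs(wavelengths - 675))
i_600 = np.argmin(np.abs(wavelengths - 600))
print(rrs[i_675] < rrs[i_600])  # True
```

Species-specific scattering enters the same expression through bb(λ), which is why the backscattering measurements described above matter for discrimination.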
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Analyzing Satellite Scaling Bias Using Drone Data: Application to Microphytobenthos Studies

Authors: Augustin Debly, Bede Ffinian Rowe Davies, Simon Oiry, Julien Deloffre, Romain Levaillant, Jéremy Mahieu, Ernesto Tonatiuh Mendoza, Hajar Saad El Imanni, Philippe Rosa, Laurent Barillé, Vona Méléder
Affiliations: Nantes Université, Institut des Substances et Organismes de la Mer, ISOMer, UR 2160, F-44000 Nantes, France, Univ Rouen Normandie, Univ Caen Normandie, CNRS, M2C, UMR 6143, F-76000 Rouen, France
Microphytobenthos (MPB) are microalgae that form biofilms on sediment surfaces, playing a crucial role in coastal ecosystems. They contribute significantly to food web support, carbon (CO₂) fluxes, and the stabilization of mudflats. Traditionally, MPB assessments have been conducted through in situ measurements. However, in recent years, remote sensors have increasingly been used for monitoring MPB, including the use of satellite imagery. While satellites offer broad spatial and temporal coverage, they also present challenges, particularly regarding the "scaling bias." This bias arises from differences in observations due to the spatial resolution of the data, which can lead to discrepancies in ecological metrics derived from satellite data. One key area affected by scaling bias is the estimation of carbon fluxes, which can be derived from MPB biomass. These estimates often rely on Gross Primary Production (GPP) models, which use the Normalized Difference Vegetation Index (NDVI) as a proxy for biomass. The scaling bias arises from non-linearities in converting NDVI to biomass, combined with the spatial variability of MPB. This study aims to quantify the scaling bias in MPB assessments by leveraging high-resolution drone data, which provide a more detailed view of MPB distribution and variability than satellites. Drone surveys were conducted across four coastal sites during different seasons to capture the spatial heterogeneity of MPB. These high-resolution datasets were then used to simulate what satellite sensors would detect at coarser resolutions, assuming a linear averaging between the two scales for NDVI, though this assumption is being further examined and discussed. The conversion from NDVI to biomass was performed using an exponential model. This method addresses the saturation effect of NDVI at higher biomass levels. 
Biomass estimates were derived at both fine and coarse resolutions, and the scaling bias was determined by comparing the values obtained at these two scales. The results present maps indicating a scaling bias of a few per cent, with coarse-resolution biomass estimates consistently lower than those calculated at finer resolutions. To model the bias, the spatial structure of MPB-induced NDVI was represented using a statistical beta distribution, defined by two shape parameters; this choice is appropriate as the beta distribution is continuous and bounded. It has been demonstrated that the bias is influenced by the statistical moments of these distributions.
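The direction of this bias follows from Jensen's inequality: averaging NDVI first and then applying a convex exponential conversion yields less biomass than converting each fine pixel first. A minimal sketch, with invented beta shape parameters and model coefficients (not the study's values):

```python
import numpy as np

# Fine-scale NDVI drawn from a beta distribution (shape parameters invented),
# converted to biomass with a hypothetical exponential model B = A * exp(k*NDVI).
rng = np.random.default_rng(0)
ndvi_fine = rng.beta(2.0, 5.0, size=100_000)  # fine-resolution pixels

A, k = 10.0, 3.0  # invented model coefficients

biomass_fine = (A * np.exp(k * ndvi_fine)).mean()   # convert, then average
biomass_coarse = A * np.exp(k * ndvi_fine.mean())   # average NDVI, then convert

# exp() is convex, so averaging NDVI first biases the coarse estimate low
# (Jensen's inequality), matching the sign of the bias reported above.
bias_pct = 100 * (biomass_coarse - biomass_fine) / biomass_fine
print(biomass_coarse < biomass_fine)  # True
```

The magnitude of the bias depends on the variance (and higher moments) of the NDVI distribution, which is why the beta shape parameters control it.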
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: New ocean color algorithms for estimating the surface concentrations of particulate organic nitrogen and phosphorus from satellite observations

Authors: Fumenia Alain, Loisel Hubert, Jorge Daniel, Bretagnon Marine, Mangin Antoine, Bryère Philippe
Affiliations: Laboratoire D'océanologie Et De Géosciences, ACRI-ST
In a context of anthropogenic perturbation to the nitrogen and phosphorus cycles, determining long-term trends and budgets of all nitrogen and phosphorus chemical species within the global ocean represents a significant challenge. This study highlights the potential of using inherent optical properties (IOPs) derived from semi-analytical algorithms applied to satellite ocean color observations as proxies for estimating surface mass concentrations of particulate organic nitrogen (PON) and phosphorus (POP) at the global scale. Specifically, the IOPs considered are the absorption coefficients of total particulate matter, ap(λ), and phytoplankton, aph(λ). These IOPs were derived from satellite ocean remote-sensing reflectance, Rrs(λ), using different available inverse methods. Our results reveal that reasonably strong relationships between PON or POP and satellite-derived IOPs hold across a range of diverse oceanic and coastal environments. Both ap(λ) and aph(λ) can serve as proxies for PON and POP across a broad range of environments, from oligotrophic open-ocean waters to coastal waters. The validation of the algorithms is based on matchups between an extensive dataset of concurrent in situ particulate organic matter measurements and satellite-derived particulate IOPs. Additionally, comparison with in situ time series spanning twenty years shows the good performance of the algorithm in reproducing the temporal evolution of PON and POP. Applying these algorithms to merged product observations provides global PON and POP distribution patterns that agree with the expected geographical distribution of in situ measurements. High PON and POP concentrations are observed in turbid shelf and coastal regions as well as in upwelling areas, while low concentrations are observed in oligotrophic regions. 
The presented relationships demonstrate a promising means to assess long-term trends and/or budgets of PON and POP at the global oceanic scale, or in specific oceanic areas that could be affected by anthropogenic perturbations impacting the production of organic nitrogen and phosphorus. These relationships could also help gain insight into nitrogen cycling in environments where nitrogen budgets are needed, such as hotspots of intense biological N2 fixation.
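The abstract does not give the algorithms' functional form. As a hedged sketch of how such an IOP-based proxy might be calibrated, the following fits a hypothetical power law PON = α · ap(443)^β in log-log space on synthetic matchup data (the wavelength, functional form, and all coefficients are illustrative assumptions):

```python
import numpy as np

# Synthetic "matchup" data: particulate absorption at 443 nm vs PON,
# generated from an invented power-law relation plus multiplicative noise.
rng = np.random.default_rng(2)
ap443 = 10 ** rng.uniform(-2.5, -0.5, 200)           # m^-1
pon_true = 80.0 * ap443 ** 0.9                       # invented relation
pon_obs = pon_true * 10 ** rng.normal(0, 0.05, 200)  # 0.05 dex noise

# Calibrate the proxy by ordinary least squares in log10-log10 space
beta, log_alpha = np.polyfit(np.log10(ap443), np.log10(pon_obs), 1)
alpha = 10 ** log_alpha

# The fit recovers the invented exponent and scale factor
print(abs(beta - 0.9) < 0.05)  # True
```

Validation against independent matchups (as described above) would then proceed by comparing predicted and measured PON on held-out data.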
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Modeling and Numerical Simulation of Ocean Circulation and Its Impact on Fisheries Resources: A Case Study of Northern Morocco

Authors: Hasna BOUAZZATI, Asma DAMGHI, Abdelmounim El M’RINI, Song WEIYU
Affiliations: Research Laboratory in Applied and Marine Geosciences, Geotechnics and Geohazards (LR3G), Faculty of Sciences, Abdelmalek Essaadi University, Qingdao Institution of Marine Geology
Climate change is increasingly altering ocean circulation patterns and upwelling processes, with significant impacts on marine ecosystems and fisheries. These shifts affect the distribution, abundance, and phenology of fish and shellfish, challenging the sustainability of fisheries, particularly in regions like Morocco. This study investigates the mechanisms by which ocean circulation influences key oceanographic factors, such as temperature, oxygen levels, and acidification, and how these changes affect marine species' vital rates—growth, reproduction, and survival. Using high-resolution numerical models, GIS tools, and satellite data, the research simulates the current and future impacts of ocean circulation on fish populations and fisheries resources. The results reveal that climate-induced shifts in ocean circulation are already pushing several key fish species in Morocco to new areas, reducing their accessibility to traditional fishing methods and potentially threatening fish stocks. Certain regions, particularly those heavily reliant on upwelling, are identified as vulnerable hotspots. Future projections suggest continued disruptions to species composition and fishery yields due to warming, acidification, and reduced oxygen levels in the marine environment. This research underscores the need for adaptive fisheries management and resilience-building strategies, integrating climate and oceanographic data into policy-making. The findings highlight the importance of proactive measures to sustain fishery resources and mitigate socio-economic impacts on coastal communities, ensuring the long-term sustainability of Morocco's fisheries in the face of ongoing climate change.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Identifying Phytoplankton Groups From Absorption Spectra – A Regional Approach Based on Data From the Baltic Sea and Estonian Lakes

Authors: Ian-Andreas Rahn, Dr Kersti Kangro, Krista Alikas, Rüdiger Röttgers, Martin Hieronymi, Mr. Rene Freiberg
Affiliations: University Of Tartu, Helmholtz-Zentrum Hereon, Estonian University of Life Sciences
Determining Chl-a from optical measurements and using it as a proxy for total phytoplankton biomass has long been common practice. However, different phytoplankton groups occupy unique positions in the ecosystem, and thus there is a need to distinguish them. Some groups, such as cyanobacteria, cryptophytes, chrysophytes, chlorophytes, dinoflagellates and diatoms, can be discriminated via unique photoactive 'marker' pigments. These are typically measured using High-Performance Liquid Chromatography (HPLC), an arduous, expensive, and time-consuming process. Obtaining information about the presence and concentration of marker pigments from absorption spectra would allow for quicker and cheaper analysis of phytoplankton dynamics. It would also assist in developing algorithms for future hyperspectral satellite missions, such as CHIME (Copernicus Hyperspectral Imaging Mission), and ongoing ones, such as PACE (Plankton, Aerosol, Cloud, ocean Ecosystem). Here, different approaches to deriving pigment concentrations have been undertaken: a chl-based model, a Gaussian decomposition model, and a model based on principal component analysis (PCA). The pigment concentrations were also linked with the measured biomass. The analysis relies on data gathered during a research cruise on the Baltic Sea and dedicated optical monitoring campaigns of Estonian lakes. Developing a combined model for both types of water bodies, which could be used within Estonia and around the various coastal areas of the Baltic Sea, has been explored. Preliminary results indicate that different models are better at distinguishing different pigments. The Gaussian decomposition model has shown overall better performance than the other models, especially for photoprotective carotenoids (PPC) derived from absorption at 498 nm (r² = 0.86). Meanwhile, the chl-a model showed promising results for discerning zeaxanthin (r² = 0.72), a key pigment in cyanobacteria. 
The strengths and limitations of each model have been discussed, contributing to the advancement of phytoplankton identification techniques. Implications for phytoplankton group biomass have also been highlighted. These results can be used as inputs for future algorithms for hyperspectral satellite missions (e.g. CHIME) in distinguishing phytoplankton in coastal areas and inland waters.
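As a hedged illustration of the Gaussian decomposition idea (the band centres, widths, and amplitudes below are invented, not the study's parameterization), an absorption spectrum can be modelled as a sum of Gaussian bands at fixed pigment-related centres, with band amplitudes recovered by linear least squares and then regressed against HPLC pigment concentrations:

```python
import numpy as np

# Gaussian bands at fixed, pigment-related centres (values illustrative)
wl = np.arange(400, 701, dtype=float)      # nm
centres = np.array([440.0, 498.0, 675.0])  # e.g. chl a blue, PPC, chl a red
widths = np.array([25.0, 20.0, 12.0])      # invented band widths (nm)

def gaussians(wl, centres, widths):
    # design matrix: one column per Gaussian band
    return np.exp(-0.5 * ((wl[:, None] - centres) / widths) ** 2)

# Synthesise a "measured" absorption spectrum with known amplitudes + noise
true_amps = np.array([0.06, 0.02, 0.03])
G = gaussians(wl, centres, widths)
rng = np.random.default_rng(1)
a_ph = G @ true_amps + rng.normal(0, 1e-4, wl.size)

# Recover the band amplitudes by linear least squares; each amplitude
# would then be related to a pigment (e.g. the 498 nm band to PPC).
amps, *_ = np.linalg.lstsq(G, a_ph, rcond=None)
print(np.allclose(amps, true_amps, atol=1e-3))  # True
```

In practice the decomposition uses many more bands and constrained fitting, but the linear-in-amplitudes structure sketched here is the core of the approach.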
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Trialing Real-Time Global Marine Litter Monitoring With Edge-SpAIce Project

Authors: Dr. Andis Dembovskis, Dr François De Vieilleville, Dr Pauline Audenino, Mr. Sioni Summers, Dr. Kikaki Katerina, Mr. Boyan-Nikola Zafirov
Affiliations: AGENIUM Space, CERN, NTUA, ENDUROSAT
Human health relies on the health of the oceans surrounding the land we live on. Beyond serving as a global oxygen supplier and carbon-dioxide absorber [1], the ocean is also a critical source in the food supply chain. With tons of marine plastic dumped into the oceans annually [2], nanoplastics make their way into the food humans eat [3], and the problem worsens each year [2]. To care about human health means to care about ocean health. The Edge-SpAIce project was created to provide real-time insight into surface plastic litter in oceans, seas and rivers, with the aim of building a global EO capability that pinpoints such pollution sources for environmental policing agencies. Building such a service relies on four main pillars: 1) EO image-gathering capability with the spectral resolution to detect plastic signatures; 2) onboard edge-AI computing capacity to perform the image analysis; 3) an onboard-capable DNN trained to perform the detection at commercial quality; and 4) the operational capacity of the satellite operator to execute it. While individual pieces have existed before, e.g. multi- and hyperspectral EO imagery and DNNs for plastic detection on the ground [4], a complete space-based service has never been trialled before. Edge-SpAIce is a project funded by the Horizon Europe programme that aims to demonstrate a trial of such a service. The consortium is composed of four partners: Endurosat (EDS) as space platform provider and operator, NTUA as the domain expert in plastic detection, CERN as the expert for optimized AI logic deployment on European FPGAs, and AGENIUM Space (AGS) as edge-AI technology provider and project coordinator. The mission was launched in January 2025 on SpaceX's Transporter-12 into a Sun-synchronous orbit [5]. The EO instrument onboard is the Simera Sense HyperScape 200. The initial months were used for LEOP, and since April the platform has been open for edge-AI application testing. 
NTUA has developed a preliminary labelled dataset and a reference ground DNN, and AGS has built a distilled, quantized, architecture-optimized DNN for onboard detection of marine plastic in raw images. This paper evaluates the first results of onboard marine plastic litter detection and provides insight into preliminary onboard AI execution timing and precision. Execution times are reported for the DNN run on a space-equivalent engineering model on the ground, with logic deployed on a Zynq UltraScale+ ZU15EG board using the VITIS AI and HLS4ML frameworks. The paper reviews the datasets used to train the DNN and the techniques applied to enable fused multi-camera sources and to fit the model to SoC-FPGA execution. Furthermore, it elaborates on the detection benefits of additional onboard pre-processing using the AI-based PRNU/DSNU calibration algorithm that AGS developed through ESA's FutureEO programme, technical details of which were presented at ESA VH-RODA [6]. Additionally, the paper outlines a roadmap of technologies that could further improve the service, including a better-suited EO camera and improved onboard processing hardware for future missions, e.g. hyperspectral sensor requirements for micro-plastic detection. Finally, it reviews operational costs and suggests business models for future environmental policing applications.
References:
[1] United Nations, "The ocean – the world's greatest ally against climate change", 2024, https://www.un.org/en/climatechange/science/climate-issues/ocean
[2] Hannah Ritchie, "Where does the plastic in our oceans come from?", 2021, OurWorldinData.org, https://ourworldindata.org/ocean-plastics
[3] Elise M. Tuuri, Sophie Catherine Leterme, "How plastic debris and associated chemicals impact the marine food web: A review", Environmental Pollution, Volume 321, 2023, 121156, ISSN 0269-7491.
[4] Kikaki, K., Kakogeorgiou, I., Mikeli, P., Raitsos, D. E., & Karantzalos, K. (2022). "MARIDA: A benchmark for Marine Debris detection from Sentinel-2 remote sensing data." PloS one, 17(1), e0262247.
[5] Transporter-12 reference: https://rocketlaunch.org/mission-falcon-9-block-5-transporter-12-dedicated-sso-ride
[6] Dr. François de Vieilleville, "Towards DSNU estimation on routine images", ESA VH-RODA poster session, 2024.
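As a generic illustration of the quantization step used to shrink a DNN for SoC-FPGA deployment (the scheme below is standard symmetric int8 post-training quantization, an assumption for illustration, not Edge-SpAIce's actual pipeline):

```python
import numpy as np

# Symmetric int8 post-training quantization of a weight tensor:
# map floats to int8 with a single per-tensor scale factor.
def quantize_int8(w):
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(3)
weights = rng.normal(0, 0.1, size=(64, 64)).astype(np.float32)

q, scale = quantize_int8(weights)
recon = dequantize(q, scale)

# Round-to-nearest bounds the reconstruction error by half a quantization step
print(np.abs(weights - recon).max() <= scale / 2 + 1e-6)  # True
```

Real deployments typically use per-channel scales and quantization-aware fine-tuning to limit the accuracy loss; the distillation step mentioned above reduces model size before quantization is applied.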
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Evaluation of PRISMA Water Reflectance for the Validation of Biogeochemical Models

Authors: Giuliana Profeti, Paolo Lazzari, Giorgia Manfè, Eva Álvarez, Gian Marco Scarpa, Vittorio Ernesto Brando, Luis Gonzalez Vilas, Stefano Ciavatta, Federica Braga
Affiliations: CNR-ISMAR, IEO-CSIC, MOI, OGS
The HE-NECCTON (New Copernicus capability for trophic ocean networks) project aims at building a fully integrated modelling system of the marine ecosystem for describing its functioning, predicting the impact of climate change and human pressure, and supporting policymakers in protecting biodiversity and managing resources sustainably. The modelling system will be integrated into the Copernicus Marine Service to obtain reliable and timely ocean products. Novel research biogeochemical models have been upgraded by adding a spectral radiative transfer module describing the distribution of in-water irradiance along the water column and the interaction of optically active substances with the spectral light field. One of the objectives of NECCTON is to integrate spaceborne hyperspectral data into these modelling systems by means of augmented skill-performance metrics and novel assimilation techniques. A prerequisite for successful data assimilation is an accurate estimation of uncertainties in the satellite observations. We present the assessment of water reflectance derived from the PRISMA hyperspectral mission at selected aquatic sites, which will be used for biogeochemical model validation and data assimilation. In situ reflectance from autonomous hyper- and multispectral radiometer systems, such as AERONET-OC and WATERHYPERNET, is used to evaluate standard PRISMA Level 2 (L2C) products distributed by the Italian Space Agency and data derived from two atmospheric correction processors, ACOLITE and POLYMER, adapted for processing PRISMA Level 1 products. The qualitative and quantitative analyses of Remote Sensing Reflectance (Rrs) derived from PRISMA versus in situ radiometric data show consistent results at the longer wavelengths (i.e. from 500 nm onwards), while a significant overestimation of PRISMA Rrs is observed at the shorter wavelengths for all the applied processors, probably due to the lower SNR of PRISMA L1 data at these wavelengths. 
In general, PRISMA L2C products show weaker performance than the other methods. PRISMA data processed with POLYMER and with ACOLITE including glint correction show overall good agreement, with the lowest errors between satellite and in situ measurements in the 490–620 nm spectral interval. The overall bias of POLYMER is close to 0, while ACOLITE shows an overall overestimation of the reflectance spectrum, with improved results when the glint correction is applied. The availability of in situ Rrs data from autonomous systems is fundamental to providing validation data and thoroughly assessing the radiometric performance of PRISMA Rrs for any spectral band between 400 and 900 nm. The results over the four water bodies analysed in this study are encouraging, confirming the consistency of PRISMA Rrs and its capability to provide adequate radiometric products for the retrieval of water quality parameters and for the validation of biogeochemical models. For the AAOT site in the Adriatic Sea, we performed a preliminary comparative analysis of Rrs spectra derived from the GOTM-FABM-BFM bio-optical biogeochemical model and observations from multispectral satellite sensors and hyperspectral PRISMA data processed with POLYMER and ACOLITE. Across all methods, the general spectral shape of Rrs is consistent, with peaks in the green region and Rrs values generally higher in winter, which could be attributed to seasonal variations in water composition. Although the performance of PRISMA products varies by spectral range and correction method, in general they align well with the biogeochemical model and established multispectral sensors in the 500–900 nm range, making PRISMA a strong candidate for data assimilation into the HE-NECCTON system.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: From pigment prediction to phytoplankton functional type trends with explainable machine-learning

Authors: Angus Laurenson, Shubha Sathyendranath, Victor Martinez-Vicente
Affiliations: Plymouth Marine Laboratory
Life in the ocean is dependent on phytoplankton. These photosynthetic micro-organisms are the foundation of the marine food web: if their distribution shifts, so must the rest of the marine food web shift with it. We seek to answer the question: "How are phytoplankton responding to climatic changes in their environment?" Phytoplankton have adapted to light below the surface by incorporating secondary pigments that shift the absorption spectrum of chlorophyll. These pigments indicate the functional type, and thus the ecological role [1]. We trained a machine-learning model to predict these pigments from remote sensing reflectance using a global dataset of 34,600 High Performance Liquid Chromatography (HPLC) measurements [2] matched to OC-CCI v6.0 4 km daily remote sensing reflectance [3] and GEBCO 2023 bathymetry [4]. The model prediction of chlorophyll-a compared favourably to the OC-CCI. However, careful cross-validation revealed that of seven secondary pigments, only fucoxanthin and peridinin could be discriminated, corroborating earlier work [5]. We applied this model to the global OC-CCI time series to generate pigment predictions from 1993 to 2023 and, following the diagnostic pigment formula updated by Sun et al. in 2023 [2], converted the predicted pigments into a predicted diatom fraction. Careful regression of monthly anomalies of diatoms and chlorophyll predicted by the model revealed significant trends, particularly around Antarctica. To link these changes to environmental drivers, we trained a second model to predict these variables directly from a time series of environmental drivers and tested its performance through a time-series-split cross-validation exercise. For the regions where it performed well, we used SHapley Additive exPlanations (SHAP) [6] to explain how the forecast model makes its predictions and to reveal which environmental drivers dominate in different regions of the ocean on a per-pixel basis. 
References:
1. Vidussi, Francesca, et al. "Phytoplankton pigment distribution in relation to upper thermocline circulation in the eastern Mediterranean Sea during winter." Journal of Geophysical Research: Oceans 106.C9 (2001): 19939-1995
2. Sun, Xuerong, et al. "Coupling ecological concepts with an ocean-colour model: Phytoplankton size structure." Remote Sensing of Environment 285 (2023): 113415
3. Sathyendranath, S.; Jackson, T.; Brockmann, C.; Brotas, V.; Calton, B.; Chuprin, A.; Clements, O.; Cipollini, P.; Danne, O.; Dingle, J.; Donlon, C.; Grant, M.; Groom, S.; Krasemann, H.; Lavender, S.; Mazeran, C.; Mélin, F.; Müller, D.; Steinmetz, F.; Valente, A.; Zühlke, M.; Feldman, G.; Franz, B.; Frouin, R.; Werdell, J.; Platt, T. (2021): ESA Ocean Colour Climate Change Initiative (Ocean_Colour_cci): Version 5.0 Data. NERC EDS Centre for Environmental Data Analysis, 19 May 2021.
4. GEBCO Compilation Group (2023) GEBCO 2023 Grid (doi:10.5285/f98b053b-0cbc-6c23-e053-6c86abc0af7b)
5. Stock, Andy, and Ajit Subramaniam. "Accuracy of empirical satellite algorithms for mapping phytoplankton diagnostic pigments in the open ocean: a supervised learning perspective." Frontiers in Marine Science 7 (2020): 599.
6. Lundberg, Scott. "A unified approach to interpreting model predictions." arXiv preprint arXiv:1705.07874 (2017).
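The SHAP attributions used in the abstract rest on Shapley values: each feature's contribution is its marginal effect on the prediction, averaged over all feature orderings. A toy sketch with an invented linear model (so the exact attribution is known in closed form; real SHAP uses efficient approximations of this computation):

```python
import itertools, math

# Invented linear model standing in for the forecast model; the three
# inputs could be thought of as standardized environmental drivers.
def model(x):
    return 1.0 + 2.0 * x[0] + 0.5 * x[1] - 1.5 * x[2]

def shapley(x, baseline):
    # Exact Shapley values: average each feature's marginal contribution
    # over every ordering in which features are switched from baseline to x.
    n = len(x)
    phi = [0.0] * n
    for order in itertools.permutations(range(n)):
        present = list(baseline)
        prev = model(present)
        for i in order:
            present[i] = x[i]
            cur = model(present)
            phi[i] += cur - prev
            prev = cur
    return [p / math.factorial(n) for p in phi]

x = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]
phi = shapley(x, baseline)

# For a linear model, each attribution equals coefficient * (x - baseline)
print([round(p, 6) for p in phi])  # [2.0, 1.0, -4.5]
```

The attributions sum to the difference between the prediction and the baseline prediction, which is the property that makes per-pixel driver maps interpretable.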
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Evaluating topographic characteristics and population density in an Antarctic penguin colony using UAV-driven deep learning models

Authors: Oleg Belyaev Korolev, Dr. Alejandro Román, Dr. Josabel Belliure, Dr. Gabriel Navarro, Dr. Luis Barbero, Dr. Antonio Tovar-Sánchez
Affiliations: Institute Of Marine Sciences Of Andalusia, Alcala University, Cadiz University
This study examines the ecological role of chinstrap penguins (Pygoscelis antarcticus) in Antarctica, focusing on their population dynamics, behaviour, and the environmental impacts produced by climate change. Penguins play a key role in nutrient cycling and trace metal dynamics, highlighting the relevance of research efforts in characterizing their local biochemical contributions to the environment. Using UAVs equipped with RGB and multispectral sensors, the study mapped the Vapour Col penguin colony on Deception Island, Antarctica, identifying several runoff-discharge points where guano and other materials enter the marine environment, enriching coastal waters with nutrients and trace metals such as iron. These findings provide data for establishing environmental sampling stations to better understand nutrient transfer in the Southern Ocean. Additionally, deep-learning models, specifically YOLOv8, were used to estimate population size, yielding a range of 13,250 to 22,000 breeding pairs during the 2021/2022 season. Adjustments were made for late-season data collection by simulating clutch initiation dates, improving accuracy. The study also tested using chick counts as a proxy for adult numbers, offering an alternative for future assessments. Results show a stable population compared to past decades, despite previous declines, suggesting some resilience in this colony. The integration of UAVs and deep learning provides a precise, non-invasive, and efficient way to monitor wildlife. Unlike traditional ground-based methods, which are labour-intensive and disruptive, UAVs captured high-resolution data across remote areas, while deep-learning models processed them to identify individual penguins and map their distribution. The study also linked guano-stained areas to chick presence, showing how spatial analyses can explain habitat use during critical life stages. Beyond population estimates, the research highlights the broader ecological importance of penguin colonies. 
By enriching local marine ecosystems, they play a role in supporting primary productivity and food web dynamics. Identifying key discharge points where nutrients enter the ocean offers new opportunities for targeted sampling and biochemical studies. These areas are likely hotspots for marine productivity, driven by the inputs from penguin colonies.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Biogeography of Arctic phytoplankton groups revealed from 20+ years of pigment data

Authors: Alexander Hayward
Affiliations: Danish Meteorological Institute
The Arctic is undergoing rapid environmental changes, with warming occurring at a rate faster than any other region on Earth. This warming, driven primarily by atmospheric greenhouse gas increases, has resulted in dramatic reductions in sea ice coverage and a shift from multi-year to predominantly first-year ice. These changes have affected Arctic marine ecosystems, including phytoplankton dynamics. Over recent decades, phytoplankton abundances have increased significantly due to longer growing seasons and greater light and nutrient availability. Phytoplankton are foundational to Arctic ecosystem processes and carbon export to ocean depths; however, their ecological contributions are not uniform across taxa. Among Arctic phytoplankton, diatoms are particularly significant due to their high lipid content, which makes them a vital food source for Calanus copepods, a key link to higher trophic levels. Diatoms also play a crucial role in the biological carbon pump, sequestering atmospheric CO2 more effectively than other groups. Despite their importance, there remains a limited understanding of the biogeographical patterns of phytoplankton communities at a circumpolar scale. Addressing this gap, we conducted a collaborative, community-driven analysis, assembling the largest dataset of Arctic phytoplankton pigments to date, derived from high-performance liquid chromatography (HPLC). This dataset comprises over 8,000 samples collected from the mid-1990s to the present and represents diverse Arctic environments, including coastal waters, open-ocean, and ice-covered regions. Using the pigment inversion method phytoclass, we quantified chlorophyll a concentrations for major phytoplankton groups: diatoms, haptophytes, green algae, pelagophytes, dinoflagellates, and cryptophytes. Cluster analysis enabled us to identify distinct phytoplankton community types and map their spatial distributions across the Arctic. 
This work has critical implications for ocean colour remote sensing, particularly in the context of hyperspectral capabilities from upcoming satellite missions such as NASA’s Plankton, Aerosol, Cloud, and Ecosystem (PACE) mission. Moreover, these findings provide a foundation for modeling phytoplankton group dynamics over time, offering insights into their responses to environmental changes and informing predictions about future Arctic ecosystem conditions.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Advancing Cloud Masking for Marine Pollution Detection

Authors: Paraskevi Mikeli, Dr Katerina Kikaki, Dr Ioannis Kakogeorgiou, Mr Simon Vellas, Prof Konstantinos Karantzalos
Affiliations: National Technical University of Athens, Hellenic Centre for Marine Research, Archimedes/Athena RC
Protecting aquatic ecosystems is fundamental to global sustainability, as emphasized by the United Nations Sustainable Development Goal 14 (SDG 14). Marine pollution, including debris and oil spills, remains a critical environmental issue. While satellite-based technologies hold promise for detecting and monitoring marine pollution, operational remote-sensing solutions face significant challenges, particularly in cloud masking over marine environments. Well-established cloud masking algorithms often struggle in marine regions, either underestimating cloud presence (e.g., S2Cloudless) or misclassifying bright sea features as clouds (e.g., FMASK). These inaccuracies can compromise preprocessing steps in marine pollution detection systems, leading to false positives. In particular, for oil spill monitoring systems, we observed that a major source of false positives is existing algorithms mistakenly identifying clouds as oil spills. This study investigates how integrating cloud data into model training can improve the ability to discriminate clouds from marine pollution and other sea surface features using multispectral high-resolution satellite imagery. We rely on the benchmark Sentinel-2 dataset for marine pollution, MADOS (https://marine-pollution.github.io/), in combination with the state-of-the-art deep learning framework MariNeXt for classification. MADOS contains annotations for marine debris and oil spills, as well as water-related classes such as floating macroalgae, ships, and natural materials. To address cloud masking issues, we expand the MADOS dataset by introducing a new “Cloud” class, capturing diverse cloud characteristics such as size, thickness, background, and lighting variations. The augmented dataset includes 10,000 patches and 99 million pixels annotated for clouds. Retraining the MariNeXt model with the enhanced MADOS dataset, we evaluate its performance qualitatively and quantitatively, comparing results to previous studies. 
In conclusion, incorporating cloud data into model training significantly improves model accuracy and enhances overall sea surface feature classification using multispectral satellite imagery. By expanding the MADOS dataset with cloud annotations, we enhance the accuracy and reliability of marine pollution detection systems. The retrained MariNeXt model demonstrates effective cloud classification capabilities, making it well-suited for operational use. Our findings highlight the necessity of a holistic approach to satellite-based marine pollution monitoring, significantly contributing to global sustainability efforts in line with SDG 14.
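The dataset-augmentation step described above, adding a "Cloud" class alongside the existing MADOS annotations, can be sketched as follows. The label encoding (0 = unlabeled, 1..N = existing classes) and the tiny arrays are hypothetical, for illustration only:

```python
import numpy as np

# Hypothetical MADOS-style annotation patch: 0 = unlabeled,
# 1..N = existing classes (marine debris, oil spill, water features, ...)
labels = np.array([[1, 1, 0],
                   [0, 2, 2],
                   [0, 0, 0]])

# Binary cloud mask for the same patch, e.g. from manual annotation
cloud = np.array([[0, 0, 1],
                  [1, 0, 0],
                  [0, 1, 1]], dtype=bool)

# Append "Cloud" as a new class index after the existing ones
CLOUD_CLASS = labels.max() + 1

# Only overwrite unlabeled pixels, so existing annotations take priority
augmented = labels.copy()
augmented[cloud & (labels == 0)] = CLOUD_CLASS
```

Retraining a segmentation model on such augmented labels lets it learn clouds as an explicit class instead of confusing them with oil spills.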
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Using Satellite Data to Assess Sensitive Habitats and the Pressures They Face

Authors: Eeva Bruun, Mr Lauri Niskanen, Dr Aleksi Nummelin, Dr Jenni Attila, Dr Olli Malve, Dr Elina Miettunen, Mr Janne Mäyrä, Mr Mikko Kervinen, Mr Eero Alkio, Mr Tomi Heilala, Mr Markus Kankainen, Mr Mika Laakkonen, Dr Antti Westerlund
Affiliations: Finnish Environment Institute, Natural Resources Institute Finland, Finnish Meteorological Institute
Satellite observations can reliably assess the state of the environment, its temporal and spatial variations, and the characteristics of marine areas. Typically, satellite observations are used to describe variables related to eutrophication, such as the water's chlorophyll-a content, Secchi depth, and turbidity. In this study, we assess the usefulness of satellite observations for evaluating the impacts of fish farming and marine heat waves on marine habitats in the coastal waters of Finland (Baltic Sea). Additionally, Syke is developing methods to identify small boats in satellite observations, which will be used to assess the pressure small boats exert on marine areas annually. We do this by comparing satellite data with regionally comprehensive flow-through data measured from moving vessels, and point-wise laboratory samples from three marine areas along the Finnish coast. We utilize open-access Copernicus and NASA data and evaluate the benefits of commercial Very High Resolution (VHR) images in the areas of the field measurement sites. VHR images provide more detailed satellite data closer to the shore, where open-access data cannot be utilized. The cost-benefits and reliability of the information obtained through satellite observations are evaluated. We also assess the usefulness of satellite sea surface temperature (SST) data in monitoring marine heat waves in sensitive habitats. For this we use observations from Sentinel-3 SLSTR and Landsat-8/9 TIRS and TIRS-2. TIRS instruments provide data at sub-kilometer resolution, which is particularly important in shallow and complex coastal areas. Additionally, the Finnish Coastal Nutrient Load Model (FICOS) is used to assess how nutrient concentrations change in sensitive areas under different hydrodynamic and environmental conditions. 
The project is being carried out in collaboration with the Finnish Environment Institute (Syke), the Natural Resources Institute Finland (Luke), and the Finnish Meteorological Institute (FMI). The project started in the summer of 2024 and the study will be completed by the end of 2025. Co-funded by the European Union.
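For readers unfamiliar with how marine heat waves are identified from SST series like those discussed above, a common definition (SST exceeding the climatological 90th percentile for at least five consecutive days, after Hobday et al.) can be sketched as below. This is a simplified illustration with made-up temperatures, not the project's method:

```python
import numpy as np

def heatwave_days(sst, clim_p90, min_run=5):
    """Count days in marine-heatwave state: SST above the day's
    climatological 90th percentile for >= min_run consecutive days."""
    hot = sst > clim_p90
    total, run = 0, 0
    for h in hot:
        run = run + 1 if h else 0
        if run == min_run:
            total += min_run      # the whole run qualifies once it hits 5 days
        elif run > min_run:
            total += 1            # then each additional hot day counts
    return total

# Hypothetical daily SST (degC) against a flat climatological 90th percentile
clim_p90 = np.full(12, 10.0)
sst = np.array([9.5, 9.0, 11.0, 11.2, 10.9, 11.5, 11.1,
                9.8, 10.5, 10.6, 9.0, 9.2])
mhw_days = heatwave_days(sst, clim_p90)   # one 5-day qualifying run
```

In practice the percentile varies by day of year, and sub-kilometre SST (e.g. TIRS) lets such counts be made per pixel in complex coastal areas.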
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Leveraging Earth Observation for Phytoplankton Biodiversity Monitoring: The Role of Sentinel-3 OLCI in Supporting MSFD PH1 Indicator and Regional Reporting

Authors: Antoine Mangin, Marine Bretagnon, Philippe Bryère, Anne Goffart
Affiliations: ACRI-ST, ACRI-ST, Site de Brest, quai de la douane, Oceanology, University of Liège
The PH1 indicator of the Marine Strategy Framework Directive (MSFD) is a key descriptor aimed at assessing changes in phytoplankton and zooplankton communities, which are fundamental components of marine ecosystems. This indicator measures relative changes in abundances or biomasses of lifeform pairs based on functional traits to indicate ecological change (Tett et al., 2008) in response to anthropogenic pressures such as eutrophication, pollution, or climate change. In the Mediterranean, the PH1-Phytoplankton indicator is particularly relevant for monitoring the responses of phytoplankton communities in an oligotrophic (nutrient-poor) environment subject to high seasonal and interannual variability. It relies on parameters such as phytoplankton functional groups and types. These observations can be collected through in situ surveys or satellite-based estimates, which provide broader spatiotemporal coverage. Phytoplankton Functional Types (PFT) can be inferred from ocean colour reflectances by analysing the light spectrum reflected by the ocean surface. Different phytoplankton groups have distinct bio-optical properties due to variations in their pigment composition, size, and structure, which influence how they absorb and scatter light. By using advanced algorithms and models that link specific spectral signatures to phytoplankton groups, satellite sensors can estimate the relative abundance or dominance of PFT. This method provides a valuable, large-scale approach to understanding phytoplankton community composition and its role in marine ecosystems. In this study, we will present the methodology to infer PFT community composition using a machine-learning approach applied to Sentinel-3/OLCI reflectances. This algorithm will then be applied to different study sites off Corsica, in the Mediterranean Sea, where historical in situ data are available, allowing the accuracy of the satellite estimates to be assessed. 
The algorithm will be applied to the whole time series to discuss the spatial and temporal evolution of the main phytoplankton groups. We will then demonstrate how these satellite-derived estimates can contribute to assessing the Environmental Status of Pelagic Habitats.
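The general idea of learning a mapping from band reflectances to PFT composition can be sketched as below. The abstract does not specify the model, so a simple ridge regression on synthetic data stands in for the machine-learning approach; the band count, PFT count, and all data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical training set: 6 OLCI visible-band reflectances per sample,
# matched to fractions of 3 phytoplankton functional types (synthetic data
# stands in for the matched in situ / satellite dataset)
n_bands, n_pft, n_samples = 6, 3, 200
W_true = rng.normal(size=(n_bands, n_pft))            # hidden linear relation
X = rng.uniform(0.001, 0.02, size=(n_samples, n_bands))  # Rrs-like values
Y = X @ W_true + rng.normal(scale=1e-4, size=(n_samples, n_pft))

# Ridge regression: closed-form solve of (X'X + lam I) W = X'Y
lam = 1e-6
W_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_bands), X.T @ Y)

Y_pred = X @ W_hat
rmse = np.sqrt(np.mean((Y_pred - Y) ** 2))
```

A real PFT algorithm would use a nonlinear model and careful validation against in situ pigment data, as the study does off Corsica; the sketch only shows the supervised-learning structure of the problem.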
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Mapping the Areal Extent of Perennial Brown Macroalgae Dominated Habitats in Low Transparency Baltic Sea Waters With Sentinel-2 Satellite

Authors: Ele Vahtmäe, Laura Argus, Kaire Toming, Antonia Nyström Sandman, Tiit Kutser
Affiliations: University Of Tartu, AquaBiota
Coastal ecosystems provide numerous critical ecosystem functions and services, such as habitat and food for marine organisms, protection from storms and erosion, nutrient recycling, sediment trapping and carbon storage. Despite the benefits that coastal ecosystems provide, they are under severe threat from human activities such as land use changes, anthropogenic disturbances, pollution, eutrophication and climate change effects driven by the burning of fossil fuels. Loss or degradation of such highly valuable ecosystems results in losses of biodiversity and critical ecosystem services. Belts of the perennial brown macroalgae Fucus spp. play a vital role in providing a wide range of ecosystem services in the Baltic Sea. Fucus vesiculosus is also considered one of the key species for indicating the effects of eutrophication in the Baltic Sea. Monitoring these coastal ecosystems makes it possible to estimate the state of benthic communities, provide evidence of environmental changes and establish the required management methods. Integrating satellite imagery with in situ observations holds great potential for enhancing the scope of benthic ecosystem monitoring. Remote sensing allows the spatial distribution of benthic macroalgae to be assessed at a much larger spatial scale than point-based sampling alone. As such, remote sensing-based methods hold promise for developing new spatial extent indicators for benthic biodiversity assessment. Remote sensing has already shown its effectiveness in estimating the spatial distribution and areal extent of various benthic habitats. However, to use satellite imagery effectively for regular macroalgae monitoring in the low transparency waters of the Baltic Sea, it is essential to understand the level of classification accuracy and to ensure consistency in mapping results. 
In the current study, we use Sentinel-2 satellite data to map the areal extent of perennial brown macroalgae dominated habitats at Estonian and Swedish test sites in the Baltic Sea. Ground truth data from the University of Stockholm and the University of Tartu are used for the calibration and validation of classification algorithms. High quality (cloud-free, low turbidity) Sentinel-2 images from the years 2016-2023 are used to determine the occurrence frequency of brown macroalgae in multitemporal images, providing confidence in brown macroalgae presence. The use of multitemporal images also allows the uncertainty in brown macroalgae areal extent retrievals to be assessed, which is not achievable if only a single image is used for mapping.
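The per-pixel occurrence frequency described above can be computed from a stack of per-image classifications as sketched below. The label encoding, the invalid-pixel handling, and the 0.66 confidence cut-off are illustrative assumptions, not the study's actual parameters:

```python
import numpy as np

# Hypothetical stack of per-image classifications for 3 acquisitions over a
# 2 x 3 pixel area: 1 = brown macroalgae, 0 = other bottom / water,
# NaN = invalid (cloud, turbidity, glint)
stack = np.array([
    [[1, 1, 0], [np.nan, 0, 1]],
    [[1, 0, 0], [1, 0, 1]],
    [[1, 1, 0], [1, np.nan, 1]],
], dtype=float)

n_valid = np.sum(~np.isnan(stack), axis=0)   # valid observations per pixel
n_algae = np.nansum(stack, axis=0)           # macroalgae detections per pixel
frequency = np.divide(n_algae, n_valid,
                      out=np.zeros_like(n_algae), where=n_valid > 0)

# Flag macroalgae presence only where it is seen in most valid acquisitions
confident = frequency >= 0.66
```

The spread of per-pixel frequencies across the stack also gives a direct handle on the areal-extent uncertainty that a single-image map cannot provide.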
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Combining open-access SAR and multispectral images with contextual environmental information to improve oil-spill detection in the Persian/Arabian Gulf

Authors: Alexis Culot, Dr. Qiang Wang, Prof. Emmanuel Hanert
Affiliations: Earth and Life Institute (ELI), UCLouvain, Royal Belgian Institute of Natural Sciences (RBINS), Institute of Mechanics, Materials and Civil Engineering (IMMC), UCLouvain
In a context where oil exploration, transport, and processing have significantly increased to meet an ever-evolving energy demand, the risk of marine pollution by oil has intensified. Beyond environmental disasters, these oil spills also threaten coastal infrastructure, disrupting maritime traffic, clogging desalination plant intakes, and harming fishing, aquaculture and key maritime ecosystems. This is particularly true for the Persian/Arabian Gulf, one of the most oil-polluted seas in the world. It is crossed by approximately 25,000 oil tanker movements each year and hosts 34 large oil fields operated by 800 wells, as well as 25 major oil terminals. On average, 260,000 tons of oil are spilled—accidentally or intentionally—into the Gulf each year. To mitigate the consequences of this pollution, it is necessary to better understand the risks and establish an early detection system for oil spills. While orbital synthetic aperture radar (SAR) images are effective for monitoring vast areas, their periodicity is not always sufficient for a rapid response system. It is therefore essential to complement them with other types of sensors to improve their spatial and temporal resolution. Moreover, oil spill detection in the Gulf is exceptionally challenging due to environmental and technical complexities, including widespread look-alike phenomena such as algal blooms, low wind zones and ocean currents, as well as significant radio frequency interference (RFI) in radar acquisitions. A tailored oil spill monitoring system is therefore needed to address these limitations while providing reliable and timely information. Here, we seek to detect oil spills using an approach that combines a hierarchical split-based algorithm (HSBA) with feature extraction to analyse SAR and multispectral remote sensing data while incorporating additional geospatial and environmental data. 
To that end, we use a variety of sensors to combine their respective advantages and develop a comprehensive and versatile oil spill detection methodology. SAR images are particularly effective for detecting oil spills due to their ability to differentiate between the roughness of the sea surface and that of pollutants, regardless of light or weather conditions. We hence consider Sentinel-1 and Radarsat data. However, the exclusive use of SAR images in an open-source system presents limitations, particularly in terms of revisit frequency. We therefore supplement SAR data with multispectral images from Sentinel-2 and Landsat using the oil spill index (OSI). SAR and multispectral data are first analysed with the HSBA method to highlight dark areas, hence possible oil spills, while minimizing the detection of look-alikes. HSBA leverages the statistical distribution of pixel intensity values. This algorithm identifies regions in an image where two distinct normal populations coexist. These regions are then used to parameterize a region-growing method, specifically a flood algorithm, which is subsequently applied to the entire image, enabling it to highlight anomalies that stand out from the water surface, such as oil spills. This step isolates dark objects across the image, ensuring accurate segmentation of potential oil spills, which typically appear as dark patches in SAR imagery, while minimizing look-alike detection. An additional feature extraction and classification step is used to discard any remaining look-alikes not filtered out by the HSBA. In this step, radiometric, geometric, and texture features are extracted from the detected dark objects and used in conjunction with a rule-based classification approach. These features include the number of dark objects (NDO), the standard deviation of dark object intensities (StdDO), the object power-to-mean ratio (OPMR), and the ratio of NDO to the number of pixels in the chip (NDO/NPC). 
The rule-based classification is trained on data from confirmed oil spills in the Gulf as well as in other regions worldwide, such as the Singapore Strait, the Mediterranean Sea, and the Gulf of Mexico. This broader dataset is essential due to the limited availability of comprehensive oil spill datasets from the Gulf. We have also enhanced the oil spill detection algorithm by incorporating contextual information about environmental conditions at the time of detection. This additional layer helps to further filter out look-alikes by integrating data on surface winds, chlorophyll content, presence of underwater structures such as coral reefs and sandbanks, and a Radio Frequency Interference (RFI) probability map. Furthermore, a map of oil platform locations has been included to provide additional context, helping the algorithm make more informed decisions. By leveraging this contextual information, the algorithm achieves a higher level of accuracy in distinguishing true oil spills from false positives. We validated our method using more than 50 historical oil spills, including events in the Gulf as well as other regions worldwide, such as the Singapore Strait and Java Sea, the Mediterranean Sea, the Gulf of Mexico, Mauritius, Trinidad and Tobago, Venezuela, and the Philippines. Additionally, the algorithm will be applied to time series of Sentinel-1, Sentinel-2, Radarsat and Landsat images from 2024 over the Gulf to assess the frequency of oil spills and illegal discharges occurring in the region. The implications of this work will extend beyond oil spill detection, as the results will be integrated with high-resolution hydrodynamic and oil spill dispersal simulations in the Gulf to forecast dispersion of detected oil spills. By incorporating regional currents, winds and waves, these simulations illustrate how such a system can serve as the backbone of an operational oil spill early warning system in the Gulf. 
Furthermore, the detection algorithm relies exclusively on open-access datasets, ensuring that the approach can be quickly and effectively implemented in other regions worldwide, providing a scalable, open-source solution for global oil spill monitoring and response systems.
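The dark-object step described in this abstract (splitting a tile into two intensity populations, then region-growing from the dark one) can be sketched as follows. HSBA proper fits two Gaussian populations hierarchically over tiles; here a single-tile Otsu split stands in for that test, and the synthetic "slick" image is purely illustrative:

```python
import numpy as np
from collections import deque

def otsu_threshold(values, n_bins=64):
    """Split a tile into two intensity populations (dark slick vs. sea
    clutter) by maximising between-class variance; a stand-in for the
    two-Gaussian coexistence test used by HSBA."""
    hist, edges = np.histogram(values, bins=n_bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_var = centers[0], -1.0
    for i in range(1, n_bins):
        w0, w1 = p[:i].sum(), p[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:i] * centers[:i]).sum() / w0
        m1 = (p[i:] * centers[i:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t

def grow_dark_region(img, seed, threshold):
    """4-connected flood fill from a seed, keeping pixels below threshold."""
    mask = np.zeros(img.shape, dtype=bool)
    q = deque([seed])
    while q:
        r, c = q.popleft()
        if (0 <= r < img.shape[0] and 0 <= c < img.shape[1]
                and not mask[r, c] and img[r, c] < threshold):
            mask[r, c] = True
            q.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return mask

# Synthetic SAR-like tile: bright sea clutter with a dark rectangular "slick"
rng = np.random.default_rng(1)
tile = rng.normal(0.8, 0.05, size=(40, 40))
tile[10:20, 5:30] = rng.normal(0.2, 0.05, size=(10, 25))

t = otsu_threshold(tile.ravel())
seed = np.unravel_index(np.argmin(tile), tile.shape)  # darkest pixel as seed
slick = grow_dark_region(tile, seed, t)
```

The feature extraction and rule-based classification described above would then operate on such masks to reject remaining look-alikes.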
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Offshore Environmental Light Pollution in the UK Exclusive Economic Zone

Authors: Scotia Kaczor, Sara McGourty, Austin Capsey
Affiliations: UK Hydrographic Office
Artificial Light at Night (ALAN) is a growing but underappreciated environmental stressor in marine ecosystems within the UK Exclusive Economic Zone (EEZ). While ALAN's impacts on terrestrial and urban environments are well-documented (Mu et al 2021; Jiang et al 2018; Jiang et al 2017), its consequences for marine systems have gained significant attention only recently (Elvidge et al 2024; Zeng et al 2023; Smyth et al 2022; Zhao et al 2021). ALAN disrupts natural light regimes that govern essential ecological and biological processes, including reproduction, foraging, migration, and predator-prey interactions. These disruptions threaten individual species and ecosystems, particularly in habitats finely tuned to natural light cycles, such as those governed by lunar phases and daily alterations in light spectra (Marangoni et al 2022). This study integrates emerging research on ALAN's effects in marine environments with a specific focus on temporal variations and evolving trends within the UK EEZ. Artificial light from offshore infrastructure (petrochemical platforms and windfarms), vessels, coastal urbanisation, and shipping routes contributes significantly to light pollution in marine areas (Elvidge et al 2024; Polinov et al 2022). ALAN impacts various marine species, including seabirds, cetaceans, turtles, fish, and zooplankton, altering critical behaviours like migration, navigation, and Diel Vertical Migration (DVM) (Marangoni et al 2022). These changes disrupt food webs and nutrient cycles and interact with other anthropogenic factors which ultimately affect marine biodiversity and ecosystem services (Tidau et al 2021; Gaston et al 2021). Radiance data from the Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB) sensor aboard the joint NASA/NOAA Suomi National Polar-orbiting Partnership (Suomi NPP) satellite was utilised for this study (Nurbandi et al 2016; Baugh et al 2013). 
Monthly and annual composite images of radiance were analysed across the UK EEZ, with a focus on identifying temporal patterns and trends within three separate regions with different offshore activity. A threshold of 0.8 nanoWatts/sr/cm^2 was set to mitigate the effects of background noise and sensor degradation. This threshold was based on results from a calibration area distant from major offshore infrastructure. Annual radiance composites from 2014 to 2023 revealed a 40% decrease in mean radiance within the UK EEZ, with overall mean radiance declining from 5.0 to 3.0 nanoWatts/sr/cm^2, and reduced variability over time. In the NW Continental Shelf and South-Western Approaches regions, radiance levels are below the calibration threshold of 0.8 nanoWatts/sr/cm^2, while in the Southern North Sea, mean radiance declined by 30%, from 2.89 to 2.00 nanoWatts/sr/cm^2. Cross-referencing the VIIRS DNB data with datasets on persistent offshore infrastructure, ship anchorages and Marine Protected Areas (MPAs) revealed a strong correlation between bright radiance zones and offshore infrastructure. Highly illuminated regions often overlapped with MPAs, raising concerns about the impacts of ALAN on sensitive marine ecosystems. Fluctuations in offshore activities and the resulting radiance levels may stem from economic, regulatory, and technological factors. Economic shifts, such as changes in oil prices, directly impact drilling and production activities. Regulatory changes, particularly in environmental policies, can restrict certain offshore operations, potentially reducing radiance levels. Meanwhile, technological advancements influence activity levels: for instance, innovations in offshore wind technology may increase installation and maintenance operations, thereby affecting radiance data. Together, these factors may be contributing to the observed variability in offshore radiance and activity in the UK EEZ. 
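The thresholding and trend computation described above can be illustrated with a toy calculation; the arrays are hypothetical values chosen to reproduce the reported 5.0 to 3.0 nW/sr/cm^2 decline, not the study's actual composites:

```python
import numpy as np

NOISE_FLOOR = 0.8  # nanoWatts/sr/cm^2, the study's calibration threshold

def lit_statistics(radiance):
    """Mean radiance of pixels above the noise floor, plus the lit-pixel
    count; values below the floor are treated as background noise rather
    than offshore lighting."""
    lit = radiance[radiance > NOISE_FLOOR]
    if lit.size == 0:
        return 0.0, 0
    return float(lit.mean()), int(lit.size)

# Hypothetical annual composites for the same region in two years
y2014 = np.array([[0.1, 0.3, 6.0], [4.5, 0.2, 4.5]])
y2023 = np.array([[0.1, 0.2, 3.6], [2.7, 0.1, 2.7]])

m14, _ = lit_statistics(y2014)
m23, _ = lit_statistics(y2023)
pct_change = 100.0 * (m23 - m14) / m14   # negative = declining radiance
```

Masking below the noise floor before averaging matters: including near-zero background pixels would swamp the offshore-lighting signal the trend is meant to track.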
The study’s findings underscore the potential risks posed by ALAN to MPAs in the UK EEZ: artificial lighting could disrupt species behaviour and ecosystem functions, compromising conservation goals. The evidence suggests that artificial light is encroaching into dark marine spaces, including MPAs, possibly threatening the integrity of these critical conservation areas. While the observed downward trend in radiance from 2014 to 2023 suggests progress, particularly for MPA preservation, the study emphasises the need for long-term evaluation. Developing guidelines to mitigate light pollution in ecologically sensitive marine habitats should be incorporated into future regulatory frameworks. Mitigating the effects of ALAN on marine ecosystems is likely to be complex but necessary. Recommendations from previous research include reducing light intensity, adopting specific wavelengths less harmful to marine species, and designating "Marine Dark Sky Parks" within MPAs to restrict light pollution (Davies et al 2016). The establishment of a foundation record of artificial light levels, as demonstrated in this study, is crucial for informing future policy aimed at safeguarding marine biodiversity. As the Suomi NPP satellite approaches the end of its operational life, a new constellation of satellites will take over, ensuring the continuity of this critical monitoring. This transition is expected to maintain current observational standards but also creates opportunities for other satellite missions to develop enhanced spatial and spectral resolution sensors, advancing the effectiveness of ALAN monitoring. Stricter regulatory frameworks based on robust foundation data are essential to manage the encroachment of artificial lighting, particularly in sensitive and protected areas. In conclusion, ALAN can pose a significant threat to marine ecosystems within the UK EEZ, especially in areas of high ecological value such as MPAs. 
By integrating remote sensing technologies with ecological and nautical chart data, this study provides critical insights into the spatial and temporal dynamics of marine light pollution. This evidence could be essential for shaping future marine conservation policies and ensuring the long-term sustainability of the UK's offshore environments.
References:
Baugh, K., Hsu, F.C., Elvidge, C.D. and Zhizhin, M., 2013. Nighttime lights compositing using the VIIRS day-night band: Preliminary results. Proceedings of the Asia-Pacific Advanced Network, 35(0), pp.70-86.
Davies, T.W., Duffy, J.P., Bennie, J. and Gaston, K.J., 2014. The nature, extent, and ecological implications of marine light pollution. Frontiers in Ecology and the Environment, 12(6), pp.347-355.
Davies, T.W., Duffy, J.P., Bennie, J. and Gaston, K.J., 2016. Stemming the tide of light pollution encroaching into marine protected areas. Conservation Letters, 9(3), pp.164-171.
Depledge, M.H., Godard-Codding, C.A. and Bowen, R.E., 2010. Light pollution in the sea. Marine Pollution Bulletin, 60(9), pp.1383-1385.
Elvidge, C.D., Ghosh, T., Chatterjee, N., Zhizhin, M., Sutton, P.C. and Bazilian, M., 2024. A Comprehensive Global Mapping of Offshore Lighting. Earth System Science Data Discussions, 2024, pp.1-34.
Gaston, K.J., Ackermann, S., Bennie, J., Cox, D.T., Phillips, B.B., Sánchez de Miguel, A. and Sanders, D., 2021. Pervasiveness of biological impacts of artificial light at night. Integrative and Comparative Biology, 61(3), pp.1098-1110.
Jiang, W., He, G., Long, T., Guo, H., Yin, R., Leng, W., Liu, H. and Wang, G., 2018. Potentiality of using Luojia 1-01 nighttime light imagery to investigate artificial light pollution. Sensors, 18(9), p.2900.
Jiang, W., He, G., Long, T., Wang, C., Ni, Y. and Ma, R., 2017. Assessing light pollution in China based on nighttime light imagery. Remote Sensing, 9(2), p.135.
Marangoni, L.F., Davies, T., Smyth, T., Rodríguez, A., Hamann, M., Duarte, C., Pendoley, K., Berge, J., Maggi, E. and Levy, O., 2022. Impacts of artificial light at night in marine ecosystems—A review. Global Change Biology, 28(18), pp.5346-5367.
Mu, H., Li, X., Du, X., Huang, J., Su, W., Hu, T., Wen, Y., Yin, P., Han, Y. and Xue, F., 2021. Evaluation of light pollution in global protected areas from 1992 to 2018. Remote Sensing, 13(9), p.1849.
Nurbandi, W., Yusuf, F.R., Prasetya, R. and Afrizal, M.D., 2016, November. Using Visible Infrared Imaging Radiometer Suite (VIIRS) imagery to identify and analyze light pollution. In IOP Conference Series: Earth and Environmental Science (Vol. 47, No. 1, p. 012040). IOP Publishing.
Polinov, S., Bookman, R. and Levin, N., 2022. A global assessment of night lights as an indicator for shipping activity in anchorage areas. Remote Sensing, 14(5), p.1079.
Smyth, T.J., Wright, A.E., Edwards-Jones, A., Mckee, D., Queirós, A., Rendon, O., Tidau, S. and Davies, T.W., 2022. Disruption of marine habitats by artificial light at night from global coastal megacities. Elem Sci Anth, 10(1), p.00042.
Tidau, S., Smyth, T., McKee, D., Wiedenmann, J., D’Angelo, C., Wilcockson, D., Ellison, A., Grimmer, A.J., Jenkins, S.R., Widdicombe, S. and Queirós, A.M., 2021. Marine artificial light at night: An empirical and technical guide. Methods in Ecology and Evolution, 12(9), pp.1588-1601.
Zeng, H., Jia, M., Zhang, R., Wang, Z., Mao, D., Ren, C. and Zhao, C., 2023. Monitoring the light pollution changes of China’s mangrove forests from 1992-2020 using nighttime light data. Frontiers in Marine Science, 10, p.1187702.
Zhao, X., Li, D., Li, X., Zhao, L. and Wu, C., 2018. Spatial and seasonal patterns of night-time lights in global ocean derived from VIIRS DNB images. International Journal of Remote Sensing, 39(22), pp.8151-8181.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Validation of Marine Debris Modelling Using Monitoring of Surfactants in the Black Sea Using Radar Remote Sensing

Authors: Dr Morgan Simpson, Armando Marino, Evangelos Spyrakos, Ms Violeta Slabakova, Professor Andrew Tyler
Affiliations: University Of Stirling, Institute of Oceanology - Bulgarian Academy of Sciences (IOBAS)
Plastic pollution is a pervasive threat to the environment and to marine and human health. There are growing concerns about the impacts of marine plastic pollution on the health of these systems. Plastics account for approximately 60–95% of global ocean marine litter, making them the most common type of marine debris. However, even with an abundance of plastic litter within our marine environments, the movement and accumulation of plastic debris are not well mapped. Remote sensing techniques have been explored for aiding the monitoring of marine plastic pollution. Remote sensing has already been successfully employed for monitoring multiple marine phenomena and processes, such as coastal currents, sea surface temperature, chlorophyll-a, oil spills and much more. Both passive and active systems have been investigated for their capabilities in monitoring marine litter, because their observations provide global coverage, continuous temporal coverage and the ability to coincide with in-situ ground measurements. Recently, radar capabilities for monitoring marine plastic pollution have grown, whether through detection of the marine litter itself or through the use of surfactant proxies. While detection from space is possible, it requires large accumulations or targets to be present when using freely available, coarser-resolution imagery. Therefore, with currently available technologies, utilising surfactants may prove to be the best course of action for monitoring marine plastic pollution and validating models of marine litter. A number of models have been produced in attempts to understand plastic litter movements and volumes within global oceans. These include both observational studies and numerical models. Access to observational data has increased over recent years; however, some regions have only recently received sufficient attention. 
One of these regions is the Black Sea, where marine litter observations and modelling were lacking until recently. Particle density per km² within the Black Sea has been shown to be three times higher than the average values observed within the Mediterranean Sea. This study aims to validate models of the Black Sea by monitoring surfactant slicks within the Sea via Sentinel-1 Synthetic Aperture Radar imagery. The dataset created from this study provides validation for two models which hypothesise where accumulation zones occur throughout the Black Sea. Study Zone: The Black Sea's drainage basin comprises six riparian countries (Romania, Ukraine, Russia, Turkey, Georgia and Bulgaria), with almost one third of the entire land area of continental Europe draining to it. Despite being a vital tourism attraction, fishery area and maritime route, the Black Sea lacks the necessary attention regarding marine litter pollution. The models: This study utilised predictions of where accumulation zones of marine litter, as well as void zones, would appear within the Black Sea. Castro-Rosero et al. (2023) and Stanev & Ricker (2019) both found that the southwest coast of the Black Sea exhibits a high density of floating marine litter in modelling scenarios. This has been attributed to a number of factors, including Ekman and Stokes drift patterns and potentially significant inputs of floating marine litter from the Danube River. Both studies also agree that in the eastern and north-eastern areas of the Black Sea, accumulations of floating marine litter are less common than on the western side of the basin. Castro-Rosero et al. (2023) identified a high accumulation point between Georgia and Turkey, consistent with findings in Stanev and Ricker (2019) and Miladinova et al. (2020); all three studies found high-concentration zones along the southeast coast. 
Based on the findings of the above-mentioned studies, accumulation points within the Black Sea were determined. The European Space Agency’s Sentinel-1 Synthetic Aperture Radar satellite was exploited for this study. The mode of acquisition was Interferometric Wide Swath (IW) Ground Range Detected (GRD), with a spatial resolution of 20 m and a temporal resolution of up to 6 days. Across all 13 combined maximum and minimum accumulation zone sites, every Sentinel-1 image from 2017 was utilised. All images were visually assessed for the dark stripe features that are apparent when surfactants are within a SAR scene. These stripes are visible because floating surfactants dampen the short gravity-capillary waves that are responsible for radar Bragg scattering. Bragg scattering has a minimum wind speed threshold of occurrence, which lies between 2 and 3 m/s; this wind speed, or higher, is required for Bragg waves to be generated. In addition, oil films become undetectable at wind speeds between 10 and 14 m/s due to their mixing into the underlying water by breaking waves. Copernicus / European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis V5 (ERA5) data was utilised to obtain wind speed values for the accumulation zone areas of interest. The wind speed was averaged across the full 50 x 50 km segments of SAR imagery, so that the weather conditions across the whole cell could be accounted for. The wind speed value taken was that of the hour nearest to the acquisition window. For example, an image acquired at 3:48am would have the wind speed values at 4:00am associated with it. Chlorophyll-a data was taken from the Copernicus Marine Environment Monitoring Service (CMEMS), where the Black Sea Bio-Geo-Chemical L4 climatology satellite observations were used to determine chl-a concentrations. 
Similarly to the wind speeds, the chl-a values were averaged across the full 50 x 50 km segments of SAR imagery so that the values of the full cell could be taken into account. A Floating Marine Litter (FML) monitoring survey was carried out in the period 2-18 June 2024 onboard the R/V Mare Nigrum during a H2020 DOOORS project cruise in the Black Sea. Visual observations of FML (> 2.5 cm) were performed following the protocol proposed by the MSFD TG10 Guidance on Monitoring of Marine Litter in European Seas. Surface litter was assessed using the fixed-width strip transect method. Observations were carried out at speeds of around 7.7 (± 0.8) knots, approximately 6 m above sea level. FML was recorded from the bow of the vessel by two observers, each covering one side of the vessel within a 7.5 m observation strip. The length of each transect was measured from start and end geographic coordinates recorded by a portable GPS. To standardise the survey effort, the duration of observations was fixed at 30 min, corresponding to a mean transect length of 7.4 ± 1.72 km. All transects were observed under low wind speed conditions (≤ 5 m/s), recorded with a portable anemometer, and good visibility. Over the entire cruise period, a total of 33 transects were performed, covering a distance of 244 km and corresponding to 16 h of observations. Across all Sentinel-1 images in 2017, there were 324 instances of surfactants visible in maximum accumulation zones and 147 instances within minimum accumulation zones; surfactants were visible in more than twice as many images in maximum zones as in minimum zones. The outcomes from the FML concentrations coincide with the findings of Castro-Rosero et al. (2023) and Stanev & Ricker (2019): the observations show that the eastern region near the Georgian coast is a high-density FML area.
Another agreement between the observational data and the model data is that floating marine litter is less commonly found in the eastern area of the Black Sea than on the western side of the basin. Percentage shares of the major FML categories per transect are also presented: the most abundant category is Plastics, representing 86% of the overall litter quantities detected in the region, followed by Paper/Cardboard with a 7.6% share and Metal with 2.4%. Statistical analysis is also undertaken on the wind speed and chlorophyll-a data to determine the viewing conditions and the nature of the surfactants present within the SAR imagery. Implications for the health of the Black Sea regarding marine litter pollution and surfactant presence within its waters are also discussed.
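The fixed-width strip-transect standardisation described above (a 7.5 m strip on each side of the vessel, mean transect length 7.4 km) reduces to a simple area normalisation. A minimal sketch, with an illustrative item count rather than the cruise data:

```python
# Per-transect FML density from fixed-width strip transects.
# Strip geometry follows the survey description; the item count is a placeholder.

STRIP_WIDTH_M = 7.5 * 2  # both observers combined, one 7.5 m strip per side

def fml_density_per_km2(n_items: int, transect_km: float) -> float:
    """Items per km^2 over a 15 m wide strip of the given length."""
    area_km2 = transect_km * (STRIP_WIDTH_M / 1000.0)
    return n_items / area_km2

# A transect of mean length 7.4 km surveys ~0.111 km^2, so e.g. 12 observed
# items correspond to roughly 108 items per km^2.
density = fml_density_per_km2(12, 7.4)
print(round(density, 1))
```

Fixing the observation duration at 30 min, as the protocol does, keeps these per-transect areas comparable across the 33 transects.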
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Spatio-Temporal dynamics of phytoplankton in the Ross Sea (Antarctica)

Authors: Graça Sofia Nunes, Ana C. Brito, Afonso Ferreira
Affiliations: MARE - Marine and Environmental Sciences Centre/ARNET—Aquatic Research Network, Faculdade de Ciências, Universidade de Lisboa, Departamento de Biologia Vegetal, Faculdade de Ciências, Universidade de Lisboa
The Southern Ocean, while being one of the most remote oceans on the planet, plays a fundamental role in the global climate system. The Ross Sea, located in the Pacific sector of the Southern Ocean, is a Marine Protected Area incorporating multiple marine subsystems, including polynyas (areas of open water surrounded by sea ice) and marginal ice zones in offshore and coastal areas. This region is highly influenced by katabatic winds, i.e. high-density descending winds along the southern and western sides of the Ross Ice Shelf and the coast of Victoria Land. These features, coupled with the region's high variability, have an important influence on sea ice coverage, currents, and nutrient availability, affecting the region's phytoplankton communities. Phytoplankton blooms are pivotal in the carbon cycle of the Ross Sea and support a rich biodiversity, including krill, fish, penguins, seals, and whales. However, studies in the polar regions face logistical and weather constraints on in-situ data collection. Remote sensing has emerged as a substitute for sparse in-situ data, providing large-scale temporal and spatial coverage and enabling a better understanding of phytoplankton dynamics and their ecological and biogeochemical importance for protecting ocean health. Satellite remote sensing and ocean modelling data were compiled to obtain a long-term dataset spanning 25 years (1998–2022) for the Ross Sea region. This study intends to fill a large knowledge gap by better understanding the environmental drivers of phytoplankton biomass and bloom phenology. Unlike the majority of past research, which often focused on analysing specific years or areas, the extensive spatial and temporal coverage of this study will provide key insights that will help us better understand this ecologically important region.
The primary goal of this study was to further understand how phytoplankton biomass in the Ross Sea has changed over the past decades, using remote sensing data from 1998 to 2022 (nearly 25 years' worth of data), making it the first long-term contribution of this kind. To this end, several specific objectives were established: (i) investigate the spatio-temporal variation of chlorophyll-a concentrations; (ii) analyse phytoplankton bloom phenology changes over the 25 years; (iii) assess how abiotic parameters influenced chlorophyll-a variability. The variables used in this study were chlorophyll-a (chl-a; as a proxy of phytoplankton biomass), sea ice coverage, currents, wind, mixed layer depth, and salinity. Given the large and dynamic nature of the Ross Sea, the area was divided into three phenoregions (zones with coherent phenological patterns) using a hierarchical clustering analysis based on chl-a, the yearly number of ice-free days, and an index of the reproducibility of the annual seasonal cycle of chl-a (SCR). Random forest models were then conducted for each phenoregion to identify the abiotic factors most strongly influencing each phenological metric. Phenological indicators, including Bloom Start (week of the year of the start of the main bloom in the cycle), Bloom End (week of the year of the end of the main bloom in the cycle), Bloom Duration (duration of the main bloom in the cycle) and Bloom Area (biomass accumulated over the main bloom), were computed to assess phytoplankton bloom phenology between 1998 and 2022, from September to April. In addition, the SCR was determined to evaluate variability; this index analyses the similarity of each growing cycle to the average growing cycle. To evaluate trends in chl-a concentration and bloom timings, a pixel-by-pixel trend analysis was performed for chl-a and multiple phenological metrics. We observed that the most oceanic phenoregion (the offshore area extending to around 65°S) is the least productive.
Phytoplankton blooms there were observed to start earlier, around October–November, and last for longer periods (twelve weeks). This phenoregion is mainly influenced by wind and ocean currents, which is related to its proximity to the Ross Sea gyre. In the southernmost, coastal region (from around 70°S to the coastline), blooms start later, around November–December, and have a shorter duration (eight weeks), yet exhibit the highest biomass. This phenoregion is greatly influenced by sea ice cover and wind, since the extent of the Ross Sea Polynya depends on wind direction and intensity, affecting ice coverage. Finally, the intermediate region (located between the previous two) shows less similarity between annual cycles and its average seasonal cycle, i.e. it is characterised by a lower SCR. This was the most dynamic phenoregion, where phytoplankton blooms have an average duration of nine weeks, starting in December–January. Phytoplankton blooms in this region seem to be mainly influenced by sea ice cover and ocean currents. Similar to the coastal region, sea ice coverage is an important element in shaping bloom phenology, since in years with less ice cover blooms tend to start earlier. Currents are also important because they affect the distribution of nutrients and sea ice, which can affect the duration of the bloom. Our findings highlight how the oceanographic complexity of the Ross Sea shapes phytoplankton dynamics, emphasizing the importance of accounting for spatial heterogeneity when studying primary productivity in this region. We observed distinct regional patterns in chl-a variability, bloom phenology, and abiotic drivers throughout the Ross Sea, as well as increasing long-term trends in biomass in open waters. These results underline the importance of long-term, high-resolution studies and multidisciplinary approaches in predicting the impacts of climate change on Antarctic marine ecosystems.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Relationships Between Shelf-sea Fronts and Biodiversity Revealed Using Earth Observation Data Improve Planning of Offshore Renewable Developments

Authors: Peter Ian Miller, Emma Sullivan, Beth Scott, James Waggitt, Will Schneider, Deon Roos, Andrey Kurekin, Georgina Hunt, Graham Quartly, Juliane Wihsgott, Morgane Declerck, Elin Meek
Affiliations: Plymouth Marine Laboratory, University of Aberdeen, University of Bangor
Fronts – the interface between water masses – are hotspots for rich and diverse marine life, influencing the foraging distribution of many megafauna. We have analysed a long time-series of Earth observation (EO) data using novel algorithms to characterise the distribution and dynamics of ocean fronts, and used these to investigate links to biodiversity hotspots and to explore key drivers for changes in fronts and these relationships. This multi-sensor study comprises front detection using both sea-surface temperature and ocean colour products, and internal wave detection from synthetic aperture radar (SAR). The synergy of several EO modalities allows us to address the complex relationships between stressors, oceanography and biodiversity. For example, the higher-resolution (300 m) fronts detected using the Sentinel-3 OLCI ocean colour sensor enable estimation of coastal biodiversity, complementing the thermal fronts (1 km) more suited to shelf-sea regions. FRONTWARD (Fronts for Marine Wildlife Assessment for Renewable Developments) aims to provide evidence to justify the inclusion of frontal locations in marine spatial planning for the UK, most pressingly for offshore windfarm zones. In this multi-disciplinary research, biodiversity hotspots are identified using a biodiversity index, created from an unprecedented collation of at-sea observations of seabirds, fish and cetaceans spanning several decades (1980s-2020s). Generalised additive models (GAMs) reveal the spatial influence of fronts and internal waves on biodiversity, and provide predictions of taxonomic diversity and distribution based on EO-detected front maps. The outcomes from this project will feed into the evidence base for marine conservation, and into decisions on the siting and consenting of future offshore renewable energy projects that minimise disturbance of ecosystems while expediting the transition to net zero.
To achieve these outcomes we are working with multiple implementation teams responsible for marine spatial planning at The Crown Estate, the organisation that leases UK marine zones to offshore developers, and also the ongoing and pending UK research programmes studying ecosystem effects of fixed and floating wind farms (ECOWind and ECOFLOW).
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: REWRITE project - Rewilding and Restoration of Intertidal Sediment Ecosystems for Carbon Sequestration, Climate Adaptation and Biodiversity Support

Authors:
Affiliations: Nantes Université
The climate and biodiversity crises are major global challenges of the 21st century. The driving force is the well-documented escalation of atmospheric greenhouse gas (GHG) concentrations, especially carbon dioxide (CO2), due to the disruption of biogeochemical and energy cycles caused by human activity. Consequently, the decade 2011-2020 was around 1.1°C warmer than the pre-industrial period (1850-1900). We are already observing profound changes, including biodiversity loss, more frequent and extreme weather events, as well as sea level rise. Within European coastal zones, intertidal areas, consisting of soft sediment and emerging during each low tide, form seascapes covering more than 10 000 km² along the 35 000 km of tidal coastline. Their three key habitats, constituted by seagrass meadows, salt marshes and mudflats inhabited by photosynthetic biofilms (i.e. microphytobenthos), provide multiple ecosystem services with great potential to cope with the biodiversity-climate crisis and thus to contribute to a number of UN and EU priorities regarding carbon neutrality, climate resilience, biodiversity support and social equity. Nevertheless, an alarming situation has emerged over recent years: these seascapes continue to disappear, to be fragmented and to be polluted, resulting in a decrease in their provision of goods and ecosystem services. REWRITE's ambition is to expand innovative approaches and nature-based solutions for rewilding seascapes constituted by intertidal soft sediment, bridging biodiversity conservation, climate adaptation, and social expectations and uses. To reach this main goal, three key challenges will be addressed: i) Reducing the uncertainty of the future trajectories of intertidal soft sediment seascapes. Due to fragmented knowledge of their ecological and social functioning, the scientific community is currently unable to accurately project their trajectories to 2050.
A deep understanding of the different restoration (active), rewilding (passive) and “do nothing” options, compared to a “business-as-usual” option in the context of erratic and constant changes, is urgently needed. ii) Assessing the cascading effect. Understanding how the effects of increasing CO2, temperature, sea level rise, extreme events and the loss of biodiversity propagate from the local to the global scale is a key factor in enhancing local natural capital for a resilient European shoreline. iii) Assessing how society engages to agree upon and/or overcome the trade-offs of rewilding, considering environmental benefits and societal pressures. Identifying the social and cultural drivers and barriers is crucial to ensure local and national engagement and support, and place-based decisions responsive to local needs, particularly where space requirements for rewilding are a source of conflict. We will develop and use innovative tools and techniques to rapidly quantify and map ecosystem service supply, by coupling remote sensing images with modelling approaches, field campaigns and stakeholder engagement. To address the challenges of upscaling due to the complexity of these seascapes (e.g. mixing and patchiness), we will develop an integrated pathway for a “step-by-step” upscaling, in which each step will validate the next. We will use images from existing Copernicus Sentinel archives (since 2015), as well as images acquired during dedicated synchronous and co-located field campaigns at various scales: laboratory, ground, drone, airborne and satellite, with the objective of mapping biodiversity and its state of conservation, C-sequestration, protection ability from coastal flooding, seascape connectivity and fragmentation, and cultural ecosystem services.
This “step-by-step” upscaling approach will be led synchronously with social innovation to recognise the plural values of nature, using a multi-method approach integrating assessment of cultural values through a combination of a) participatory processes with stakeholders at varied scales (local to European), b) focus groups and interviews with stakeholders of varying power-interest relations, c) a social media approach to capture large-scale perception and d) historical information from peer-reviewed and grey literature. This assessment has a dual goal: i) to quantify and map the plural cultural benefits supplied by the seascape systems and ii) to raise awareness of these values among the different stakeholder categories involved in the cultural values assessment, ensuring that the project outputs have societal relevance and promote societal engagement. This integration of bottom-up and top-down approaches will allow for inclusion and transformation and bring research closer to society, setting the basis for the desired transformative change. REWRITE's ambition is served by a highly interdisciplinary consortium (25 partners from the academic and private sectors, representing 8 European tidal coastal states, as well as the UK, Canada and the USA) with recognised expertise on the climate-biodiversity nexus, fostering synergies among disciplines such as Social Sciences and Humanities, Natural Sciences and resources, and ecosystem management. To reach this ambition, the strength of REWRITE is the “space for time” approach based on 10 demonstrators from Northern to Southern Europe, and from North America to Europe, illustrating a wide panel of environmental constraints, societal uses, coastal management approaches and stakeholder engagement.
Coupling remote sensing, modelling and ground-truthing approaches, REWRITE will perform a joint analysis from the natural and social sciences to understand the historical and current trajectories of intertidal soft sediment (ISS) functioning and project their future. This approach offers a strong basis to co-develop robust scenarios using multivariable constraints, including plural and integrated (i.e. environmental, economic and societal) cost valuations, in order to select the best and lowest-cost options to rewild a resilient European coastline.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Insights into the variability of optically active constituents and phytoplankton dynamics in the Northwestern Iberian Peninsula using an ocean colour inversion model

Authors: Amalia Maria Sacilotto Detoni, Xosé Antonio Padín, Gabriel Navarro, Natalia Rudorff Oliveira, Maria Laura Zoffoli, Antón Velo, Isabel Caballero
Affiliations: Instituto de Investigaciones Marinas, Consejo Superior de Investigaciones Científicas (IIM-CSIC), Instituto de Ciencias Marinas de Andalucía, Consejo Superior de Investigaciones Científicas (ICMAN-CSIC), Instituto Nacional de Pesquisas Espaciais (INPE), Consiglio Nazionale dele Ricerche, Istituto di Scienze Marine (CNR-ISMAR)
Spain leads global mussel aquaculture, with the Galician Rías Baixas estuaries at its core, emphasizing the region's socio-economic importance. Monitoring water quality, particularly phytoplankton dynamics, is critical as harmful algal blooms (HABs) increasingly threaten aquaculture productivity, causing significant economic losses. These challenges underscore the need for a comprehensive understanding of optically active constituents (OACs) and their spatiotemporal dynamics to assess estuarine trophic states and ecosystem responses to climate variability. However, OACs in Northwestern Spain remain poorly characterized, creating a gap in the calibration and validation of bio-optical algorithms and thus hindering efficient monitoring strategies and remote sensing efforts. This study addresses these gaps by analyzing OAC distributions, the contributions of four key phytoplankton groups, and their variability in two rías of the northwestern Iberian Peninsula belonging to the Rías Baixas. Nearly monthly sampling campaigns were conducted from September 2023 to October 2024 in two estuaries with distinct hydrodynamic characteristics: Ría de Arousa, a more sheltered estuary, and Ría de Vigo, characterized by dynamic circulation patterns. Surface water samples (~5 m depth) were collected to measure chlorophyll-a (Chl-a) concentrations. Above-water Remote Sensing Reflectance (Rrs) was extrapolated from irradiance (Ed) and radiance (Lu) data obtained using the PRR-800 profiling radiometer (Biospherical Inc.). Sampling was performed at nine fixed stations distributed across the two estuaries, which have distinct circulation patterns and interactions with oceanic waters.
The Water Colour Simulator (WASI) was applied to estimate inherent optical properties (IOPs) and subsequently derive Chl-a concentrations, as well as the relative contributions of four major phytoplankton groups potentially associated with harmful algal blooms: diatoms, dinoflagellates, cryptophytes, and cyanobacteria. Our results indicated that the WASI simulator performs notably well in deriving IOPs and phytoplankton group contributions from in-situ above-water Rrs measurements in the waters of the Rías de Vigo and Arousa. The findings highlight distinct spatial variability in OAC concentrations between the two estuaries, influenced by coastal morphology and internal water exchange dynamics. Ría de Arousa exhibited elevated concentrations of colored dissolved organic matter (CDOM) and detrital particles, suggesting that reduced circulation limits the occurrence of bloom events. In contrast, Ría de Vigo, characterized by higher water renewal rates, displayed lower detrital concentrations. Phytoplankton group dominance also varied significantly: Ría de Arousa showed a biomass gradient, with diatoms and dinoflagellates prevalent in the outer zones, while cryptophytes dominated the inner estuary. Conversely, Ría de Vigo exhibited a more uniform biomass distribution, with less pronounced variability among phytoplankton groups. These results highlight the importance of continued in-situ campaigns to enhance the calibration and validation of bio-optical models, improving the retrieval accuracy of IOPs from remote sensing data. By deepening our understanding of the interactions between OAC dynamics and environmental variables, this research contributes to advancing remote sensing capabilities and informing sustainable resource management strategies in aquaculture-dependent ecosystems.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A first national seagrass map for Venezuela

Authors: Chengfa Benjamin Lee, Dr. Ana Carolina Peralta Brichtova, MAR ROCA, Dr. Tylar Murray, Oswaldo David Bolivar Rodriguez, Dr. Daniele Cerra, Dr. Frank E. Muller-Karger
Affiliations: German Aerospace Centre (DLR), Institute for Marine Remote Sensing, College of Marine Science, University of South Florida, Institute of Marine Sciences of Andalusia (ICMAN), Spanish National Research Council (CSIC), Institute of Advanced Studies IDEA, Energy and Environment Unit, German Aerospace Centre (DLR)
Coastal wetland habitats, including seagrass meadows, are important for their ecosystem services. Yet there are still many knowledge gaps in the global map of seagrasses, and some countries do not even have a first baseline. One such country is Venezuela, which has extensive seagrass meadows extending along its entire Caribbean Sea coast, but no national seagrass map or systematic in-situ monitoring of seagrass ecosystems. The limited understanding of spatial and temporal trends hinders the development of informed national conservation and restoration strategies and of national blue carbon accounting. Here, we describe results from a new remote sensing study producing the first national seagrass map along the whole Venezuelan Caribbean coast. One strategy for producing an initial regional seagrass map is a multitemporal composite approach, as has been done in Greece, the Mediterranean Sea, East Africa, the Bahamas and the Seychelles. Often, issues of image quality, cloud cover, sun glint, and atmospheric and water turbidity reduce the pool of viable images for compositing. Even when cloud cover metadata are used to filter out excessively cloudy images, some cloudy pixels remain in the pool of filtered images, causing cloud artefacts in the composite image and compromising its quality. To improve the quality of the composite image and its derived seagrass map, we tested a new Google Earth Engine product, Cloud Score+, which provides a per-pixel quality assessment (QA) band based on an atmospheric similarity model and a space-time context network model. The Cloud Score+ product has two bands: the cloud score probability (CS) and the cumulative distribution function of this cloud score QA band (CDF).
We compare the performance of Cloud Score+ derived products against previously established multi-temporal image composites acquired over different time ranges, and against the more conservative ACOLITE-processed single-image composite, using Sentinel-2 (S2) Level-1C (L1C) imagery over the whole Venezuelan coastline. The S2 L1C imagery was processed following three different approaches: 1) a multi-temporal composition of the full available S2 L1C archive, processed in GEE; 2) integration of the Cloud Score+ dataset into the previous approach; and 3) a single-image offline approach applying the ACOLITE atmospheric correction, which has been widely used for water applications. All images were further processed from L1C to L2A remote sensing reflectance (Rrs) for comparability. Additional image features such as Gray Level Co-occurrence Matrix (GLCM) textures and Principal Component Analysis (PCA) components were generated. The training data were randomly split into roughly 70% and 30% for training and testing, respectively. This was bootstrapped 20 times to produce 20 sets of training and test data for classification and validation. Per bootstrap, a first classification was trained on the 70% training dataset with Random Forest in GEE. A variable selection was performed in GEE using the native ee.Classifier.explain function, and only the top ten features were retained. A second classification was then trained using these top ten features on the 70% training dataset. We defined five classes for the classification, namely sand, seagrass, turbid, deep waters, and coral. For the training and test design, point data were obtained along the coast and intertidal areas of the whole nation from existing literature, data banks and visual interpretation. We found that the performances across the different thresholds within the CS or CDF composites were largely similar, with small differences in their confidence intervals.
In terms of the temporal range for the multi-temporal processing, the full-archive seven-year composite had the most consistent quantitative performance over the two optical water types, achieving a seagrass-class F1 Score of 0.664 with a 95% Confidence Interval (CI) of [0.634, 0.695] for coastal waters and 0.631 [0.588, 0.675] for open waters. For coastal waters, the ACOLITE composite had a very competitive F1 Score and the best Overall Accuracy (OA), at 0.668 [0.649, 0.688] and 0.781 [0.649, 0.688], respectively. For open waters, the full-archive seven-year composite performed best, with the CS and CDF products having comparable performance. The ACOLITE composite had the weakest quantitative performance in open waters, although its confidence intervals overlap with those of the other three products; qualitatively, however, the ACOLITE composite was deemed to perform better than its competitors. In optically clear waters such as open reef waters, where the main concerns were clouds and cloud shadows, the simpler Cloud Score+ products provided a pragmatic alternative to both the full-archive and ACOLITE products. For optically complex waters, it was better to rely on either a larger temporal interval or the ACOLITE atmospheric processor. Based on this comparison, the full-archive seven-year composite forms a good baseline for the first national seagrass map for Venezuela.
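The bootstrapped two-stage classification design described above can be sketched offline, with scikit-learn standing in for the GEE Random Forest and its feature-importance ranking standing in for ee.Classifier.explain. The feature stack and labels below are synthetic placeholders, not the actual Venezuelan training data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
# Synthetic stand-in for the per-pixel feature stack (bands, GLCM, PCA)
# and the five classes (sand, seagrass, turbid, deep waters, coral).
X = rng.normal(size=(600, 30))
y = rng.integers(0, 5, size=600)
X[:, :5] += y[:, None] * 0.8  # make a few features informative

scores = []
for b in range(20):  # 20 bootstraps, as in the study design
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=0.7, random_state=b, stratify=y)
    # First fit: rank features by importance (stand-in for ee.Classifier.explain).
    rf = RandomForestClassifier(n_estimators=100, random_state=b).fit(X_tr, y_tr)
    top10 = np.argsort(rf.feature_importances_)[::-1][:10]
    # Second fit: retrain on the top-ten features only.
    rf2 = RandomForestClassifier(n_estimators=100, random_state=b)
    rf2.fit(X_tr[:, top10], y_tr)
    scores.append(f1_score(y_te, rf2.predict(X_te[:, top10]), average="macro"))

print(f"macro F1 over 20 bootstraps: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```

The spread of the 20 per-bootstrap scores is what yields the confidence intervals reported for each composite product.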
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: iMERMAID Project: Integrating Satellite and In-Situ Data for Water Pollution Identification in the Mediterranean Basin

Authors: Sofiia Drozd, Bogdan Yailymov, Pavlo Henitsoi, Andrii Shelestov, J. Donate, M. Milián, J. Rostan, R. Sedano
Affiliations: National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”, Space Research Institute NAS Ukraine and SSA Ukraine, ITCL Technology Center of Spain “Instituto Tecnológico de Castilla y León”
Monitoring and improving water quality in the Mediterranean Sea is critical for preserving its unique biodiversity and addressing environmental challenges caused by anthropogenic activities. The Mediterranean Sea serves as a hotspot of ecological and economic importance, yet it faces significant threats from chemical pollution and overexploitation. As part of the Horizon Europe iMERMAID project, “Innovative Solutions for Mediterranean Ecosystem Remediation via Monitoring and Decontamination from Chemical Pollution”, our research focuses on advancing satellite-based methodologies to monitor key water quality indicators, specifically chlorophyll-a concentration and water turbidity. These indicators are vital for assessing biological productivity, phytoplankton dynamics, and water clarity, providing insights into the health of marine ecosystems. Traditional methods of measuring chlorophyll-a and water turbidity rely on costly and time-intensive laboratory analyses, which are limited in spatial and temporal scope. In contrast, satellite remote sensing offers an efficient and scalable solution for monitoring large and diverse marine areas. Leveraging satellite data from Sentinel-2, Sentinel-3, MODIS, and GCOM-C missions, the iMERMAID project develops integrated methodologies that combine spectral band analysis, in-situ measurements, and advanced machine learning models. Our research prioritizes improving the spatial and temporal resolution of chlorophyll-a and turbidity data to facilitate effective environmental management and pollution remediation strategies. A key innovation in our approach is the use of machine learning models, including Random Forest (RF) and multilayer perceptron (MLP), to analyze the non-linear relationships between spectral satellite data and in-situ chlorophyll-a measurements [1]. 
For example, regression models applied to GCOM-C and Aqua MODIS data achieved significant accuracy improvements, with RF models yielding an R² of 0.603 (RMSE = 0.008) for GCOM-C and R² of 0.74 (RMSE = 0.006) for Aqua MODIS. By downscaling coarse-resolution data (e.g., MODIS and GCOM-C) and upscaling Sentinel-3 data, we enhanced spatial resolution from 4 km to 300 m, making these models particularly effective for coastal regions where traditional methods often fail due to complex environmental conditions [2-4]. The integration of in-situ measurements allows us to validate and refine model predictions, ensuring consistency and accuracy in highly dynamic environments like the Mediterranean Sea. In addition to chlorophyll-a monitoring, the project addresses water turbidity by quantifying suspended particulate matter using satellite-derived spectral data. This parameter is critical for identifying sediment transport, pollution hotspots, and other ecological disturbances. By combining data-driven insights with high-resolution mapping capabilities, our methodologies enable timely detection of pollution and provide actionable information for marine ecosystem remediation. A crucial component of the project is the integration of maritime traffic density data to establish potential correlations between anthropogenic activity and water pollution. Using data from the EMODnet Map Viewer, historical navigation patterns in the Mediterranean Sea were analyzed, focusing on regions of high, medium, and low traffic densities. Areas of interest include regions with significant maritime activity, such as the southern Italian coast, the Balearic Islands, and northern Libya, alongside relatively lower-traffic zones like eastern Crete. This approach identifies pollution risks linked to shipping routes, oil spills, and port activities, complementing water quality assessments. 
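The coarse-to-fine downscaling idea above (learn the predictor-to-chl-a relationship at 4 km, then apply it on a 300 m grid) can be sketched with a Random Forest regressor. All inputs below are synthetic stand-ins for the real MODIS/GCOM-C and Sentinel-3 data, and the linear generating rule is purely illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Synthetic stand-in: at coarse (4 km) resolution we have predictor bands
# matched with chl-a; the same predictors also exist on a finer (300 m) grid.
n_coarse, n_fine, n_bands = 500, 4000, 6
X_coarse = rng.uniform(size=(n_coarse, n_bands))
chl_coarse = (0.05 + 0.5 * X_coarse[:, 0] - 0.2 * X_coarse[:, 1]
              + rng.normal(scale=0.01, size=n_coarse))

# Train the coarse-scale relationship...
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X_coarse, chl_coarse)

# ...then apply it to every fine-grid pixel to obtain a 300 m chl-a estimate.
X_fine = rng.uniform(size=(n_fine, n_bands))
chl_fine = rf.predict(X_fine)
print(chl_fine.shape)
```

In practice the fine-grid predictions would additionally be validated against in-situ chl-a measurements, as the abstract describes, before being used for coastal monitoring.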
Findings reveal a significant correlation between high maritime traffic areas, such as near Malta, and increased occurrences of oil spills, underscoring the role of vessel density in environmental contamination. Additionally, PRISMA images were utilized to explore links between hyperspectral imagery and pollutants, such as water turbidity, to evaluate the utility of hyperspectral data for monitoring water quality indicators in the Mediterranean basin [5]. The results of the iMERMAID project demonstrate the potential of advanced remote sensing and data analytics to transform water quality monitoring in marine ecosystems. The integration of multiple data sources and machine learning techniques not only enhances monitoring accuracy but also supports sustainable management strategies. These methodologies are applicable to a wide range of use cases, including early warning systems for pollution, biodiversity conservation, and sustainable fisheries management. Acknowledgment: This research was carried out within the Horizon Europe iMERMAID project “Innovative Solutions for Mediterranean Ecosystem Remediation via Monitoring and Decontamination from Chemical Pollution” (Grant agreement 101112824). References: 1. P. Henitsoi, A. Shelestov, Transfer Learning Model for Chlorophyll-a Estimation Using Satellite Imagery, International Symposium on Applied Geoinformatics 2024 (ISAG2024), Wroclaw, Poland, 2024, p. 54. https://www.kongresistemi.com/panel/UserUploads/Files/a3fe58047d50fbc.pdf. 2. B. Yailymov, N. Kussul, P. Henitsoi, A. Shelestov, Improving spatial resolution of chlorophyll-a in the Mediterranean Sea based on machine learning, Radioelectronic and Computer Systems 2024 (2024) 52–65. https://doi.org/10.32620/reks.2024.2.05. 3. H. Wu, W. Li, Downscaling land surface temperatures using a random forest regression model with multitype predictor variables, IEEE Access 7 (2019) 21904–21916. https://doi.org/10.1109/ACCESS.2019.2896241. 4. J. Peng, A. Loew, O. Merlin, N.E. Verhoest, A review of spatial downscaling of satellite remotely sensed soil moisture, Reviews of Geophysics 55 (2017) 341–366. https://doi.org/10.1002/2016RG000543. 5. J.F. Amieva, D. Oxoli, M.A. Brovelli, Machine and Deep Learning Regression of Chlorophyll-a Concentrations in Lakes Using PRISMA Satellite Hyperspectral Imagery, Remote Sensing 15 (2023) 5385. https://www.mdpi.com/2072-4292/15/22/5385
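As a concrete illustration of the Random Forest regression workflow described in this abstract (spectral bands as predictors, in-situ chlorophyll-a as target, accuracy reported as R² and RMSE), the following is a minimal sketch on synthetic data. The band count, the synthetic relationship, and the train/test split are illustrative assumptions, not the iMERMAID project's actual pipeline.

```python
# Sketch: Random Forest regression of chlorophyll-a from spectral reflectances.
# All data here is synthetic; the non-linear band ratio mimics the kind of
# relationship ocean-colour models exploit, but is not the project's model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)

# Synthetic "satellite reflectances" for three bands and a chlorophyll-a
# response with a non-linear dependence plus measurement noise.
X = rng.uniform(0.0, 0.2, size=(500, 3))
chl = 10.0 * X[:, 1] / (X[:, 0] + 0.05) + rng.normal(0.0, 0.2, 500)

X_train, X_test, y_train, y_test = train_test_split(
    X, chl, test_size=0.3, random_state=0
)

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X_train, y_train)
pred = rf.predict(X_test)

# The abstract's accuracy metrics: coefficient of determination and RMSE.
r2 = r2_score(y_test, pred)
rmse = mean_squared_error(y_test, pred) ** 0.5
```

In practice the predictors would be matched-up satellite reflectances and the targets co-located in-situ chlorophyll-a measurements, with the same two metrics used to compare sensors (e.g. GCOM-C versus Aqua MODIS).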
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Advancing Harmful Algal Bloom Monitoring for Sustainable Aquaculture Using Earth Observation

Authors: Nikola Geršak, Dragan Divjak, Dr. Mirko Barada
Affiliations: LIST LABS LLC
Harmful Algal Blooms (HABs) pose a significant threat to aquaculture, causing substantial economic losses and environmental challenges. AlgaeDataB is an ongoing project that aims to address this issue by developing an automated, user-friendly Earth Observation (EO) based web service for monitoring and mapping HABs around fish farms. The AlgaeDataB system monitors several harmful algae species that pose risks to aquaculture, including Karenia mikimotoi, Pseudochattonella, Chaetoceros, and Prymnesium. These species are known to cause significant damage to fish stocks and require continuous monitoring to limit further losses in aquaculture operations. The system leverages two complementary Copernicus missions for comprehensive HAB detection. Sentinel-3 OLCI operates with 21 spectral bands optimized for ocean color monitoring, enabling differentiation of water constituents including chlorophyll-a, colored dissolved organic matter (CDOM), and suspended sediments at mesoscale. Sentinel-2 MSI complements this with its higher spatial resolution, making it particularly valuable for detailed mapping of coastal areas and fjords where aquaculture operations are concentrated. This multi-sensor approach enables both broad-scale operational surveillance and local monitoring, ensuring effective coverage during critical summer and early autumn periods when major HAB events typically occur. A core innovation of AlgaeDataB is its three-tiered alert system, designed to empower fish farms with timely insights:
- Acute stage: Immediate response required within 3 days.
- Pre-acute stage: Early warning with a 3-7 day lead time.
- Preventive stage: Continuous monitoring for long-term planning.
These alerts are informed by satellite-based monitoring, validated with ground-truth data from water sampling and laboratory analysis. This dual-layer validation ensures accurate detection and characterization of HAB events, offering high reliability for end-users.
The project represents a unique collaboration between technical experts and aquaculture stakeholders, including major Norwegian salmon farming companies. AlgaeDataB is being designed to cater to both large-scale and mid-sized fish farms, with an emphasis on early detection, operational efficiency, and economic sustainability. A dedicated customer support system and iterative service optimization based on user feedback ensure its practicality and adoption. The project also bridges the gap between research and application, progressing towards a proof-of-concept demonstration. Its scalable framework makes it a commercially viable solution for the aquaculture industry, aligning with the goals of the European Green Deal and global sustainability initiatives. The AlgaeDataB service will demonstrate the transformative potential of EO technology in addressing climate-driven challenges to aquatic ecosystems. By providing timely and accurate HAB risk assessments, the project will enable aquaculture operators to mitigate losses, enhance resilience, and contribute to sustainable marine resource management.
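The three-tiered alert system described in this abstract maps a monitored risk level to one of three response stages. The following is a minimal sketch of that tiering logic; the input variable and the numeric thresholds are hypothetical placeholders, since the operational AlgaeDataB rules are not given in the abstract.

```python
# Sketch of a three-tiered HAB alert classifier. The thresholds (in cells/mL)
# and the choice of cell concentration as the trigger variable are invented
# for illustration; AlgaeDataB's actual decision rules are not public here.
def alert_stage(cells_per_ml: float,
                acute: float = 5000.0,
                pre_acute: float = 1000.0) -> str:
    """Map an estimated harmful-algae concentration to an alert tier."""
    if cells_per_ml >= acute:
        return "acute"        # immediate response required within 3 days
    if cells_per_ml >= pre_acute:
        return "pre-acute"    # early warning with a 3-7 day lead time
    return "preventive"       # continuous monitoring for long-term planning
```

An operational system would feed such a rule with satellite-derived bloom indicators validated against water sampling, as the abstract describes.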
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: SAMSelect: An Automated Spectral Index Search for Marine Applications for Multi-Spectral Satellite Images

Authors: Joost van Dalen, Marc Rußwurm
Affiliations: Wageningen University
In this abstract, we present SAMSelect, an algorithm designed to generate three-channel visualisations of multispectral Sentinel-2 images for visual analysis of marine scenes and marine litter. It supports marine scientists in monitoring key marine and coastal biodiversity indicators with openly available Sentinel-2 imagery. Visual inspection remains a cornerstone of marine data analysis, especially for complex targets like marine litter, algal blooms, and oil spills, which are difficult to identify in medium-resolution imagery due to their spectral diversity and environmental complexity. Direct visual inspection enables experts to apply their domain knowledge to detect subtle patterns and contextualise significant events, such as oil spill spread or harmful algal blooms, which require a deep understanding of marine conditions. However, selecting the best spectral bands and indices for these tasks is often time-consuming, relying on best practices and trial-and-error, and it is not always clear which visualisation is optimal for a particular type of object visible in the marine scene. SAMSelect is an algorithm that systematically tests all possible band combinations, normalised band indices, and spectral-shape indices and evaluates each visualisation's effectiveness by measuring how accurately the AI Segment Anything Model (SAM) can detect a few objects pre-specified by the user. The underlying assumption is that a visualisation suitable for SAM to identify objects is also helpful to the expert's eyes. Crucially, SAMSelect can systematically and automatically explore a large number of visualisations. While SAMSelect can be applied to any multi-spectral image, we explicitly evaluated it on marine debris, which is naturally heterogeneous in composition. Our results show that the visualisations found also outline other floating objects well, such as algal blooms and oil spills.
Concretely, we tested SAMSelect on three study sites: marine litter hotspots near the Bay Islands, Honduras; red tides along Oléron Island, France; and oil spill areas in the Caribbean Sea. Each phenomenon is associated with specific indices; for example, marine litter can be detected using the Floating Debris Index or Plastic Index, while algal blooms and oil spills are often tracked with indices such as the Floating Algae Index and the Oil Spill Index. We would like to present SAMSelect to marine researchers in this Symposium to explore its effectiveness in enabling domain experts to produce more accurate and interpretable visualizations. Incorporating expert annotations to narrow the search space has accelerated the algorithm while still allowing it to adapt to each phenomenon's unique spectral and contextual attributes, increasing segmentation accuracy and visual clarity. This submission focuses on SAMSelect’s expanded applications, underscoring its potential to advance satellite-based monitoring of critical biodiversity indicators impacting Ocean Health. By providing an open-source code repository, we aim to support its broader use across environmental research and marine management, promoting better-informed conservation and response efforts in marine and coastal ecosystems.
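The search idea behind SAMSelect, enumerating candidate three-band visualisations and keeping the one a segmentation model handles best, can be sketched as follows. The segmentation scorer here is a stand-in (intersection-over-union of a caller-supplied segmenter against a user mask); the real algorithm scores each visualisation with the Segment Anything Model, which is far too heavy to inline, and also searches normalised and spectral-shape indices.

```python
# Sketch: brute-force search over three-band combinations of a multispectral
# cube, scored by how well a segmenter recovers a user-specified target mask.
import itertools
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 0.0

def best_band_triplet(cube: np.ndarray, target: np.ndarray, segment):
    """cube: (bands, H, W); target: (H, W) bool; segment: (3, H, W) -> bool mask."""
    best, best_score = None, -1.0
    for combo in itertools.combinations(range(cube.shape[0]), 3):
        rgb = cube[list(combo)]              # candidate 3-channel visualisation
        score = iou(segment(rgb), target)
        if score > best_score:
            best, best_score = combo, score
    return best, best_score

# Toy usage: only band 2 reveals the target, so a thresholding "segmenter"
# should prefer triplets containing band 2.
cube = np.zeros((5, 8, 8))
cube[2, 2:6, 2:6] = 1.0
target = cube[2] > 0.5
combo, score = best_band_triplet(cube, target, lambda rgb: rgb.max(axis=0) > 0.5)
```

Swapping the lambda for a SAM-based segmenter and extending the candidate set to band indices recovers the shape of the published approach.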
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: The CNES Ocean program: New sensors and future missions to monitor the ocean Health

Authors: Yannice Faugere, Dr Jacqueline Boutin, Aurelien Carbonniere
Affiliations: CNES, CNRS, Sorbonne Universite, LOCEAN
The French Space Agency (CNES), through its Earth Observation Program, has strongly contributed to ocean observation over the last decades. The launch of the French/US mission Topex/Poseidon (T/P) (CNES/NASA) in August 1992 was the start of a revolution in oceanography. For the first time, a very precise altimeter system optimized for large-scale sea level and ocean circulation observations was flying. With its unique capability to observe the global ocean in near-real-time at high resolution, satellite altimetry is today an essential input for global operational oceanography and is crucial for ocean health monitoring. In parallel with this altimetry success story, CNES has contributed to the observation of other ocean parameters: sea surface salinity, through its contribution to SMOS in partnership with ESA, and waves, with the development of the first scatterometer for wave spectra measurement, SWIM, onboard CFOSAT, a French/Chinese satellite. The launch, again in its historical partnership with NASA, of the Surface Water and Ocean Topography (SWOT) satellite on 16 December 2022 allows us for the first time to measure 2D images of the ocean topography with unprecedented resolution and opens a new era for oceanography. However, despite all the flying ocean missions, many scientific questions remain about our understanding of how the ocean works, which requires knowledge of the ocean's global evolution, its coupling with the atmosphere and polar zones, the role of fine oceanic scales, its interfaces with the earth's surface, and its physical, biogeochemical and ecological properties. Understanding, monitoring and forecasting the state of the ocean rely on the complementarity of spaceborne measurements, in-situ measurements and numerical modeling. In addition to the need for continuity of space-based observations, new observables, increased spatio-temporal resolutions and new tools for combining these large sets of information (e.g. digital twins) are needed to meet these challenges, to better assess the ocean's role in climate and marine biodiversity, and to better guide environmental policies and mitigation and adaptation measures. Faced with these challenges, the CNES/TOSCA scientific group has identified major issues around which it has structured its scientific outlook, such as wind-current-wave couplings, fine-scale salinity linked to freshwater inputs and feedbacks, the land-ocean continuum, the evolution of the biological carbon pump and marine biodiversity, and climate system variability, trends and tipping points. This presentation will give an overview of the current and future CNES space oceanography program for the period 2025-2029, with the priority of better understanding and, ultimately, protecting ocean health.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Integrated Methodology for Forecasting Sargassum Strandings

Authors: Audrey Minghelli, Sarah Barbier, Dr. Léo Berline, Dr. Léa Schamberger, Prof. Malik Chami, Dr. Cristèle Chevalier, Dr. Alex Costa Da Silva, Dr. Luc Courtrai, Boubaker Elkilani, Pierre Daniel, Warren Daniel, Dr. Marianne Debue, Dr. Jacques Descloitres, Prof Jean-Raphael Gros-Desormeaux, Dr. Thibault Guinaldo, Marine Laval, Jeremy Lepesqueur, Dr. Christophe Lett, Prof. Anne Molcard, Philippe Palany, Dr. Witold Podlejski, Dr. Adan Salazar, Dr. Stéphane Saux-Picart, Rose Villiers
Affiliations: University Of Toulon-LIS laboratory, MeteoFrance, Aix Marseille Université-MIO, Sorbonne Université-Observatoire de la Côte d’Azur, Universidade Federal de Pernambuco, Université de Bretagne Sud, Université de Lille, Université des Antilles, IRD-Marbec, Mexican Space Agency
The synergy of satellite data, ocean transport modeling, and in-situ measurements plays a crucial role in enhancing forecasts of invasive Sargassum algal strandings in the tropical Atlantic Ocean, the Caribbean Sea, and along the Brazilian coast. A methodology using remote sensing techniques for detecting and monitoring Sargassum algae on temporal scales ranging from hourly to daily has been developed through multi-sensor satellite data analysis, incorporating both low Earth orbit (Sentinel-2, Sentinel-3 and VIIRS) and geostationary orbit (GOES, MTG) observations. Different detection methods were developed, based on algal indices and inversion of a radiative transfer model, but also on artificial intelligence. The aggregation velocity is obtained using geostationary sensors. The spatial distribution of Sargassum aggregations has been analyzed using satellite sensors with resolutions between 20 meters and 5 kilometers. In-situ data were collected in the Caribbean Sea in order to validate the delivered products. To address societal concerns, alert bulletins tailored for end-users such as local authorities, the tourism industry, and fishermen have been designed. This study presents an integrative approach to tackling Sargassum stranding issues by combining satellite data, knowledge of spatio-temporal distribution, and transport forecasting models. Enhancements to each system component will enable authorities to mitigate more effectively the risks associated with the increasing frequency and intensity of Sargassum blooms in the Atlantic Ocean.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: F.04.06 - POSTER - Wetlands: from Inventory to Conservation

Wetlands are an essential part of our natural environment. They are scattered across the world in all bio-geographic regions, providing a range of critically important ecosystem services and supporting the livelihoods and well-being of many people. Throughout much of the 20th century, wetlands were drained and degraded.

The Ramsar Convention on wetlands is an intergovernmental treaty that provides the framework for national actions and international cooperation for the conservation and wise use of wetlands, as a means to achieving sustainable development. The 172 countries signatory to the convention commit, through their national governments, to ensure the conservation and restoration of their designated wetlands and to include the wise use of all their wetlands in national environmental planning.

Wetland inventory, assessment and monitoring constitute essential instruments for countries to ensure the conservation and wise use of their wetlands. Earth Observation has revolutionized wetland inventory, assessment and monitoring. In recent years, the advent of continuous streams of high-quality, free-of-charge satellite observations, in combination with the emergence of digital technologies and the democratisation of computing, has offered unprecedented opportunities to improve collective capacities to efficiently monitor changes and trends in wetlands globally.

The importance of EO for wetland monitoring has been stressed by Ramsar in a recently published report on the use of Earth Observation for wetland inventory, assessment and monitoring.

The SDG monitoring guidelines on water related ecosystems (SDG target 6.6) also largely emphasize the role of EO, while the EO community is getting organised around the GEO Wetlands initiative to provide support to wetlands practitioners on the use of EO technology.

The Wetland session will review the latest scientific advancements in using Earth observations for wetland inventory, assessment, and monitoring to support effective wetland conservation. It will also discuss strategies for integrating Earth observations into the sustainable management of wetland ecosystems.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Preliminary Analysis on long-term human activities around wetlands using VIIRS DNB data

Authors: Shi Qiu, Dr. Peter Dorninger, Dr. Wei Zhao, Dr. Jianzhong Zhang, Stefan Schlaffer, Ms. Yu Zhang
Affiliations: Aerospace Information Research Institute, Chinese Academy of Sciences, 4D-IT GmbH, Institute of Mountain Hazards and Environment, Chinese Academy of Sciences, Beijing Esky Technology Ltd., GeoSphere Austria
In recent years, an increasing number of scientists have been adopting newly emerging hardware and software capabilities to monitor the environment and identify trends in environmental change for targeted environmental protection. Among these, wetlands, as one of the important ecosystems on Earth, play a multifaceted role in the environment, such as carbon storage, climate regulation, and soil conservation. Protecting wetlands is crucial for maintaining ecological balance and human well-being. Globally, the protection and restoration of wetlands are receiving more and more attention. Remote sensing, as an important means of Earth observation, has the advantages of wide coverage, high temporal resolution, long time series, and high efficiency. These advantages make remote sensing technology indispensable in various fields such as environmental monitoring and resource management. In particular, the Suomi National Polar-orbiting Partnership (SNPP) satellite launched by the United States at the end of 2011, equipped with the Visible Infrared Imaging Radiometer Suite (VIIRS) payload, is capable of visible-light imaging at night, which can reflect human activities at night, filling the gap in traditional remote sensing technology for nighttime monitoring. This study uses VIIRS Day/Night Band (DNB) nighttime light data for a long time-series analysis (from 2013 to 2024) of wetland parks in Austria and wetlands in China, analyzing the changes in human activities around wetlands over more than a decade and providing data and technical evidence on how human activities and wetlands are changing.
Currently, the study has analyzed the nighttime light data from the summer of 2013 to the summer of 2024 and found that the nighttime light level in the area of Vienna, Austria, decreased by 20% from 2013 to 2019 and has been stable from 2019 to 2024; this might be caused by the introduction of a new type of street light. The data around the wetlands decreased by about 16% from 2013 to 2019, rebounded by about 10% in 2020, and then decreased by about 20% from 2020 to 2024. The study suggests that these changes may be due to the impact of COVID-19 and changes in vacation patterns, with more people spending their weekends at home during the pandemic.
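The percentage changes quoted above amount to comparing mean DNB radiance over a region of interest between two summer composites. The following is a minimal sketch of that computation on toy arrays; the radiance values are invented, and file handling, cloud masking, and region clipping are omitted.

```python
# Sketch: relative change (%) in mean nighttime-light radiance between two
# composites over the same region. Values below are toy placeholders.
import numpy as np

def percent_change(earlier: np.ndarray, later: np.ndarray) -> float:
    """Relative change (%) in mean radiance between two composites."""
    m0, m1 = float(earlier.mean()), float(later.mean())
    return 100.0 * (m1 - m0) / m0

summer_2013 = np.full((4, 4), 10.0)   # toy DNB composite
summer_2019 = np.full((4, 4), 8.0)    # uniformly 20% dimmer
change = percent_change(summer_2013, summer_2019)   # -> -20.0
```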
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Integrating Low-Cost Uncrewed Aerial Systems (UAS) and Satellite Data for Mangrove Monitoring and Conservation: A Case Study From Seychelles

Authors: Marlene Bauer, Anna Bischof, Antonio Castañeda-Gómez, Rafaela Gameiro, Corinne Julie, Dr. Nirmal Jivan Shah, Dr. Doris Klein, Dr. Prof. Stefan Dech, Dr. Martin Wegmann, Dr. Mirjana Bevanda
Affiliations: Earth Observation Research Cluster, Institute of Geography and Geology, Julius-Maximilians-Universität Würzburg, Nature Seychelles, The Centre for Environment and Education, Earth Observation Center, German Aerospace Center
Mangrove forests are crucial ecosystems that provide ecological, economic, and social benefits, including coastal protection, carbon sequestration, water quality improvement, and biodiversity support. Despite their significance, mangroves are increasingly threatened by climate change and human activities such as urbanization and deforestation, highlighting the urgent need for effective monitoring methods. Traditional field surveys are time-intensive and remain spatially limited. In recent years, remote sensing has enabled large-scale mangrove mapping, health assessment, and change detection, greatly supporting conservation and management efforts. However, these datasets are often limited in temporal coverage, and their spatial resolution restricts their ability to capture fine-scale changes. Uncrewed Aerial Systems (UASs) represent an advancement in remote sensing, complementing satellite-based monitoring by providing high-resolution data suitable for detailed spatial and temporal analysis. Previous studies have shown that UASs can be used to monitor mangroves by estimating biophysical properties such as canopy height and coverage and above-ground biomass, as well as supporting habitat assessments, including individual tree species identification and invasive species detection. However, the potential of UASs to provide detailed spatial and temporal data for mangrove health monitoring has yet to be fully investigated. The Seychelles’ mangrove ecosystems cover 2,195 ha, with Mahe hosting the greatest diversity, including seven of the archipelago’s true mangrove species. Approximately 69% of Mahe’s mangrove forests are protected, including the Port Launay Coastal Wetlands, a Ramsar site since 2004. The island contains the second-largest mangrove extent in Seychelles (181 ha), contributing 12% of the nation’s mangrove carbon stock and providing critical climate change mitigation and adaptation services.
This study aims to assess the potential of low-cost UASs for mangrove species mapping and health assessment to guide local conservation efforts. In collaboration with Nature Seychelles, field data were collected between 25 November and 13 December 2024, using a DJI Mavic 3 Pro to capture high-resolution RGB imagery. While even what is considered a low-cost UAS remains expensive, especially for small NGOs in the Global South, the DJI Mavic 3 Pro sits at the lower end of the price spectrum and offers a cost-effective option for conservation applications. To the authors’ knowledge, this is the first study to integrate UAS and satellite data for mangrove analysis in Seychelles. The focus on consumer-grade technology and open-source processing tools aims to create accessible datasets to assist local monitoring efforts. Openly available optical and SAR satellite data will be combined with UAS imagery to address critical knowledge gaps in habitat monitoring, health assessment, and tracking dynamic changes. Finally, this work develops a scalable methodology for integrating UAS and satellite data, contributing to global mangrove conservation initiatives.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Large-Scale Wetland Mapping Using Self-Supervised Learning and Vision Transformer

Authors: Mohammad Marjani, Dr. Fariba Mohammadimanesh, Dr. Masoud Mahdianpari, Dr. Eric W. Gill
Affiliations: Memorial University Of Newfoundland, Natural Resources Canada, C-Core
Wetlands are unique ecosystems where freshwater and saltwater environments intersect, offering numerous environmental benefits. However, since the early 20th century, nearly two-thirds of the world's wetlands have been lost or severely degraded. In Canada, wetland loss has historically been attributed to land-use changes brought about by European settlement, urban development, climate change, agricultural activities, and impacts like runoff diversion and pollution. These changes affect the ability of wetlands to deliver essential ecosystem services, including carbon storage, water filtration, and economic support. Therefore, keeping these ecosystems safe and healthy is crucial. Accurate wetland distribution maps are essential for protecting wetlands and understanding their spatial distribution, ecosystem functions, and temporal changes. Traditional methods, such as vegetation surveys and water and soil sampling, are time-intensive, require significant effort, and are impractical in remote areas. Remote sensing, using optical and radar satellite systems, has served as an efficient and cost-effective alternative to address these challenges. However, accurately classifying wetlands using remote sensing data remains challenging due to their complex, heterogeneous nature, fragmented landscapes, and overlapping spectral characteristics among wetland types. Temporal and spatial variability further complicates classification, requiring advanced methods for precise wetland mapping. Wetland mapping studies often focus on small regions constrained by satellite data and computational resources. However, algorithm performance typically declines when applied to larger areas, highlighting the need for robust approaches to maintain high accuracy in large-scale wetland mapping. Recent advancements in machine learning (ML) and deep learning (DL) have significantly enhanced wetland mapping using remote sensing data.
Studies have employed convolutional neural networks (CNNs) and hybrid models combining CNNs and vision transformers (ViT), achieving high accuracy. However, these methods are typically limited to small-scale regions due to their reliance on extensive labeled training datasets, which becomes impractical for large-scale mapping. Self-supervised learning (SSL) offers a compelling alternative by learning patterns from unlabeled data, reducing dependence on labeled datasets. Techniques like SimCLR (a simple framework for contrastive learning of visual representations) have succeeded in remote sensing applications. However, their application to large-scale wetland mapping remains largely unexplored, presenting further research opportunities. Therefore, this study aims to harness the potential of SimCLR for large-scale wetland mapping using Sentinel-1 (S1) and Sentinel-2 (S2) data. This study focuses on Newfoundland, a large Canadian island spanning 108,860 km², known for its diverse wetland ecosystems, including bogs, fens, and marshes. Wetlands cover approximately 18% of the island, with peatlands dominating the landscape. Ground truth data were collected in Newfoundland during field surveys conducted in 2015, 2016, and 2017. The surveys covered over 1,200 sites, including wetlands and non-wetlands, across areas like Avalon, Grand Falls-Windsor, Deer Lake, and Gros Morne. The Canadian Wetland Classification System (CWCS) was used to categorize wetlands into bogs, marshes, fens, and swamps, with peatlands being the most common. GPS data were recorded at each site, along with metadata such as location names, dates, photos, and notes on vegetation and hydrology. Early surveys in 2015 included wetlands of all sizes, but from 2016 onward, the focus shifted to areas larger than 1 hectare. Experts reviewed and labeled the data using high-resolution Google Earth imagery, and the dataset was divided into 70% for training and 30% for validation to support wetland classification.
S1 and S2 satellite data were collected over the study area. S1 provides 10 m resolution radar data with four polarization bands (HH, VV, HV, VH), which is ideal for detecting water bodies under dense vegetation. S2 offers optical data with a multispectral sensor, including 10 m and 20 m resolution bands, enhancing mapping accuracy. The missions’ frequent revisit times (every 5–6 days) enable timely monitoring over large areas. This study used S1 and S2 data from the summers of 2022, 2023, and 2024. To manage Newfoundland’s large scale, the island was divided into 28 equally sized regions, ensuring efficient and consistent data collection across the study area. In addition to the satellite bands, S2 bands were used to calculate three indices: the Normalized Difference Vegetation Index (NDVI), the Normalized Difference Built-up Index (NDBI), and the Normalized Difference Water Index (NDWI). SimCLR extracts visual features from unlabeled data using random augmentations to create two unique perspectives of the same image. This process helps the model identify variations of the same image while distinguishing it from others. This study utilized a range of augmentation methods, such as channel shuffling, spectral jitter, random band cropping, rescaling, CutMix, random cutout, rotation, and flipping. These augmentations introduced diverse modifications to the input data. For each image, two distinct variations were generated by applying these augmentations randomly, enabling the creation of paired views for representation learning. SimCLR employs a contrastive loss function to bring the embeddings of two augmented views of the same image closer together in the feature space while pushing embeddings of different images further apart. This process treats each augmented pair as positive and all other pairs in the batch as negatives.
The loss function used, NT-Xent (Normalized Temperature-scaled Cross Entropy), ensures that positive pairs are closely aligned and negative pairs are well-separated in the embedding space. In the SimCLR framework applied to wetland mapping, the ViT served as the encoder, using its self-attention mechanisms to extract spatial and contextual relationships within satellite imagery. The ViT architecture processes input data by dividing images into fixed-size, non-overlapping patches, which are then flattened and projected into a high-dimensional space using a linear transformation. This approach enables the model to capture spatial patterns and contextual interactions across the image, which is crucial for identifying complex wetland features. Following the training phase, the SimCLR model was fine-tuned using the available training images and evaluated on the validation dataset. To assess the model's performance, precision, recall, and F1-score (F1) were calculated for both datasets. The results of the fine-tuning experiments, which involved varying proportions of the training dataset, as well as direct ViT training without SimCLR, are investigated. As the amount of training data increased, the model's performance significantly improved across all wetland types. When fine-tuning with only 25% of the training images (Scenario 1), the model achieved the lowest performance, with precision, recall, and F1 scores ranging from 0.55 to 0.79 for different wetland types. In contrast, fine-tuning with 50% (Scenario 2) and 75% (Scenario 3) of the training data resulted in notable improvements, particularly for bogs and marshes, with the F1 score reaching up to 0.89 for bogs and 0.88 for marshes. The highest performance was observed when 100% of the training images were used (Scenario 4), with precision, recall, and F1 scores peaking at 0.93, 0.92, and 0.92 for bogs, and 0.90, 0.89, and 0.89 for marshes, respectively.
In comparison, training the ViT model directly on all images without the SimCLR framework (Scenario 5) resulted in slightly lower performance across most wetland types, particularly for fens, which saw a reduction in precision (0.86) and recall (0.88) compared to the best fine-tuned models. This suggests that the SimCLR-based approach, particularly when fine-tuned with larger training datasets, significantly improves performance on the wetland classification task.
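The NT-Xent loss described in this abstract can be written compactly: for a batch of N images, the embeddings of the two augmented views are concatenated, cosine similarities are scaled by a temperature, and each embedding's positive partner is scored against all other embeddings in the batch. The following is a minimal NumPy sketch of the loss value itself (the temperature value is illustrative; real training would compute this inside a deep learning framework with gradients).

```python
# Minimal NT-Xent (normalized temperature-scaled cross entropy) sketch:
# positive pairs (two views of the same image) are pulled together, all
# other embeddings in the batch act as negatives.
import numpy as np

def nt_xent(z1: np.ndarray, z2: np.ndarray, temperature: float = 0.5) -> float:
    """z1, z2: (N, D) embeddings of the two views; row i of each is a positive pair."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine-similarity space
    sim = z @ z.T / temperature
    n = z.shape[0]
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    # the positive for row i is row (i + N) mod 2N
    pos = np.roll(np.arange(n), n // 2)
    log_prob = sim[np.arange(n), pos] - np.log(np.exp(sim).sum(axis=1))
    return float(-log_prob.mean())

# Toy check: well-aligned positive pairs should give a lower loss than
# mismatched ones.
z1 = np.array([[1.0, 0.0], [0.0, 1.0]])
loss_aligned = nt_xent(z1, z1)                 # positives identical
loss_mixed = nt_xent(z1, z1[::-1].copy())      # positives swapped
```

In the study's setting, z1 and z2 would be ViT encoder outputs for the two augmented views of each Sentinel-1/Sentinel-2 patch.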

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: The Tropical Wetland mapping system (TropWet) reveals profound changes in wetland extent across the Sahel region of Africa

Authors: Gregory Oakes, Dr Andy Hardy, Dr Pete Bunting, Lammert Hilarides, Mori Diallo, Edmond Kuto, Charlie Avis
Affiliations: Aberystwyth University, Wetlands International, Wetlands International Sahel Office, Wetlands International East Africa
Sahelian wetland systems represent important ecosystems, particularly for migratory bird species and endangered mammals, as well as providing vital services for millions of people. Yet wetlands across this region are relatively understudied, with a scarcity of data and, in some instances, an absence of reliable wetland inventories. There is a pressing need to better understand the extent and dynamics of Sahelian wetlands, where water-resource and land-use pressures have driven the degradation of wetland ecosystems across the Sahel. Equally, climatic events such as El Niño and positive phases of the Indian Ocean Dipole have led to dramatic changes in rainfall patterns, resulting in devastating flooding in Nigeria and Mali, and globally significant increases in natural methane emissions reported over wetlands in South Sudan. TropWet, hosted on Google Earth Engine, is a tropical wetland mapping system based on the analysis of Landsat imagery alongside hydrological terrain metrics. Specifically, linear spectral unmixing is applied using pre-determined endmembers, providing pixel-level fractions of water, bare soil, vegetation and burn scar. Information on fractional cover is combined with spectral indices and terrain information within a fuzzy-optimised rule base to classify open water, inundated vegetation and other non-inundated land cover classes. The system was applied to bi-monthly Landsat and Sentinel-2 composites from 2014-23 over the Sahel region. Resulting maps were used to reconstruct the inundation history across the region, as well as to generate inventories using the International Union for Conservation of Nature’s Global Ecosystem Typology by cross-walking with existing data including the Global Lakes and Wetlands Dataset. Resulting maps indicate that inundated vegetation accounts for a mean contribution of 40% (Std Dev: 15%) of the total wetted area.
This represents a significant improvement on existing data, such as the Global Open Surface Water layer, which only accounts for open water and thereby underestimates the true extent of Sahelian wetlands. Capturing this information is important, as it has been reported that inundated vegetation, particularly papyrus and phragmites, represents a major source of natural methane emissions. We examine changes in inundation extent between the periods 2014-18 and 2019-23. The former 5-year block is indicative of the long-term average of rainfall conditions, whereas the latter represents a significant increase in rainfall, particularly across central and eastern parts of the Sahel, attributed to a strong Indian Ocean Dipole event. TropWet demonstrates a 191% increase in overall inundation extent between these two periods, with profound impacts on the livelihoods of the people living within these wetlands. Notable change hotspots are indicative of widespread devastation in the Inner Niger Delta and further downstream in northeast Nigeria due to extensive flooding, and significant losses of agricultural land and livestock in the Sudd, leading to a complex humanitarian crisis for millions of people. Furthermore, we identify a shift in seasonality, with large-scale areas previously characterised as seasonal wetlands becoming permanent. This has important implications in terms of methane emissions and the composition of ecosystems. TropWet represents a tractable solution, not only for generating wetland inventories in a timely manner but also for charting inter- and intra-annual inundation dynamics. Illustrated by the contrasting situations between 2014-18 and 2019-23, we emphasise the importance of providing up-to-date information on wetland extent and characteristics, as opposed to using static information that can quickly become outdated and inaccurate.
In doing so, we can improve the way in which these regions are managed to help safeguard livelihoods and protect and enhance these important ecosystems. By demonstrating transferability across the Sahel region, we provide confidence that TropWet can be deployed across the African continent, providing wetland inventories in data-scarce regions, as well as information on how wetlands are reacting to increasing pressures and a changing climate.
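The linear spectral unmixing step can be illustrated with a small least-squares sketch. The endmember spectra and fractions below are invented for demonstration; they are not TropWet's calibrated endmembers:

```python
import numpy as np

def unmix(pixel, endmembers):
    """Least-squares unmixing: solve pixel ~ endmembers @ fractions,
    with a sum-to-one constraint appended as a heavily weighted equation.
    (A simplified sketch; operational systems typically also enforce
    non-negativity properly rather than clipping.)"""
    n_end = endmembers.shape[1]
    A = np.vstack([endmembers, 100.0 * np.ones((1, n_end))])
    b = np.concatenate([pixel, [100.0]])
    fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.clip(fractions, 0.0, 1.0)

# Hypothetical 4-band endmember spectra
# (columns: water, bare soil, vegetation, burn scar)
E = np.array([
    [0.02, 0.25, 0.05, 0.10],
    [0.03, 0.30, 0.10, 0.12],
    [0.01, 0.35, 0.45, 0.08],
    [0.02, 0.40, 0.30, 0.30],
])
true_fractions = np.array([0.6, 0.1, 0.25, 0.05])
mixed_pixel = E @ true_fractions          # simulate a mixed pixel
estimated = unmix(mixed_pixel, E)
```

For a noise-free mixed pixel the known fractions are recovered; with real imagery the residual of this fit is what the fuzzy rule base would then have to absorb.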

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Prototyping a Policy-Driven Earth Observation Service for Monitoring Critical Wetland Habitats in Natura 2000 Sites

Authors: Christelle Vancutsem, Bruno Combal, Meriam Lahsaini, Pavel Milenov, Frank Vassen
Affiliations: Joint Research Center - European Commission, European Commission (DG ENV), European Environment Agency (EEA), Arcadia SIT for the Joint Research Centre (European Commission)
The EU Habitats Directive mandates the protection and monitoring of wetland habitats within Natura 2000 sites. However, comprehensive and timely assessment of wetland conservation status remains challenging. Reporting under Article 17 of the Habitats Directive lacks the detailed, spatially explicit information required for accurate assessment of wetland habitats' conservation status, in particular indicators of degradation. This initiative, developed in collaboration with the European Commission's DG Environment (DG ENV) and the European Environment Agency (EEA), aims to design an operational geospatial information system to monitor critical wetlands, detect degradation, and assess conservation status within Natura 2000 sites. Leveraging the Knowledge Centre on Earth Observation's (KCEO) policy-focused value chain and Deep Dive assessment methodology, we translate specific policy needs into technical requirements for Earth Observation (EO) products. We analyze the fitness-for-purpose of existing products and services, evaluate gaps, and provide recommendations to support the EU's commitment to biodiversity protection. Our approach extends beyond assessment to prototype a policy-driven service for monitoring wetlands in selected areas. Ongoing and planned key activities include:
- Characterizing various European wetland habitats, their ecological functioning, and the main pressures leading to degradation.
- Determining appropriate indicators for selected habitats and the relevant EO products, prioritizing wetland types based on current degradation levels (per Article 17 of the Habitats Directive), relevance beyond the Directive, and biodiversity value.
- Designing advanced spatial and temporal analysis tools for policymakers and conservation managers, integrating cutting-edge EO technologies with ground-truth data and modelling.
This project will enhance our understanding of wetland dynamics and support more effective implementation of EU environmental policies, including the Biodiversity Strategy 2030 and the Nature Restoration Law. The insights and methodologies developed through this project will serve as the foundation for implementing a comprehensive web-based platform for monitoring all wetlands across the EU.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: An Efficient Hybrid CNN-Transformer Framework for Wetland Classification Using Multi-Source Satellite Data

Authors: Ali Radman, Masoud Mahdianpari, Fariba Mohammadimanesh
Affiliations: Memorial University of Newfoundland, C-CORE, Canada Centre for Remote Sensing, Natural Resources Canada
Wetlands are ecologically critical environments providing essential services such as climate regulation, water purification, and habitat preservation. However, their global decline necessitates accurate and efficient mapping methods for sustainable management. While field-based methods are resource-intensive and impractical for large-scale applications, satellite remote sensing, leveraging multispectral and SAR datasets, offers an efficient alternative and promising tools for large-scale wetland classification. Despite recent advances, existing methodologies face challenges in computational efficiency and class differentiation accuracy, particularly for spectrally similar wetland categories. Recent advances in artificial intelligence, particularly hybrid deep learning architectures, offer promising solutions to these challenges. Convolutional neural networks (CNNs) excel at capturing local spatial features, while transformers are effective at modeling long-range dependencies. However, CNNs struggle with contextual information, and transformers are computationally expensive. Hybrid models that combine the strengths of both approaches provide an efficient framework for accurate wetland classification. This study introduces a hybrid convolutional-transformer model that fuses Sentinel-2 multispectral and Sentinel-1 SAR data for precise and efficient wetland classification. The proposed model leverages convolutional layers for capturing local spatial features and transformer blocks for long-range dependencies. A key innovation is the multi-head convolutional attention (MHCA) module, which optimizes the efficiency of traditional transformer mechanisms by integrating convolutional operations. The architecture also incorporates a local feed-forward network (LFFN) to preserve locality in spatial data, further enhancing the model's ability to handle the complexities of wetland classification.
The model was evaluated using a robust wetland dataset for St. John's, on the island of Newfoundland, Canada, consisting of 11 land cover classes, including diverse wetland types. Time-averaged Sentinel-1 SAR data with dual-polarized backscatter (VV, VH, HH, HV) and Sentinel-2 multispectral data with 10 optical and infrared bands were used to create a comprehensive dataset. The proposed model achieved state-of-the-art performance, with an overall accuracy (OA) of 95.36% and substantial improvements in challenging categories such as bog (94.79%), fen (90.57%), swamp (89.04%), and marsh (89.91%). Comparative analyses with CNN, transformer, and hybrid models demonstrated the proposed hybrid model's superiority in both accuracy and computational efficiency. Compared to CoAtNet (the second-best model), the proposed model demonstrated a 2% improvement, while surpassing ResNet and Swin by 4.97% and 6.61%, respectively. The results also highlighted the advantages of multi-source data, with the fusion of Sentinel-1 and Sentinel-2 improving OA significantly over single-source configurations (93.73% for Sentinel-2 alone, 82.56% for Sentinel-1). These results underscore the model's effectiveness in handling spectrally similar classes and leveraging complementary data sources. In addition to accuracy, the model achieves computational efficiency, reducing memory usage and training time by approximately 50% compared to leading alternatives. The study underscores the potential of hybrid architectures for overcoming the inherent challenges of wetland classification. By integrating the strengths of CNNs and transformers, the proposed model delivers a scalable solution for real-world applications, combining high accuracy with reduced computational demands. This model represents a significant step forward in remote sensing for ecological monitoring and sustainable management of wetlands.
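A minimal NumPy sketch of the two ingredients the abstract combines: convolution for local features and scaled dot-product attention for long-range context. This illustrates the general mechanisms only, not the authors' MHCA or LFFN modules, and the toy data is invented:

```python
import numpy as np

def conv1d_valid(x, kernel):
    """CNN ingredient: local features via a 1-D 'valid' convolution."""
    k = len(kernel)
    return np.array([x[i:i + k] @ kernel for i in range(len(x) - k + 1)])

def attention(q, k, v):
    """Transformer ingredient: scaled dot-product attention."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = scores - scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ v                                    # convex mix of values

# Toy pipeline: smooth a 1-D "spectral profile" locally, then mix globally
profile = np.linspace(0.0, 1.0, 10)
local = conv1d_valid(profile, np.array([0.25, 0.5, 0.25]))  # shape (8,)
tokens = local.reshape(-1, 1)                               # 8 tokens, dim 1
context = attention(tokens, tokens, tokens)
```

The design point the abstract makes is visible even here: the convolution only sees a 3-sample window, while every attention output mixes information from all 8 tokens.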

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Evaluating Sustainable Development Goal 15 Across Various Scenarios Using an Integrated Multi-objective Programming and Patch-generating Land Use Simulation Framework in the Internationally Significant Wetland of Momoge

Authors: Ms. Jiaqi Han, Prof. Dongyan Wang, Prof. Andreas Rienow
Affiliations: Jilin University, Ruhr University
Wetlands are highly productive and valuable ecosystems that play a critical role in ensuring water security and supporting biodiversity. Internationally important wetlands accredited by the Ramsar Convention are vital to the global wetland conservation framework. Momoge Nature Reserve, a Ramsar site of international importance in China, serves as a vital resting site for 90% of the global crane population and for waterfowl species migrating between Siberia and Oceania. However, because of the expansion of human production and living spaces, the wetlands in the study area face considerable issues of land degradation, environmental pollution, and resource wastage. Globally, human activity and climate change have reduced wetland area while posing significant challenges to achieving the Sustainable Development Goals. It is therefore necessary to predict the spatial-temporal evolution of wetlands under various planning strategies and to conduct sustainable development assessments at small scales. To address these challenges, an integrated multi-objective programming and patch-generating land use simulation framework was developed to predict the future distribution of three wetland and seven non-wetland land types at 10 m resolution. Building upon this framework, this study presents three main innovations to support SDG 15 reporting: fine-scale simulation of wetland distribution, integration of vision goals and SDGs with scenario design, and evaluation of wetland ecological sustainability. The ecological sustainability of wetlands was evaluated based on key SDG 15 indicators, including wetland coverage rate (SDG 15.1.2), land degradation rate (SDG 15.3.1), proportion of important bird habitats (SDG 15.5.1), and ecosystem service value (SDG 15.9). The study assessed the level of wetland ecological sustainability from 2020 to 2035 under four scenarios: natural increase, agricultural development, wetland protection, and harmonious development.
The results indicated that (1) the model has high simulation accuracy, as evidenced by its overall accuracy of 0.86, kappa of 0.84, and figure of merit of 0.61. (2) Under the wetland protection scenario, SDGs 15.1.2 and 15.5.1 achieved their highest values of 73.03% and 22.19%, respectively. Conversely, under all four scenarios, SDG 15.3.1 values declined, with the lowest value (5.64%) achieved under the harmonious development scenario; moreover, the land degradation neutrality target was not met. (3) The ecosystem service value increased under agricultural development, wetland protection, and harmonious development scenarios, with the largest increase (amounting to 6.51 billion CNY) achieved under the wetland protection scenario. This underscores the substantial ecological and economic benefits of conservation policies. (4) The overall ecological sustainability levels of the Momoge Nature Reserve failed to meet the expected standards under the four scenarios in 2035. Although the wetland protection scenario faced land degradation challenges, it was the optimal strategy for developing internationally important wetlands. The study reveals the critical role of scenario-based simulations in wetlands policy development. The findings presented herein highlight the necessity of policy support in achieving wetland conservation goals and advancing sustainable development. Such predictions bridge the gap between global objectives and practical local management actions, enabling regional managers to implement effective strategies aligned with international goals.
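The accuracy figures reported above (overall accuracy and kappa) are standard confusion-matrix statistics; here is a minimal sketch with a made-up 3-class matrix, not the study's data:

```python
import numpy as np

def overall_accuracy(cm):
    """Fraction of correctly classified samples (confusion-matrix diagonal)."""
    return np.trace(cm) / cm.sum()

def cohen_kappa(cm):
    """Agreement corrected for chance agreement, from a confusion matrix."""
    total = cm.sum()
    p_observed = np.trace(cm) / total
    # chance agreement from the marginal row/column totals
    p_expected = (cm.sum(axis=0) @ cm.sum(axis=1)) / total**2
    return (p_observed - p_expected) / (1 - p_expected)

# Illustrative 3-class confusion matrix (rows: reference, columns: predicted)
cm = np.array([
    [50,  2,  1],
    [ 3, 45,  2],
    [ 1,  4, 42],
])
oa = overall_accuracy(cm)
kappa = cohen_kappa(cm)
```

The figure of merit the abstract also reports is a change-detection metric (hits over hits plus misses plus false alarms) and would need change/no-change counts rather than this per-class matrix.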

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Unveiling four decades: Eco-Hydrology, land-use landcover classification & water quality estimation of Haiderpur wetlands through the lens of satellite imagery and AI

Authors: Abhinav Galodha, Dr Maria-Valasia Peppa, Dr Sanya Anees, Professor Brejesh Lall, Professor Shaikh Ziauddin Ahammad
Affiliations: School of Interdisciplinary Research (SIRe), Indian Institute of Technology, IIT Delhi, School of Engineering, Cassie Building, Newcastle University, Department of Electronics and Communication Engineering, Netaji Subhas University of Technology (NSUT), Department of Electrical Engineering, Indian Institute of Technology, IIT Delhi, Department of Biochemical Engineering and Biotechnology, Indian Institute of Technology, IIT Delhi
Wetlands are indispensable ecosystems that provide critical ecological services such as water purification, flood control, carbon storage, and habitat provision for diverse species. The Ramsar Convention, an international treaty signed in 1971, underscores the commitment of its contracting parties to the conservation and sustainable use of all wetlands, aiming to halt their degradation and loss. Haiderpur Wetland (Muzaffarnagar and Bijnor districts, Uttar Pradesh, India), a recognized Ramsar site in India, exemplifies a crucial ecological area within its region. This wetland has undergone significant changes over the past three decades due to various anthropogenic pressures, including urban expansion and agricultural intensification. This study aims to analyze land use and land cover (LULC) changes in the Haiderpur Wetland and its surrounding areas from 1990 to 2024. By utilizing advanced machine learning models such as Random Forest (RF), Convolutional Neural Networks (CNNs), XGBoost, and VGG16, the research provides a high-resolution, comprehensive understanding of these transformations. The methodological framework combines satellite imagery analysis with these cutting-edge models, enhancing precision in classification tasks crucial for temporal change detection. In 1990, the landscape of Haiderpur was predominantly vegetative, with forests and grasslands covering about 45% of the area. This rich natural habitat was integral not only for maintaining biodiversity but also for the sustenance of local communities that depended on these resources for their livelihoods. However, as the study reveals, by 2024 vegetative cover had declined dramatically to 25%. This decline signifies a substantial loss of biodiversity-rich areas, which are critical for ecological integrity and resilience. One of the primary factors driving this change is urban expansion. In 1990, urban areas constituted merely 15% of the region.
Driven by population growth and economic development, urban areas had surged to 50% by 2024. This rapid urbanization has reshaped the environment, influencing local climate patterns and contributing to the urban heat island effect, thereby exacerbating local climatic conditions. The application of CNNs has been instrumental in capturing spatial patterns from the satellite data, achieving an overall classification accuracy of 95%. This model excels in identifying intricate land cover types and their transitions over time, which is vital for reliable LULC assessments. Moreover, XGBoost provided a robust framework for predictive analysis, highlighting critical environmental variables such as proximity to urban centres and changes in agricultural practices as significant contributors to land cover change. VGG16, a deep learning model fine-tuned for this study, further validated these classifications through superior accuracy in distinguishing urban from vegetative and aquatic areas. Significant changes in agricultural land and water bodies were observed. Agricultural land, which previously encompassed 22% of the study area, has decreased to 18%. This shift reflects changing land use priorities, where the economic allure of urban development often outweighs traditional agricultural returns. Meanwhile, water bodies have seen a reduction from 18% to 12%, a change that could severely influence local biodiversity and water availability, further challenging the wetland's ecological functions. Seasonally, the study found marked fluctuations in vegetation health, impacting biodiversity and ecosystem services. Employing models like XGBoost for classification, supported by Sentinel-2 imagery, we analyzed 300 samples with high classification precision across several classes.
The results of the XGBoost model, notably with a maximum depth of 10, 500 estimators, and a learning rate of 0.01, reveal classification accuracies for forest at 91% and built-up areas at 89%. Though overall accuracy was 75.56%, the findings pinpoint the need for improvements in urban classification, where precision and recall were lower at 63.64% and 58.33%, respectively, due to spectral overlaps. The analysis indicates significant land cover distribution, with forests dominating 37.4% of the area, highlighting their ecological importance. Water bodies account for 25.6%, showing their vital role in local hydrology and biodiversity support. Barren land comprises 15.4%, potentially offering prospects for development or restoration. Agricultural and built-up areas represent modest portions at 5.8% each, reflecting minimal urban and farming activities. Swamp vegetation holds 10%, emphasizing crucial biodiversity zones. Furthermore, for thermal dynamics, the study utilized MODIS Terra and Aqua datasets, measuring land surface temperature (LST) and the Urban Heat Island (UHI) effect. The data indicated notable temperature changes influencing the microclimate, especially during extreme weather conditions. Landsat-8 imagery's high resolution facilitated detailed mapping of thermal variations, paired with NDVI to assess vegetative health, which was consistently positive, signaling continued resilience against seasonal environmental stresses. The study also incorporated precipitation data from CHIRPS, indicating declining annual rainfall, which could contribute to increasing thermal pressure within the wetland region. This precipitation shortage might elevate the UHI effect and induce further stress on local ecosystems. Our feature importance analysis highlights the spectral band B12 as crucial for distinguishing built-up and agricultural areas, aligning with existing research.
This insight could direct future enhancements in land classification models, particularly for areas with spectral similarity issues. The CNN-based models also played a significant role, with detailed maps showing forest coverage of 550 km², water bodies at 350 km², and barren lands at 200 km², further validating the XGBoost findings. For instance, vegetation indices during the dry season showed a significant decrease, affecting both carbon sequestration rates and habitat availability for species during critical breeding periods. These findings resonate with the Sustainable Development Goals (SDGs), particularly SDG 6 (Clean Water and Sanitation), highlighting the interplay between water management and ecosystem integrity; SDG 13 (Climate Action), emphasizing the need for resilience-building measures to adapt to climate variability; and SDG 15 (Life on Land), which advocates for the conservation and sustainable use of terrestrial ecosystems. The study's outcomes stress the urgent need for integrative management approaches that balance ecological preservation with developmental pressures. Emphasizing the implementation of sustainable urban planning and the restoration of degraded habitats can help mitigate the negative impacts of rapid urbanization. Community engagement and education on the ecological and economic benefits of wetlands can also foster stewardship and conservation efforts. Finally, this research not only maps and analyses the transformation of Haiderpur Wetland over 34 years but also highlights the role of advanced analytics in ecological studies. By integrating CNNs, XGBoost, and VGG16, this study sets a precedent for future research in landscape dynamics, providing a model for assessing other critical ecosystems worldwide. The findings underscore the need for policy interventions that prioritize wetland preservation, aligning with international conservation goals, and protecting natural capital for future generations. 
Overall, it serves as a call to action for stakeholders at all levels to recognize and reinforce the essential value of wetlands in our global ecological continuum. In conclusion, the data underscores the importance of conservation for areas like Haiderpur Wetland, where forests and water bodies serve critical ecological functions. The study’s models and methodologies set the stage for better management practices, offering insights that guide conservation, urban planning, and sustainable resource management efforts. Collaborative approaches involving policymakers and stakeholders are essential to address environmental challenges and ensure a balanced coexistence between ecological preservation and developmental activities. Keywords: Google Earth Engine (GEE), Land Use and Land Cover (LULC), Landsat Imagery, Land Surface Temperature (LST), Urban Heat Island (UHI), Haiderpur, Sustainable Urban Planning, Modified Normalized Difference Water Index (MNDWI), Normalized Difference Vegetation Index (NDVI), Soil-Adjusted Vegetation Index (SAVI), Random Forest, Sustainable Development Goals (SDG: 2, 6, 13, 15).
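The spectral indices listed in the keywords above (NDVI, MNDWI, SAVI) follow standard band-ratio formulas; a short sketch with illustrative reflectance values (not values from the study):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: high for healthy vegetation."""
    return (nir - red) / (nir + red)

def mndwi(green, swir):
    """Modified NDWI (green vs SWIR): positive over open water."""
    return (green - swir) / (green + swir)

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index with soil brightness factor L."""
    return (1 + L) * (nir - red) / (nir + red + L)

# Illustrative surface reflectances for two pixel types
vegetated = {"red": 0.05, "nir": 0.45, "green": 0.08, "swir": 0.20}
water = {"red": 0.03, "nir": 0.02, "green": 0.06, "swir": 0.01}
```

In a classification stack these indices are typically computed per pixel and appended to the raw bands as extra features; the sign conventions above are what lets MNDWI separate the water bodies from vegetated cover discussed in the abstract.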

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Monitoring Peatland Dynamics over Agricultural Areas in Estonia using Sentinel-1 SAR data

Authors: Stavroula Kanakaki, Juan M.
Affiliations: European Commission, University of Alicante
The scope of the current study is to monitor and analyze peatland regions in Europe. Peatlands play a significant role in environmental sustainability, and restoration of these ecosystems is crucial given the unique services they provide. It is well documented that peat can retain up to 20 times its weight in water, moderating the flow of water through the landscape and enhancing resilience to extreme weather conditions. Peatlands also contribute to improved water quality and reduced flood risk, are critical habitats for wildlife, act as global carbon stores, and provide services such as drinking water filtration, flood prevention, and historical archives. They are also utilized for grazing and recreational activities. Notably, peatlands are among the most carbon-rich ecosystems on Earth, storing twice as much carbon as the world's forests, underscoring their importance not only as ecological treasures but also as crucial elements in global carbon management. In the frame of the EU’s Biodiversity Strategy for 2030, among the planned actions is the launch of an EU nature restoration plan. This plan will engage EU countries in making concrete commitments and undertaking actions to effectively restore degraded ecosystems. These efforts will focus particularly on ecosystems with the greatest potential for carbon capture and storage, and on those that can help prevent and mitigate the impact of natural disasters. Monitoring agricultural and environmental policies helps assess the impact of the Common Agricultural Policy (CAP). Under the Integrated Administration and Control System (IACS), farmers submit their CAP payment applications online, as noted by the European Parliament. National authorities then verify farmers' compliance with the conditions for receiving these payments.
Additionally, EU countries utilize IACS to ensure farmers respect the CAP conditionality, which includes statutory management requirements (SMRs) and good agricultural and environmental conditions (GAECs). Consequently, the IACS is a significant and valuable system for monitoring CAP performance. As mentioned above, to receive direct payments, farmers have to implement standards for good agricultural and environmental conditions (GAECs) under “conditionality”, including one for the protection of wetlands and peatlands (GAEC 2). While GAEC 2 aims to protect carbon-rich soils, the actual requirements remain weak: there is no obligation to halt or reverse degradation, and Member States can ask to delay the implementation of GAEC 2 until 2025. Overall, countries lack strong action to safeguard peatlands through GAEC 2, and insufficient data and mapping of peatlands are often named as barriers to its early implementation. Although often overlooked, peatlands deliver important ecosystem services for humans, nature and the planet, and time is pressing to ensure that peatlands are adequately protected, restored, and sustainably managed, as noted in the report by the European Environmental Bureau (2022). Within the framework of IACS Data Sharing, which aims to gather as much information as possible on agriculture in the European Union, this study demonstrates the significance of IACS data in supporting CAP policy, specifically focusing on peatland mapping and monitoring. At European Union level, data availability varies by country, leading to distinct processing approaches in each Member State. In Estonia, multiple datasets have been employed to assess whether IACS spatial data, alongside other complementary data, can aid in peatland mapping and monitoring. This includes agricultural parcel data annually declared by farmers for subsidy retrieval, which spatially describes each parcel (polygon) and its crop type.
This study focuses on northern Europe, which is richer in peatlands than southern regions. As mentioned before, data pertaining to agricultural parcels and crop type information were utilized in conjunction with peatland maps. This integration facilitated a comprehensive analysis of areas rich in peatland, aiming to accurately map these regions. Challenges in data availability significantly impact the methodology employed in peatland research. The main challenge is that while there are numerous time-series datasets related to agricultural activity (for many Member States), information on peatlands typically lacks periodic updates. Often, there is only a single map available, which may or may not be recent, failing to meet the needs of multi-temporal studies. Additionally, the study incorporates the soil map of Kmoch et al. (2021), explicitly isolating the peatland layer of Estonia. The agricultural parcel data describing crop type were analyzed as well. Finally, satellite data, specifically Sentinel-1A Single Look Complex (SLC) images acquired in Interferometric Wide Swath (IW) mode with dual polarization (VV and VH), sensitive to vegetation and soil moisture levels, were exploited to enhance the understanding of peatland dynamics. The agricultural parcels were overlaid with the peatland areas to create three different classes: A) agricultural parcels within peatland zones, B) agricultural parcels outside of peatland zones, and C) pure peatland zones without any agricultural land. These three classes were studied using satellite data to extract the signal characteristics of each zone separately.
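The per-zone signal extraction described above amounts to summarizing backscatter within each polygon. A minimal sketch with invented sample values follows; the dB conversion is the standard one, but the numbers are not from the study:

```python
import numpy as np

def to_db(sigma0_linear):
    """Convert linear backscatter (sigma0) to decibels."""
    return 10.0 * np.log10(sigma0_linear)

def parcel_stats(values_db):
    """Mean and standard deviation of backscatter within one parcel."""
    return float(np.mean(values_db)), float(np.std(values_db))

# Hypothetical VH backscatter samples (linear power) for one parcel on one date
vh_linear = np.array([0.010, 0.012, 0.009, 0.011, 0.013])
mean_db, std_db = parcel_stats(to_db(vh_linear))
```

Repeating this per acquisition date and per class (parcels on peatland, parcels off peatland, pure peatland) yields the per-crop time-series curves that are then read against the precipitation record.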
For each polygon, the mean value and standard deviation of various SAR features were calculated. This data was used to generate a single curve for each type of crop within these categories, e.g. by visualizing the backscatter values for both VV and VH polarizations. In addition, to aid in the interpretation of the graphs, daily meteorological data on precipitation for the study area were included as inputs. The graphs were analyzed in order to identify specific patterns, differences and similarities among various types of land use. Additionally, the study examines whether fluctuations in values, such as drops or peaks, correlate with specific weather events or seasonal transitions (e.g. spring to summer), which may affect vegetation moisture and structure.
REFERENCES
European Parliament, “Direct payments,” accessed November 28, 2024, https://www.europarl.europa.eu/factsheets/en/sheet/109/first-pillar-of-the-common-agricultural-policy-cap-ii-direct-payments-to-farmers.
European Commission, “Biodiversity strategy for 2030,” accessed November 28, 2024, https://environment.ec.europa.eu/strategy/biodiversity-strategy-2030_en?wt-search=yes.
European Environmental Bureau, “Peatlands and wetlands in the new CAP: too little action to protect and restore,” April 2022, https://eeb.org/wp-content/uploads/2022/04/Briefing-Peatlands-and-Wetlands-No-Branding.pdf.
Kmoch A., Kanal A., Astover A., Kull A., Virro H., Helm A., Pärtel M., Ostonen I., Uuemaa E. “EstSoil-EH: a high-resolution eco-hydrological modelling parameters dataset for Estonia.” Earth System Science Data 13 (2021): 83-97.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Mapping invasive Prosopis spp. and native wetland vegetation communities in Point Calimere Ramsar Site using Sentinel-2 multiseasonal spectral temporal metrics

Authors: Dr Arasumani Muthusamy, Mr Kumaresan M, Prof Balasubramanian Esakki
Affiliations: Sathyabama Institute of Science and Technology, National Institute of Technical Teachers' Training and Research
Native species in coastal wetland ecosystems face an increasing threat from invasive plants. In India, the Point Calimere Ramsar Site's coastal tropical dry evergreen forests, grasslands, and mangroves are being adversely affected by Prosopis species. This invasion poses a significant risk to numerous avian, mammalian, and amphibian species that depend on these habitats. To restore wetland ecosystems and mitigate further invasions, it is imperative to monitor and track invasive species. This investigation examined the utilization of multi-season Sentinel-2 Spectral Temporal Metrics (STM) for mapping coastal native and non-native vegetation communities. The study employed summer, monsoon, and post-monsoon season datasets with Support Vector Machine (SVM) classification on the Google Earth Engine (GEE) platform. Results indicated that the combination of summer and post-monsoon Sentinel-2 spectral-temporal metrics yielded the highest accuracy (94% overall) for mapping Prosopis, tropical dry evergreen forests, and coastal grasslands. The monsoon dataset proved most effective for mapping mangroves. However, utilizing spectral-temporal metrics from all seasons produced the most favorable average results across all land cover classes. The study also analyzed Prosopis distribution and fragmentation within various landscapes of the Ramsar site using Fragstats. Findings revealed that Prosopis is extensively distributed throughout the Point Calimere Wildlife Sanctuary, presenting a substantial threat to local wildlife. We anticipate that this map will be utilized for ongoing Prosopis removal efforts at the study site. This comprehensive approach demonstrates the potential for monitoring Prosopis and native vegetation in coastal tropical wetland habitats using Sentinel-2 STM.
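The core idea of spectral-temporal metrics is to reduce a seasonal image stack to per-pixel temporal statistics that then feed a classifier. A minimal numpy sketch (not the GEE implementation used in the study; band count, stack size, and the choice of median/quartile metrics are illustrative assumptions):

```python
import numpy as np

# Illustrative sketch: spectral-temporal metrics (STM) collapse a seasonal
# stack of images into per-pixel temporal statistics. Shapes are hypothetical.
rng = np.random.default_rng(1)
stack = rng.random((8, 4, 64, 64))   # (acquisitions, bands, rows, cols)

def stm(stack):
    """Per-pixel temporal metrics: median and 25th/75th percentiles per band."""
    return np.concatenate([
        np.median(stack, axis=0),
        np.percentile(stack, 25, axis=0),
        np.percentile(stack, 75, axis=0),
    ])  # (3 * bands, rows, cols)

summer = stm(stack)   # one metric image per season in the study
print(summer.shape)   # (12, 64, 64)
```

In the study, such metric images from the different seasons are stacked and classified with an SVM; here the classification step is omitted.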
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: B.04.01 - POSTER - Satellite based terrain motion mapping for better understanding geohazards

Better understanding geohazards (such as landslides, earthquakes, volcanic unrest and eruptions, coastal lowland hazards and inactive-mine hazards) requires measuring terrain motion in space and time, including at high resolution, with multi-year historical analysis and continuous monitoring. Several EO techniques can contribute depending on the context and the type of deformation phenomena considered, and some provide wide-area mapping (e.g. thanks to Sentinel-1). Advanced InSAR or pixel-offset tracking using radar imagery, including newly available missions with different sensing frequencies (e.g. L-band), can help provide relevant geoinformation. The same holds for optical stereo-viewing and optical correlation techniques, including for wide-area mapping. There is a need to assess new EO techniques for retrieving such geoinformation both locally and over wide areas, and to characterise their limitations. New processing environments able to access and process large data stacks have increased user awareness, acceptance and adoption of EO, and have created opportunities for collaboration, including co-development and greater combination of data sources and processing chains. With this in mind, it is necessary to understand the agenda of geohazard user communities and the barriers to reaching their goals.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Identifying Deformation Onset Timing at Socompa Volcano, Chile, Using Breakpoints in InSAR Time Series

Authors: Benjamin Kettleborough, Dr John Elliot, Susanna Ebmeier
Affiliations: COMET, School of Earth and Environment, University of Leeds
Sudden onsets or changes in deformation at an active volcano are important targets for volcano monitoring. Pressure changes from increased magmatic activity or gas build-up and release can be an early indicator of volcanic eruptions or give insight into the subsurface magmatic system. For inaccessible or poorly monitored volcanoes, satellite-based Interferometric Synthetic Aperture Radar (InSAR) provides an ideal tool to monitor this deformation remotely. The availability of ESA’s systematically acquired Sentinel-1 imagery has allowed near-global automatic processing and analysis of on-shore deformation since its launch in 2014. This has facilitated the detection of deformation at volcanoes with historical records of eruption or unrest, enabling early warnings of eruptions or a greater understanding of the deformation mechanism and, hence, the hazard a volcano poses. Here we explore breakpoint analysis as a tool to automatically identify changes from baseline deformation behaviour. A breakpoint is the point at which a time series breaks from the status quo, whether through a gradient change or a discontinuity. Furthermore, breakpoint analysis can allow for the retrospective identification of the timing of deformation onset to help understand the potential cause of deformation and its possible triggers. For example, breakpoint analysis has recently been used to investigate volcano-volcano interactions in Iceland and, alongside analysis of the cross-correlation between thermal and deformation data, has been used to identify causes of inflation at Domuyo volcano, Argentina. Here, we apply breakpoint analysis to the first geodetically recorded deformation at Socompa Volcano, Chile. Socompa is a stratovolcano at the eastern edge of the Atacama Desert, on the border with Argentina. Socompa’s last recorded eruption was 7,200 years ago and it had been assumed to be quiescent, with no evident deformation over the prior 28 years. 
There is, however, evidence of magmatic activity, with hotspots near the summit, fumaroles, and hot springs at Socompa Lagoon to the south and on the Quebrada del Agua fault to the east. In 2020 there was a relatively deep (112 km) intraslab magnitude 6.8 earthquake 126 km away to the north. Around the same time, Socompa started uplifting at a rate of 17.5 mm/yr, having shown no deformation in InSAR studies covering Socompa since 1992. Previously published breakpoint analysis, using Markov Chain Monte Carlo (MCMC) methods to fit a piecewise-linear model to Global Navigation Satellite System (GNSS) data, found that the uplift at Socompa originated in November 2019, 197 ± 12 days before the magnitude 6.8 earthquake. There has since been a magnitude 7.4 earthquake in 2024, 151 km away, which we will use to investigate any change in Socompa’s deformation. We are establishing alternative methods, including Bayesian Changepoint Detection and Autoregression-based Change Finder methods, which we will test against existing approaches in terms of accuracy and sensitivity. These are promising because, whilst preserving a distribution of possible breakpoints as with MCMC, and hence a measure of uncertainty, they are computationally less intensive. Additionally, they do not require an exact parametrised prior model, allowing application to separate pixels without being constrained by assumptions about the deformation history. This permits the full exploitation of InSAR observations to gain greater insight into the spatio-temporal variations in deformation. These methods will be used to investigate and confirm the onset time of the 2019/20 deformation as well as to detect any possible deformation change occurring after the 2024 earthquake. Further, we plan to use independent remote sensing datasets to investigate any changes in hotspot temperatures and edifice-wide median thermal anomalies. 
This is to pinpoint the timing of the activation of any magmatic systems and to use the correlation and lag between deformation and thermal data to investigate the mechanism of deformation. We plan to apply these methods to the deformation of volcanoes in the automatically processed COMET LiCSAR portal (https://comet.nerc.ac.uk/comet-volcano-portal/) and to investigate spatial and temporal links to nearby volcano-tectonic events for the region.
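As an illustration of the breakpoint idea discussed in this abstract, a minimal grid-search fit of a continuous piecewise-linear model could look like the sketch below. This is not the MCMC or Bayesian changepoint machinery used by the authors; the time series, noise level, and onset epoch are entirely synthetic:

```python
import numpy as np

# Illustrative sketch: find a single breakpoint in a displacement time
# series by fitting a continuous piecewise-linear model at each candidate
# epoch and keeping the best fit (lowest residual sum of squares).
rng = np.random.default_rng(2)
t = np.arange(100, dtype=float)                      # epochs (hypothetical)
true_bp = 60
d = np.where(t < true_bp, 0.0, 0.5 * (t - true_bp))  # uplift starting at t = 60
d += rng.normal(0.0, 0.4, t.size)                    # observation noise

def fit_breakpoint(t, d):
    best_rss, best_bp = np.inf, None
    for k in range(2, t.size - 2):                   # candidate breakpoints
        # design matrix: intercept, background slope, extra slope after the break
        hinge = np.maximum(t - t[k], 0.0)
        A = np.column_stack([np.ones_like(t), t, hinge])
        coef, *_ = np.linalg.lstsq(A, d, rcond=None)
        rss = np.sum((A @ coef - d) ** 2)
        if rss < best_rss:
            best_rss, best_bp = rss, t[k]
    return best_bp

print(fit_breakpoint(t, d))   # close to the true onset epoch, 60
```

Bayesian variants replace the single best estimate with a posterior distribution over the breakpoint epoch, which is what gives the uncertainty the abstract mentions.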
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: The Use of InSAR Data to Identify Areas at Risk of Continuous Deformations Throughout the Country of Poland

Authors: Maria Przyłucka, Michalina Cisło, Zbigniew Perski
Affiliations: Polish Geological Institute - National Research Institute
Poland is a country with relatively stable geology, yet it is still exposed to geohazards such as landslides, ground subsidence and uplift associated with mining activities and changes in the hydrogeological conditions of the subsoil, as well as floods and induced seismic events. As the country's land area is large (312,696 km²), remote sensing methods are a useful tool for detecting and monitoring changes in the land surface. The advancement of techniques and the increased availability of satellite data over the past decade have opened up new opportunities for the analysis and identification of geological hazards. In our work, we will demonstrate the application of the InSAR satellite interferometry technique for large-scale and long-term geohazard analyses. The collected SAR satellite data from the Sentinel-1 mission enabled the identification of all locations with active continuous deformations occurring across the country. The study involved a geostatistical analysis of over 4 million PSI points and the identification of more than 300 areas affected by ground motions. Based on these analyses, regions where deformations are most significant on a national scale were identified, with the majority linked to the mining of mineral deposits. For selected critical areas (the Upper Silesian Coal Basin, the Lublin Coal Basin and the Legnica-Głogów Copper Belt), additional analyses and InSAR processing using SAOCOM satellite data were carried out to extend the basic information and deepen the research results. Deformations there reach decimetres per year, often exceeding a metre in total; as a result, ground movements in the central parts of the mining basins are not identified by the PSI technique. The complementary use of PSI and DInSAR techniques overcame this limitation. 
By complementing the information with DInSAR processing and analyses of large-scale subsidence caused by years of underground mining, more reliable deformation maps were generated, enabling a better assessment of the actual impact of mining activities on the surface. Such analysis performed for the most endangered areas of the country, against the background of a large-area analysis, allowed for a comprehensive characterisation of ground movements occurring in Poland. The work demonstrates the usefulness of SAR satellite data to support geohazard monitoring over wide areas, including the scale of an entire country such as Poland. The extensive spatial coverage of remote sensing observations provides access to high-risk areas as well as to regions that are otherwise difficult to monitor. The undeniable potential of satellite data has made it possible to uniquely identify all sites of continuous deformation, assess the effects of these movements, identify areas of key importance and initiate predictive analyses to identify areas potentially at risk in the future.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Enhanced Atmospheric Correction of InSAR Data Using Variable Tropospheric Layer Heights and Multi-Source Global Ionospheric Maps

Authors: Reza Bordbari, Andy Hooper, Professor Tim Wright, Yasser Maghsoudi
Affiliations: University Of Leeds, University of Exeter
Atmospheric effects significantly impact the accuracy of Interferometric Synthetic Aperture Radar (InSAR) measurements, necessitating precise corrections for both ionospheric and tropospheric delays. This study builds upon existing methods for calculating tropospheric delays directly in the line-of-sight (LOS) direction, introducing the novel concept of variable maximum tropospheric heights to enhance accuracy. By dynamically defining the upper limit of the tropospheric layer, this approach improves tropospheric delay estimation. Additionally, corrections for ionospheric delays are refined using global ionospheric maps (GIMs) from multiple analysis centers. Tropospheric delays in the slant-range direction were estimated using the ERA5 global reanalysis dataset from the European Centre for Medium-Range Weather Forecasts (ECMWF), which provides a spatial resolution of 0.25° and hourly temporal sampling. While previous methods calculate zenith path delays first and then map them to the LOS direction geometrically (e.g., GACOS), such approaches can introduce biases under spatially anisotropic atmospheric conditions, particularly at large incidence angles. In this study, we processed ERA5 data to estimate slant delays directly along the LOS direction. Atmospheric parameters from ERA5 were interpolated temporally and spatially, with cubic splines applied vertically across 37 pressure levels to ellipsoidal heights. Slant-range delays were calculated by numerical integration along the LOS at 50-meter intervals, considering up to 70 different tropospheric layer heights ranging from 4 km to 40 km. The ionosphere extends roughly from an altitude of 60 to 1500 km, with a maximum electron concentration at around 450 km. The total electron content (TEC) of the ionosphere varies with altitude, geographic location, time of day, season, and geomagnetic and solar activity. 
To address these variations, global networks of permanent IGS GNSS stations are utilized to generate maps and provide TEC estimates. For this study, we employed eight vTEC products: IGS (International GNSS Service), CAS (Chinese Academy of Sciences), CODE (Center for Orbit Determination in Europe), ESA/ESOC (European Space Agency/European Space Operations Centre), UPC (Universitat Politècnica de Catalunya), NRCan (Natural Resources Canada), and JPL (Jet Propulsion Laboratory) low- and high-resolution products. These products differ in estimation techniques, spatial resolutions, and temporal sampling rates. Additionally, different rescaling factors, ranging from 0.75 to 1, were analyzed to evaluate the proportion of vTEC to consider in ionospheric delay calculations. The methodology is applied to full-resolution and multi-looked Sentinel-1 SAR data over the Antarctic Peninsula and West Turkey test sites, demonstrating its effectiveness in mitigating atmospheric artifacts. Results indicate that incorporating variable tropospheric heights for slant-range delay estimation and leveraging multi-source GIMs provides a robust framework for improving the precision of InSAR measurements in diverse geophysical applications. By applying this approach, the averaged standard deviation of unwrapped interferograms decreased by 28% and 30% for tropospheric and ionospheric effect-corrected interferograms, respectively.
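The slant-range integration step described above can be illustrated with a toy numerical integration. This sketch is not the ERA5/ECMWF workflow: the exponential refractivity profile (surface value 300 N-units, 7 km scale height) is an assumed stand-in for the interpolated pressure-level fields, and only the 50 m sampling and the variable upper tropospheric limit mirror the abstract:

```python
import numpy as np

# Illustrative sketch: integrate refractivity numerically along the line of
# sight at 50 m steps, with a variable upper limit for the tropospheric layer.
def slant_delay_m(incidence_deg, h_max_m, step_m=50.0):
    theta = np.radians(incidence_deg)
    s = np.arange(0.0, h_max_m / np.cos(theta) + step_m, step_m)  # path samples
    h = s * np.cos(theta)                        # height of each path sample
    N = 300.0 * np.exp(-h / 7000.0)              # refractivity in N-units (assumed)
    # trapezoidal integration of 1e-6 * N along the slant path
    return 1e-6 * np.sum(0.5 * (N[1:] + N[:-1]) * np.diff(s))

# The study sweeps ~70 candidate layer heights between 4 km and 40 km.
for h_max in (4_000, 10_000, 40_000):
    print(h_max, round(float(slant_delay_m(35.0, h_max)), 3))
```

The delay grows with the integration ceiling and saturates once the ceiling exceeds the refractivity scale height several times over, which is why the choice of maximum tropospheric height matters for the estimate.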
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Validation of ICEYE PS-InSAR Using Induced Nonlinear Deformation of Corner Reflectors

Authors: Anurag Kulshrestha, Dr. Valentyn Tolpekin, Mr. Michael Wollersheim, Dr. Qiaoping Zhang
Affiliations: ICEYE Oy
Persistent Scatterer Interferometry (PSI) for surface subsidence monitoring has been well established for spaceborne SAR missions, like ERS, ENVISAT, and Sentinel-1. New space SAR missions have yet to demonstrate precise deformation estimations using PSI techniques. In this endeavour, a campaign was set up to assess the accuracy of estimating surface deformations induced on corner reflectors using ICEYE Ground Track Repeat (GTR) SAR satellites. To facilitate this experiment, we produced a set of specialized, custom-designed corner reflectors that can be manually adjusted to induce vertical movement with a precision of one-eighth of a millimeter. In this campaign, four of those corner reflectors were set up in Calgary, Canada within an area of ~750 square meters. Non-linear deformation trends were induced on three of them while the fourth one was kept as a control. On the first reflector, we induced a periodic function modulated over a linear deformation velocity. This periodic pattern simulates deformations that occur mainly due to periodic thermal expansion or groundwater variation. On the second reflector, we induced a breakpoint model with three piecewise linear velocities. This simulates sudden changes in deformation velocities that have been observed to occur before a mining shaft collapses. On the third reflector, we induced a Heaviside model with three discontinuities occurring over the ground surface. Such discontinuities indicate pre-hazard precursory deformation patterns that can be used to flag impending hazards. These discontinuities were also used to test the phase unwrapping error limits during the PSI processing. The daily reflector adjustment campaign was carried out from November 22, 2023 until March 29, 2024, lasting for a total duration of 129 days. During the adjustment campaign, modelled deformation values were induced on the corner reflectors in the vertical direction. 
In addition, the local temperature was noted to account for any thermal expansion effects. ICEYE’s Spotlight Extended Dwell (SLED) mode images were taken over the reflector area with an almost daily revisit rate. The perpendicular baselines varied within an orbital tube of radius ~4.5 km. The image stack was coregistered and PS-InSAR processing was performed over a patch of ~330 m in azimuth and ~400 m in range direction. A total of 139 PS points were selected, including the pixels over the four corner reflectors. The point over the control corner reflector was chosen as the reference point. A periodogram model was then used to estimate the height and non-linear deformation over the points. The deformation time series over the PS points was then compared with the induced deformation values. To assess the accuracy, the correlation coefficient and the root mean squared error (RMSE) between the induced and PSI-estimated deformation time series were calculated. For the periodic model, the correlation coefficient was 0.99 and the RMSE was 1.57 mm. For the breakpoint model, the correlation coefficient was 0.84 and the RMSE was 1.23 mm. For the Heaviside model, we observed that discontinuities under the unwrapping error limit of a quarter of the X-band radar wavelength were unwrapped correctly. However, it was challenging to unwrap the discontinuities beyond that limit. The correlation coefficient was 0.69 for this case. In conclusion, the experiment showed a high degree of accuracy even for non-linear deformations; however, accuracy was impacted when discontinuities exceeded the unwrapping error limit, which is expected and was the purpose of that part of the experiment. To the best of our knowledge, this was the first attempt to validate PS-InSAR results by inducing non-linear deformation trends on corner reflectors. This experiment establishes that ICEYE data are verified for precise deformation estimation using PS-InSAR techniques.
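The three induced deformation models and the correlation/RMSE comparison can be sketched as follows. This is not ICEYE's processor; the amplitudes, break epochs, step sizes, and noise level are all hypothetical, and only the model shapes (periodic over linear, three-segment piecewise linear, three Heaviside steps) follow the abstract:

```python
import numpy as np

# Illustrative sketch: the three induced deformation models (mm) over the
# 129-day campaign, and the validation metrics used to compare induced vs
# PSI-estimated series. All numeric values are hypothetical.
rng = np.random.default_rng(3)
t = np.arange(129, dtype=float)                            # campaign days

periodic = 0.05 * t + 5.0 * np.sin(2 * np.pi * t / 60.0)   # periodic over linear
breaks = np.piecewise(t, [t < 40, (t >= 40) & (t < 90), t >= 90],
                      [lambda t: 0.1 * t,
                       lambda t: 4.0 + 0.3 * (t - 40),
                       lambda t: 19.0 - 0.05 * (t - 90)])  # three piecewise velocities
heaviside = 2.0 * (t > 30) + 3.0 * (t > 70) + 4.0 * (t > 110)  # three steps

def validate(induced, estimated):
    """Correlation coefficient and RMSE between induced and estimated series."""
    rmse = np.sqrt(np.mean((induced - estimated) ** 2))
    corr = np.corrcoef(induced, estimated)[0, 1]
    return corr, rmse

estimated = periodic + rng.normal(0.0, 1.5, t.size)        # stand-in PSI estimate
corr, rmse = validate(periodic, estimated)
print(round(corr, 2), round(rmse, 2))
```

The same `validate` comparison would be applied to each reflector's series; in the campaign it yielded the 0.99/1.57 mm, 0.84/1.23 mm, and 0.69 figures quoted above.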
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Decade-Long Ground Deformation Analysis from Urban Expansion to Geological Influences Using Sentinel-1 PSI in Cluj-Napoca, Romania

Authors: Péter Farkas, Gyula
Affiliations: Geo-sentinel Ltd
Assessing Natural and Anthropogenic Ground Deformation Using Sentinel-1 PSI in the Region of Cluj-Napoca, Romania

Continuous analysis of ground deformation is crucial for assessing natural hazards and monitoring human-induced activities. This study presents the results of a Persistent Scatterer Interferometry (PSI) analysis of ground deformations in the Cluj-Napoca region, Romania. Cluj-Napoca, the second most populous city in Romania, is situated in a hilly environment on the banks of the Someșul Mic River, making it ideal for such an assessment. Over the past few decades, the city's urbanization has progressed rapidly, more than doubling its area in 30 years. The city's expansion has reached neighboring hills with slopes of up to 26%, which are prone to landslides. The PSI analysis was conducted using over 10 years of Sentinel-1 descending data via the Interferometric Point Target Analysis module of the Gamma software. For interpretation, we integrated local geological information and included a geotechnical perspective. A thorough analysis is necessary due to the presence of various types of deformations, often superimposed, related to mass movements, groundwater pumping, sediment compaction, industrial operations, mining, and earthworks related to road construction. The results are expected to show significant movements in recently built areas at the city's edges, often caused by the combined effects of anthropogenic activities and geological conditions. This study underscores the necessity of local studies, as country- and continent-wide maps, while useful for large-area mapping, may not provide the same level of detail and specificity. By using locally selected references and adjusting parameters to the research goals, our analysis is more up-to-date and tailored to the region and user needs. 
Furthermore, our detailed analysis, involving local knowledge, experts, and auxiliary data, provides valuable information regarding the risks, interpretation, origin, and characterization of detected movements. This demonstrates the importance of collaboration between remote sensing and local geotechnical experts to maximize the potential and effectiveness of InSAR data. Accurately mapped and quantified ground deformations can enhance the understanding of geological processes and assess the risks associated with urban development in the area. Detected slope instabilities, subsidence, or uplift can significantly impact the built environment and should be considered in the planning and design of new buildings and infrastructure.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Ground Deformation Detection and Risk Information Service for Slovenia

Authors: Mateja Jemec Auflič, Karin Kure, Ela Šegina, Krištof Oštir, Tanja Grabrijan, Matjaž
Affiliations: Geological Survey Of Slovenia, University of Ljubljana, Faculty of Civil and Geodetic Engineering, GeoCodis Ltd
Effective approaches to reducing landslide risk include the development of methods to identify landslide-prone areas and of risk reduction concepts to mitigate the effects of landslides in these areas. Among the available techniques, landslide monitoring is a mandatory step in collecting data on landslide conditions (e.g., areal extent, landslide kinematics, surface topography, hydrogeometeorological parameters, and failure surfaces) from different time periods and at different scales, from site-specific to local, regional, and national, to assess landslide activity. In 2023, severe rainfall events triggered more than 8000 landslides in Slovenia. Most of these are categorized as shallow landslides and soil slips, and they primarily caused damage to buildings, infrastructure and agricultural land. Among the numerous registered landslides there are also some with a volume of more than one million m³, which, in addition to damaging buildings, endangered the lives of hundreds of people and even claimed lives. The EO4MASRISK project aims to fully utilise Sentinel-1 data, evolving from periodically updated ground deformation maps to early mapping and monitoring of landslide activity to increase urban resilience. Optical techniques will enable a better understanding of the extent and hazard of landslides. The priority is to support the landslide inventory using the mCube service DisMapper and the GEP-based ALADIM service. The main aim is to use high-resolution optical data (Sentinel-2, Landsat, and Planet data) to map the numerous landslides triggered by the 2023 floods and to monitor significant changes in landslides in release areas. The EO4MASRISK service will help stakeholders and end-users to easily identify areas of landslide movement and the related potential impacts on built-up areas. 
The EO4MASRISK service will provide the following information: (1) ground deformation time series; (2) a yearly ground deformation velocity map; (3) a landslide activity map (three levels: low, medium, high); and (4) a map of vulnerable elements at risk, e.g. buildings and infrastructure (three levels: low, medium, high).
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Satellite and terrestrial L-band radar interferometry in Alpine environment: insights from slope instabilities in Val Canaria (Switzerland)

Authors: Alessandro De Pedrini, Christian Ambrosi, Andrea Manconi, Dr. Prof. Federico Agliardi, Rafael Caduff, Othmar Frey, Philipp Bernhard, Tazio Strozzi
Affiliations: University of Applied Sciences and Arts of Southern Switzerland SUPSI, Federal Institute of Technology Zurich ETH Z, WSL Institute for Snow and Avalanche Research SLF, University of Milano Bicocca, Department of Earth and Environmental Sciences, Gamma Remote Sensing AG
C-band satellite radar interferometry is commonly employed for regional assessments due to its effectiveness in detecting surface deformation over extensive areas at relatively low cost. However, it faces limitations, such as difficulty in capturing large or rapid displacements and reduced resolution in forested regions, primarily due to the satellite's revisit time and the sensor's wavelength. Recent studies using L-band satellite radar data, including ALOS-2 PALSAR-2 and SAOCOM-1, showed how some of these challenges can be overcome. Moreover, alternative platforms like airborne or terrestrial radar systems offer flexible survey planning tailored to specific site conditions. As part of the MODULATE project (Monitoring Landslides with Multiplatform L-Band Radar Techniques), under ESA’s Earth Observation Science for Society program, we used the GAMMA L-band SAR system mounted on a car to detect and measure surface displacements in Val Canaria (Canton of Ticino, Switzerland) by means of repeat-pass SAR interferometry. This region is highly prone to rock slope failures on both valley sides, with an overall estimated volume of 80 million m³. The valley has experienced significant collapses, such as on 27 October 2009, when a 380,000 m³ collapse partially dammed the Canaria River. Another large-scale failure on the right side, near the locality of Rütan dei Sass, poses a threat of damming the river, potentially causing a flood wave that could impact the A2 highway, one of the main north-south transport routes through the Alps. The car-borne interferometric measurements at L-band revealed surface displacements of up to 10 cm between July and September 2024, clearly highlighting the most active areas of the slope. Persistent Scatterer Interferometry (PSI) from various satellite constellations (including Sentinel-1, Radarsat-2, TerraSAR-X, ALOS-2 PALSAR-2, and SAOCOM-1) was unable to detect these movements due to their rapid and irregular behavior. 
However, Small Baseline Subset (SBAS) processing of SAOCOM-1 images provided displacements consistent with the car-borne results. Our findings highlight the performance, versatility, and high quality of L-band SAR data obtained from different platforms and show that their use is a valid alternative for monitoring fast-evolving landslides on forested slopes. They can also contribute to the effective planning and use of upcoming L-band SAR missions (e.g. NISAR and ROSE-L) for landslide monitoring services.
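The wavelength argument running through this abstract (why L-band copes better with fast motion than C-band) follows directly from the repeat-pass phase-to-displacement relation, d = -λ/(4π)·Δφ, with one fringe per half wavelength of line-of-sight motion. A small sketch, with approximate wavelengths (assumed values, not taken from the abstract):

```python
import numpy as np

# Illustrative sketch: repeat-pass interferometric phase to line-of-sight
# displacement, d = -lambda / (4*pi) * dphi. Wavelengths are approximate.
WAVELENGTH_L = 0.23   # m, approximate L-band
WAVELENGTH_C = 0.056  # m, approximate C-band (Sentinel-1)

def los_displacement(dphi_rad, wavelength_m):
    return -wavelength_m / (4 * np.pi) * dphi_rad

# A 10 cm LOS displacement corresponds to far fewer fringes at L-band than
# at C-band, which is why fast, large motion is easier to unwrap at L-band.
for wl in (WAVELENGTH_L, WAVELENGTH_C):
    fringes = 0.10 / (wl / 2.0)      # one fringe per half wavelength of motion
    print(wl, round(fringes, 1))
```

Fewer fringes between acquisitions means less aliasing of the phase, consistent with SAOCOM-1 (L-band) capturing movements that the C- and X-band PSI stacks missed.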
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Application of L-band SAOCOM-1 satellite data for sinkhole formation research

Authors: PhD Wojciech Witkowski, Artur Guzy, Magdalena Łucka, Xingyu Zhang
Affiliations: AGH University of Krakow
Sinkholes are a type of discontinuous ground surface deformation that occurs on a macroscopic scale all over the world. They are estimated to affect roughly 20% of the global population and can occur naturally or as a result of human intervention in the initial state of equilibrium of the rock mass. In the latter case, the voids in the rock mass are directly related to human activity, such as catacombs or mining. Simultaneously, the energy transition processes of many countries result in the closure and flooding of underground coal mines, leading to an increasing number of observed discontinuous deformations such as sinkholes. In these areas, the land surface is frequently highly urbanised. Therefore, sinkholes pose a direct threat to human life and negatively impact surface infrastructure. In this study, we investigated the potential of using relatively new SAR data from the SAOCOM mission to monitor land surface movements in areas affected by sinkholes. The usefulness of L-band data was validated in a region with the highest risk of discontinuous deformations. Twelve acquisitions were available for the period from May to November 2023. For comparison purposes, data from the Sentinel-1 mission were also analyzed for the same period. Our study was conducted in the region of the underground mine ZGH “Bolesław”, located close to the city of Olkusz in Poland. Mining ceased at the end of 2020. One year later, in 2021, the pumps dewatering the rock mass were switched off, which began the process of rebuilding the water table. On the one hand, this started the uplift of the ground surface; at the same time, the formation of discontinuous deformations intensified. Our research concerned specifically small-scale deformations related to the sinkhole formation process. The study focused on the analysis of two aspects of satellite radar interferometry (InSAR). 
The first was the analysis of signal coherence for the L-band and C-band, and the second was the detectable scale of deformation. Coherence analysis for the L-band was performed for multi-looking with factors 1x2 and 3x6. For 16-day and 32-day temporal baselines, the average coherence was observed to be higher than for the Sentinel-1 results. All the average values for the L-band were above 0.5, while for the C-band this was the case only for single periods. The second aspect of the research was to analyse the ground deformation field using the small baseline subset (SBAS) approach. The obtained displacement velocities ranged from -65 mm/year to +45 mm/year. The results for the L-band data showed groups of points with a significant deformation signal that strongly correlates with the location of zones of possible occurrence of discontinuous deformations. In general, the SAOCOM-1 dataset can be effectively used for monitoring land surface movements related to sinkhole formation. At the same time, the results obtained from the L-band, with its higher spatial resolution, confirm the movement patterns obtained from the C-band, which has a lower spatial resolution. Our study analysed the new SAR L-band sensor to determine potential ground surface movements resulting from local discontinuous deformations. The obtained results could be used for future research and may find application in engineering practice and risk management.
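The coherence comparison described above rests on the standard multilook coherence estimator: the magnitude of the complex correlation between two co-registered SLC images, averaged over a window (e.g. the 3x6 factor mentioned in the abstract). A minimal sketch with synthetic data (not the authors' pipeline; the correlated image pair and window size are illustrative assumptions):

```python
import numpy as np

# Illustrative sketch: estimate interferometric coherence from two
# co-registered SLC patches using a 3x6 multilook window.
rng = np.random.default_rng(4)
shape = (60, 60)
common = rng.normal(size=shape) + 1j * rng.normal(size=shape)
noise = 0.3 * (rng.normal(size=shape) + 1j * rng.normal(size=shape))
slc1, slc2 = common, common + noise     # strongly correlated pair (synthetic)

def coherence(s1, s2, looks=(3, 6)):
    """Magnitude of the complex correlation over looks[0] x looks[1] windows."""
    ly, lx = looks
    ny, nx = s1.shape[0] // ly, s1.shape[1] // lx
    def block(a):
        # reshape so each (ly, lx) window can be summed in one step
        return a[:ny * ly, :nx * lx].reshape(ny, ly, nx, lx)
    num = np.abs(block(s1 * np.conj(s2)).sum(axis=(1, 3)))
    den = np.sqrt(block(np.abs(s1) ** 2).sum(axis=(1, 3)) *
                  block(np.abs(s2) ** 2).sum(axis=(1, 3)))
    return num / den

gamma = coherence(slc1, slc2)
print(round(float(gamma.mean()), 2))    # high coherence for this synthetic pair
```

Averaging such coherence maps per interferogram, for matching temporal baselines at L-band and C-band, gives the kind of per-band comparison (e.g. the above-0.5 L-band averages) reported in the abstract.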
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Monitoring of flood protection systems with InSAR in Austria

Authors: Vazul Boros, Maciej Kwapisz, Petr Dohnalík, Philip Leopold, Alois Vorwagner, Antje Thiele, Madeline Evers
Affiliations: Austrian Institute Of Technology, Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB
Sustainable natural hazard management is essential to promote the energy and mobility transition as well as the circular economy for urban areas. Urban centers, which are often located near rivers, are particularly at risk of flooding. The occurrence and intensity of this natural hazard are constantly gaining importance due to climate change and increasingly frequent heavy rainfall events. Since the catastrophic floods of 2002 in Austria, a shift towards integrated flood risk management has been observed in the way floods are dealt with, in which the issues of technical flood protection are supplemented by the observation and monitoring of existing protection measures. The importance of flood protection for the region was highlighted once again by the central European floods in September 2024, which were caused by the heavy rainfall generated by Storm Boris. The aim of the HoSMoS (HochwasserSchutz Monitoring via Satelliten) research project, which is sponsored by the Austrian Space Applications Programme (ASAP) of the Austrian Research Promotion Agency (FFG), is to investigate the potential offered by satellite-based monitoring of flood protection systems [1]. The project is implemented by the Austrian Institute of Technology, with the assistance of the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB from Germany. Our key partner is viadonau, which as the managing body of the Danube Flood Control Agency is responsible for flood protection along the Danube in the area of Krems and from Stockerau-Zeiselmauer to the Austrian state border. Using multitemporal interferometric synthetic aperture radar (InSAR), long-term deformations of the Earth's surface can already be monitored under certain conditions. The special feature of this remote sensing method is not only that no sensors need to be attached to the structure, but also that it offers the unique possibility of analyzing data retrospectively, e.g. 
for Sentinel data back to 2015. The accuracies currently achieved with InSAR are sufficient for monitoring trends of mass movements or glacier retreat, for example. There are promising results for the use of this technology in the monitoring of bridges, where the accuracy could be increased significantly by compensating for environmental conditions [2]. Currently, the condition of flood protection structures is monitored by means of close-up inspections, with personnel conducting geodetic surveys along the dams using theodolites. Only in rare exceptional cases are fully automated total stations with installed prisms, or locally referenced GNSS sensors, used for permanent surveying. Initial investigations into the use of drones for surveying dams have also revealed various limitations, rendering these methods uneconomical. The innovation of the HoSMoS project consists of investigating the fundamental applicability of InSAR technology to flood protection. The aim is to investigate whether monitoring by satellites is possible in principle under the special circumstances that typically prevail at such structures. For example, the influence of natural vegetation, construction materials, the presence of roads and paths, and the orientation of linear structures on satellite monitoring is to be investigated. The accuracy achievable with InSAR is to be compared with the requirements for monitoring. Seasonal effects and relevant environmental conditions that require compensation are to be identified. In the long term, monitoring by satellites promises great potential for flood protection. It would enable the simultaneous monitoring of deformations for many different structures across a large territory, with a higher temporal and spatial resolution than is currently possible. Long-term trends may be recognized through the retrospective evaluation of deformations. 
The definition of warning thresholds would allow the rapid, systematic identification of potentially critical areas and sections that require closer monitoring or inspection. References [1] https://projekte.ffg.at/projekt/5123067 [2] Vorwagner, A., Kwapisz, M., Leopold, P., Ralbovsky, M., Gutjahr, K.H. and Moser, T. (2024), Verformungsmonitoring von Brücken mittels berührungsloser Satellitenradarmessungen. Beton- und Stahlbetonbau, 119: 636-647. https://doi.org/10.1002/best.202400017
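The warning-threshold idea mentioned above can be illustrated with a small sketch: fit a trailing linear trend to each point's displacement time series and raise a flag when the fitted velocity exceeds a chosen limit. This is a hedged illustration, not the HoSMoS methodology; the window length and the 5 mm/yr threshold are invented for the example:

```python
import numpy as np

def flag_exceedance(dates_days, disp_mm, vel_thresh_mm_yr=5.0, window=6):
    """Return the index of the first epoch whose trailing linear-fit
    velocity (over `window` epochs) exceeds the threshold, else None."""
    for k in range(window, len(disp_mm) + 1):
        t = np.asarray(dates_days[k - window:k], dtype=float) / 365.25  # years
        d = np.asarray(disp_mm[k - window:k], dtype=float)
        vel = np.polyfit(t, d, 1)[0]  # slope in mm/yr
        if abs(vel) > vel_thresh_mm_yr:
            return k - 1  # epoch that triggered the warning
    return None

# Example: stable for a year, then subsiding at 20 mm/yr.
dates = np.arange(0, 730, 12.0)
disp = np.where(dates < 365, 0.0, -(dates - 365) * 20.0 / 365.25)
warn_idx = flag_exceedance(dates, disp)
```

In practice, seasonal and environmental effects would need to be compensated first, as the abstract notes, to avoid false alarms from cyclic signals.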
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: On the importance of large-scale, continually updated InSAR datasets for geohazard monitoring and mitigation

Authors: Karsten Spaans, Andrew Watson, Sarah Douglas, Tom Ingleby
Affiliations: Satsense Ltd
The improved quality and availability of ground movement data in the last decade have highlighted the exposure of infrastructure to deformation-linked geohazards. Founded in 2018, SatSense has created large-scale ground movement products, covering e.g. the United Kingdom and New Zealand at country scale with Sentinel-1 data, and region-wide products using higher-resolution TerraSAR-X and COSMO-SkyMed data. This data is pre-processed and continually updated, allowing unprecedented levels of network-wide monitoring of deformation-linked geohazards. Here, we provide an overview of the SatSense datasets and explore how this data is used to analyze deformation hazard for a variety of infrastructure and asset monitoring applications. Our datasets across the UK and New Zealand capture deformation from a multitude of geohazards (e.g. volcanic, tectonic, mining-related, shrink-swell, and aquifer discharge) that can affect structures and infrastructure. Landslides are of particular importance for the rail and road networks, with direct damage due to movements of the track or road, and deposition of material due to slides on neighbouring slopes and embankments. When monitoring infrastructure networks, the sheer vastness of the data tends to overwhelm clients and partners alike, making analysis and interpretation difficult. To make the data more accessible, we have developed methods to condense millions of ground movement datapoints into easily interpretable products, to identify key areas of concern, and to aid non-specialist customers in working with our data. For property applications, both residential and commercial, we’ve developed risk metrics targeted at movements related to the most critical geohazards that may affect them, allowing for rapid and up-to-date evaluation of property risk and exposure. All our data is viewable through our custom-made web portal, allowing non-specialist users to perform geospatial analysis of our data over their area of interest. 
In this contribution, we present examples of our data showing movements affecting infrastructure and assets, in particular landslides affecting railroads, movement along fault lines affecting highways, seasonal shrink-swell affecting buildings, and mining-related movements affecting a variety of assets. We also demonstrate how our data is used to provide actionable metrics to our clients and partners. The continued uptake of large-scale InSAR ground movement datasets, in light of growing and changing geohazards related to climate change, relies on lowering the barrier to use for non-specialists. By pre-processing the data and distilling the most critical information into actionable maps and metrics, SatSense aims to do just that. Our data product provides an up-to-date overview of ground movements due to geohazards on the asset of interest, alongside providing crucial context by monitoring the surrounding area. With a time history going back at least a decade, cyclical patterns can be identified, leading to an improved understanding of long-term risks caused by various ground-deforming geohazards. With the arrival of additional scientific, operational and commercial SAR satellites covering a wide range of resolutions and radar frequencies, we expect the roll-out of large-scale ground movement geohazard products to increase in scope and coverage.
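One simple way to condense millions of point measurements into an interpretable map, as described above, is to bin points into grid cells and report the fraction of fast-moving points per cell. This is a hypothetical sketch, not SatSense's proprietary method; the cell size and velocity threshold are illustrative assumptions:

```python
import numpy as np

def grid_risk(x, y, vel_mm_yr, cell=100.0, vel_thresh=5.0):
    """Condense point velocities into a per-cell metric: for each
    cell (cell x cell metres), return the fraction of points whose
    absolute LOS velocity exceeds vel_thresh (mm/yr).
    Returns a dict {(ix, iy): fraction_fast}."""
    ix = (np.asarray(x) // cell).astype(int)
    iy = (np.asarray(y) // cell).astype(int)
    v = np.abs(np.asarray(vel_mm_yr, dtype=float))
    counts = {}
    for cx, cy, vv in zip(ix, iy, v):
        tot, fast = counts.get((cx, cy), (0, 0))
        counts[(cx, cy)] = (tot + 1, fast + (vv > vel_thresh))
    return {k: fast / tot for k, (tot, fast) in counts.items()}
```

A production system would also weight by measurement quality and distinguish hazard types, but the aggregation principle is the same.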
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: InSAR for Geotechnical Analysis, Applications and Geohazards

Authors: Regula Frauenfelder, Malte Vöge, Georgia Karadimou, Dyre Dammann
Affiliations: Norwegian Geotechnical Institute, Kongsberg Satellite Services
Interferometric Synthetic Aperture Radar (InSAR) has become an indispensable tool for assessing ground stability and geohazards, offering unparalleled spatial coverage. It is a technique that enables observations of ground motion from space with millimeter-scale precision and assessments of ground stability and risk. Inio is a service that highlights the advantages of InSAR for such applications; here we present recent research by the Norwegian Geotechnical Institute (NGI) and its collaborative partnership with Kongsberg Satellite Services (KSAT), aiming to enhance the value of InSAR for geotechnical applications and geohazards. An increasing number of SAR satellites, both public and commercial, provide a constantly growing archive of data for InSAR analyses, which makes it possible to carry out detailed deformation analyses almost anywhere on Earth. With data reaching many years into the past, in some places back to the beginning of the 1990s, this provides a unique opportunity to map the structural integrity of important infrastructure over long periods of time. The large footprint of SAR images enables the tracking of spatial variations in ground movement over many kilometers. NGI has been at the forefront of applying advanced InSAR techniques, such as Small Baseline Subset (SBAS) and Persistent Scatterer (PS) interferometry, to study geohazards and geotechnical monitoring. KSAT, the world’s leading provider of ground network services with a uniquely positioned global ground station network, provides rapid access to all SAR and optical data required for this kind of analysis. The combined research of NGI and KSAT focuses on integrating InSAR data with other geospatial information to improve accuracy and reliability. This multidisciplinary approach not only aids in understanding ground dynamics but also supports the development of effective mitigation measures. 
Transportation, construction, energy, mining and natural hazard management are some of the diverse sectors in which InSAR and this service are applicable. The case examples we have identified as needing further study include the monitoring of infrastructure such as bridges, roads, railroads, tunnels, hydro dams, mine tailings slopes and landslides. All of these can be monitored to detect geohazards such as creep and subsidence, to track ground movements, to evaluate project impacts, and to help monitor risk to operations as well as surrounding areas. Subsiding cities, and geological and natural hazards affecting populations, can be continuously monitored to assess risks to the built environment. Inio uses InSAR to monitor geotechnical conditions and unstable ground associated with construction and infrastructure development, as well as for monitoring throughout the energy sector in support of enhanced oil recovery (EOR) and carbon capture, utilisation and storage (CCUS). By detecting early signs of ground movement, InSAR can enable timely interventions, ensuring the stability and safety of construction projects. This proactive monitoring helps mitigate risks and prevents potential failures, thereby safeguarding investments and enhancing the longevity of operations.
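As background on the SBAS technique named above: a network of short-baseline interferograms, each measuring the displacement difference between two acquisition dates, can be inverted by least squares to recover the per-epoch displacement time series. A minimal, unweighted sketch (real processors add phase unwrapping, weighting, and atmospheric filtering); the example network below is invented:

```python
import numpy as np

def sbas_invert(pairs, ifg_disp, n_epochs):
    """Minimal SBAS-style inversion: recover per-epoch displacement from
    a network of interferogram displacements (epoch j minus epoch i).

    pairs    : list of (i, j) acquisition-index tuples, i < j
    ifg_disp : displacement measured by each interferogram (same units)
    The first epoch is fixed to zero as the reference."""
    A = np.zeros((len(pairs), n_epochs - 1))
    for r, (i, j) in enumerate(pairs):
        if j > 0:
            A[r, j - 1] = 1.0
        if i > 0:
            A[r, i - 1] = -1.0
    ts, *_ = np.linalg.lstsq(A, np.asarray(ifg_disp, float), rcond=None)
    return np.concatenate([[0.0], ts])

# Demo: a consistent 5-interferogram network over 4 epochs.
true_disp = [0.0, -2.0, -5.0, -9.0]
pairs = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]
ifg = [true_disp[j] - true_disp[i] for i, j in pairs]
ts = sbas_invert(pairs, ifg, 4)
```

The redundancy of the network (more interferograms than epochs) is what lets least squares average down noise in real SBAS processing.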
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: PSI and SBAS Based InSAR Processing of Sentinel-1 Time Series for Assessing Surface Velocity Patterns and Precursor Land Subsidence due to Collapse of Underground Cavities in the State of Qatar

Authors: Dr. Charalampos Kontoes, Stavroula Alatza, Martha Kokkalidou, PhD(c) Nikolaos Stasinos, Katerina-Argyri Paroni, Prof Constantinos Loupasakis, Dr Katerina Kavoura, Dimitris Vallianatos, Dorothea Aifantopoulou, Ismat Sabri, Yassir Elhassan, Ali Feraish Al-Salem, Ali Anmashhadi, Elalim Abdelbaqi Ahmed, Umi Salmah Abdul Samad
Affiliations: National Observatory of Athens, Institute for Astronomy and Astrophysics, Space Applications and Remote Sensing, Center BEYOND for EO Research and Satellite Remote Sensing, National Technical University of Athens, School of Mining and Metallurgical Engineering, Laboratory of Engineering Geology and Hydrogeology, EDGE in Earth Observation Sciences, STS Survey Technologies, Ministry of Municipality
Sinkholes constitute the main geohazard affecting geotechnical and infrastructure projects in Qatar. These phenomena are strongly linked to the long-term ground deformation observed at the surface of Qatar due to land subsidence. Most of them are related to the differential dissolution of gypsum interbedded within the lower subsurface geological layers. The mild relief and the land use coverage of the country enable the accurate detection of deforming areas through multi-temporal SAR interferometry methods such as the PSI and SBAS techniques, which are robust methods with mm/year-level accuracy. Deformation phenomena in Qatar, and specifically in the urban periphery of Doha and its surroundings, are detected and monitored through an integrated research approach combining continuous multi-year InSAR processing with field investigations. Using Sentinel-1 data from 2016 to 2024, Line of Sight (LOS) displacements are estimated for the entire State of Qatar, at selected scatterers that have a point-like scattering behavior in time. The deformation histories at the locations of these scatterers are also produced, providing evidence for potential non-linear deformations occurring in the country. Sentinel-1 images of both descending and ascending tracks are employed (more than 850 images in total). Observations from both satellite passes enable the decomposition of LOS displacements into up-down and east-west motion components. The creation of the InSAR stack was performed with the open-source platform ISCE and the PSI analysis with StaMPS. Both software packages are customized to increase processing capacity through parallelization techniques developed at the Center for Earth Observation Research and Satellite Remote Sensing BEYOND of the National Observatory of Athens. All SAR displacement layers will be securely hosted within NOA/BEYOND’s ArcGIS Enterprise installation on our premises, a platform designed for enterprise-level geospatial data management. 
ArcGIS Enterprise ensures scalable, secure, and efficient data storage, processing, and management. By utilizing ArcGIS Enterprise as the hosting platform, users benefit from a secure, robust infrastructure for managing all geospatial data. The ArcGIS REST API provides a versatile and powerful means to access and leverage this data for a wide range of applications, from environmental monitoring to urban planning and beyond. The use of NOA's fully automated, parallelized processing chain for InSAR enabled the processing of large volumes of EO data covering the State of Qatar and provided valuable insights into surface deformation phenomena occurring in the country. Negative LOS displacements were identified in a broader area around Doha. Deforming areas identified by the PSI and SBAS (on selected zones) InSAR analyses of Sentinel-1 data were validated by field investigations. During the field visits, several deforming sites were identified between, as well as at the perimeter of, the Dahl Al Hamam and Dahl Duhail sinkholes, and at several other sites in the wider Doha region. Thus, a detailed analysis of the deforming sites and the LOS displacements identified by both ascending and descending Sentinel-1 satellite passes is presented. The implementation of InSAR techniques on Sentinel-1 data for monitoring surface displacements in Qatar enabled national-scale deformation mapping with millimeter accuracy over an eight-year period from 2016 to 2024. Finally, the field investigations performed in the identified deforming areas provided validation of the observed InSAR deformation phenomena in Qatar and additional information about the deformation driving mechanisms. To mitigate risk and enhance preparedness, continuous monitoring of ground deformation phenomena in Qatar using SAR data, validated by ground-truth investigations, is proposed for more accurate deformation mapping.
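The ascending/descending decomposition mentioned above reduces, under the common simplifying assumptions of negligible north-south motion and a purely east-west look direction, to solving a 2x2 linear system per pixel. A sketch with assumed incidence angles and sign conventions (these vary between processors, so treat them as illustrative, not the study's exact geometry):

```python
import numpy as np

def decompose_los(v_asc, v_desc, inc_asc_deg=39.0, inc_desc_deg=39.0):
    """Decompose ascending/descending LOS velocities into vertical and
    east-west components. Assumed convention: positive LOS = motion toward
    the satellite; ascending looks east, descending looks west."""
    ta, td = np.radians(inc_asc_deg), np.radians(inc_desc_deg)
    # [v_asc ]   [cos ta  -sin ta] [v_up  ]
    # [v_desc] = [cos td   sin td] [v_east]
    A = np.array([[np.cos(ta), -np.sin(ta)],
                  [np.cos(td),  np.sin(td)]])
    v_up, v_east = np.linalg.solve(A, np.array([v_asc, v_desc], float))
    return v_up, v_east

# Round-trip demo: forward-project a known motion, then recover it.
ta = np.radians(39.0)
v_asc = -10.0 * np.cos(ta) - 3.0 * np.sin(ta)   # up = -10, east = +3
v_desc = -10.0 * np.cos(ta) + 3.0 * np.sin(ta)
v_up, v_east = decompose_los(v_asc, v_desc)
```

Because both geometries are nearly blind to north-south motion, that component is usually left unresolved, as implied by the up-down/east-west products described in the abstract.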
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Open-Access Global Ground Deformation Dataset for Tectonic High-Strain Zones Based on Sentinel-1 Interferometry

Authors: Giorgio Gomba
Affiliations: German Aerospace Center (DLR)
The SAR4Tectonics project will deliver openly accessible, global measurements of ground deformation in tectonically active high-strain zones, providing geoscientists with ready-to-use velocity maps and time series. This reduces the need for researchers to process SAR data themselves, allowing them to focus on deformation analysis. Ground deformation in high-strain areas is important for understanding geological processes in tectonically active regions. These regions, situated near tectonic plate boundaries, are characterized by significant ground deformation and elevated seismic activity, making them critical for geoscientific research and seismic risk assessment. Traditional ground-based methods like GNSS provide limited spatial coverage, often leaving data gaps. InSAR techniques, especially PS/DS analysis, overcome this with millimeter-scale displacement measurements across large areas and high temporal resolution. In the project, we processed 6.5 years of Sentinel-1 SAR data, focusing on areas where the second invariant of the strain rate exceeds 3 nanostrain per year. Using the terrabyte high-performance data analytics platform (a collaboration between the German Aerospace Center DLR and the Leibniz Supercomputing Centre LRZ), we applied the PS/DS technique with the IWAP processor to produce high-accuracy results. Error corrections included ionospheric mitigation via CODE total electron content maps, tropospheric delay correction using ECMWF reanalysis, and solid Earth tide modeling. Vegetation and soil moisture impacts are minimized through a full covariance matrix approach, and GNSS data ensured precise calibration. The SAR data processing is complete, and we are finalizing the publication of the results as an open-access dataset, aiming to make comprehensive ground deformation data readily accessible for scientific discovery and practical applications. 
By providing globally consistent, high-quality deformation products as open-access resources, this initiative aims to reduce the burden of SAR data processing for geoscientists, enabling them to focus on analyzing Earth's dynamic processes. Additionally, it provides a baseline reference for future studies.
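The selection criterion above (second invariant of the strain rate exceeding 3 nanostrain per year) can be computed per grid cell from the horizontal strain-rate components. A small sketch assuming the commonly used definition I2 = sqrt(exx^2 + eyy^2 + 2*exy^2); the exact convention used by the project is not stated here, so this is illustrative:

```python
import numpy as np

def second_invariant(exx, eyy, exy):
    """Second invariant of the 2D horizontal strain-rate tensor
    (inputs and output in nanostrain per year), assuming
    I2 = sqrt(exx^2 + eyy^2 + 2*exy^2)."""
    return np.sqrt(np.square(exx) + np.square(eyy) + 2.0 * np.square(exy))

def high_strain_mask(exx, eyy, exy, thresh=3.0):
    """Boolean mask of cells exceeding the processing threshold."""
    return second_invariant(exx, eyy, exy) > thresh
```

Applied to a gridded geodetic strain-rate model, the mask would delineate the high-strain zones the project targets for processing.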
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Detecting Sinkholes and Land Surface Movements in Post-Mining Regions Utilizing Multi-Source Remote Sensing Data

Authors: Sebastian Walczak, PhD Wojciech Witkowski, Dr. Eng. Tomasz Stoch, Artur Guzy
Affiliations: AGH University Of Krakow
Post-mining regions are particularly prone to unpredictable geological hazards such as land surface movement and sinkhole formation associated with groundwater rebound, even long after mining activities have ceased. Given the rising number of underground mine closures across many European countries, this issue is highly significant, as both land surface movements and sinkholes pose a direct threat to the safety of infrastructure and residents in these vulnerable areas. To address this concern, remote sensing techniques such as Interferometric Synthetic Aperture Radar (InSAR) and Airborne Laser Scanning (ALS) have become increasingly important tools for monitoring these areas, particularly with the growing availability of open-source datasets. In this study, we investigated the potential of integrating open-source European Ground Motion Service (EGMS) InSAR data with ALS data obtained from the Polish National Geoportal to monitor land surface movements and detect sinkholes. We also validated the reliability of these datasets by comparing the results with precise geodetic levelling measurements and in-situ observations of sinkhole occurrences. Our study was conducted at the underground hard coal mine “Siersza”, located in the city of Trzebinia, Poland. The mine was closed in 2001 after several decades of mining that resulted in cumulative land subsidence of several meters. Following the closure of the mine, groundwater rebound, land uplift, and sinkholes have been observed. Due to data availability, this study covered the period from 2019 to 2022. The research highlights the complementarity of open-source EGMS InSAR and ALS datasets in monitoring land surface movements and sinkhole occurrences. In the study area, both subsidence and uplift were observed, with values ranging from -7 mm to +15 mm per year. 
The EGMS InSAR data demonstrated a strong correlation with precise levelling, indicating high reliability for monitoring land surface movements. In contrast, the low vertical accuracy of the ALS data, approximately +/- 18 cm, resulted in discrepancies when compared with both precise levelling and EGMS InSAR. For sinkhole detection, we applied several data processing algorithms to the ALS data, including M3C2, low-pass filtering, and raster differencing, with the last approach yielding the most reliable results. ALS enabled precise determination of the centre, diameter, and depth of each sinkhole, which was not possible with in-situ observation alone. However, ALS detected only about 59% of the sinkholes identified through field surveys. Given the uncertain timing of the field-survey records and the possibility that sinkholes were backfilled during the study period, a 100% detection rate was not achievable. Our analysis revealed that EGMS InSAR-retrieved land surface movements close to sinkholes display a higher standard deviation, suggesting greater variability in land surface movement within approximately 400 meters of the sinkholes. Interestingly, land surface movement was less pronounced near sinkholes and increased with distance from them. Specifically, given that the study area is currently dominated by an uplift trend, areas closer to sinkholes might experience smaller uplift due to the overlapping effect of local subsidence associated with the forming sinkholes. In general, EGMS InSAR can be effectively used for monitoring large-scale land surface movements of relatively small magnitudes in post-mining environments. However, it is not efficient for inventorying the spatial extent of sinkholes. On the other hand, the findings emphasise the need for higher ALS vertical accuracy when monitoring small-scale land surface movements. 
Despite this limitation, ALS proves effective in capturing larger land surface changes related to sinkhole formation. Our study utilized multiple remote sensing datasets to improve understanding of ongoing land surface movements in post-mining areas prone to geological hazards. These insights have valuable implications for future research and practical applications in engineering and risk management. Keywords: land surface movement, InSAR, ALS, sinkhole, post-mining regions
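The raster-differencing approach that performed best above can be sketched as subtracting two gridded elevation models and keeping connected groups of pixels that dropped by more than a depth threshold. A simplified illustration; the thresholds, pixel connectivity, and minimum size are assumptions, not the authors' exact workflow:

```python
import numpy as np

def detect_sinkholes(dem_before, dem_after, depth_thresh=0.5, min_pixels=4):
    """Detect candidate sinkholes by raster differencing two DEMs.

    Pixels that dropped more than depth_thresh metres are flagged and
    grouped into 4-connected components; components smaller than
    min_pixels are discarded as noise. Returns a list of pixel lists."""
    drop = dem_before - dem_after
    mask = drop > depth_thresh
    visited = np.zeros(mask.shape, bool)
    comps = []
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and not visited[i, j]:
                stack, pix = [(i, j)], []
                visited[i, j] = True
                while stack:  # flood fill over 4-connected neighbours
                    a, b = stack.pop()
                    pix.append((a, b))
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if (0 <= na < mask.shape[0] and 0 <= nb < mask.shape[1]
                                and mask[na, nb] and not visited[na, nb]):
                            visited[na, nb] = True
                            stack.append((na, nb))
                if len(pix) >= min_pixels:
                    comps.append(pix)
    return comps
```

With real ALS data, the +/- 18 cm vertical accuracy noted above would dictate a depth threshold well above the noise floor.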
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Understanding the Complexity of Large Alpine Slope Instabilities at Mt. Mater (Valle Spluga, Italy) Using Multiplatform and Multifrequency InSAR

Authors: Federico Agliardi, Tazio Strozzi, Cristina Reyes-Carmona, Katy Burrows, Rafael Caduff, Othmar Frey, Philipp Bernhard, Urs Wegmüller, Andrea Manconi, Christian Ambrosi, Alessandro De Pedrini
Affiliations: University of Milano-Bicocca, Department of Earth and Environmental Sciences, Gamma Remote Sensing, Swiss Federal Institute for Forest, Snow and Landscape Research, University of Applied Sciences and Arts of Southern Switzerland
Large rock slope instabilities are widespread in alpine environments. They influence the long-term topographic and hydrological characteristics of alpine slopes and threaten lives, settlements and infrastructure. These phenomena are characterised by different mechanisms associated in space and time, resulting in heterogeneous displacement patterns, with nested sectors characterized by differential movements and shallow fast instabilities superimposed on deep slow movements. These landslides usually creep over hundreds or thousands of years and can eventually undergo a “slow to fast” transition towards catastrophic collapse. Recognizing this transition is essential for developing the capacity to deal with the related risks. Satellite SAR interferometry has become a major tool for mapping and monitoring surface deformations associated with large landslides, thanks to the extensive and temporally continuous coverage provided by the Sentinel-1 missions. However, the application of C-band InSAR is limited in areas with vegetation, significant atmospheric disturbances typical of alpine environments, and relatively high displacement rates. The interpretation of C-band products is further complicated by the heterogeneity of the displacement patterns and rates of large landslides. A sound characterization of their kinematics and activity thus requires integrating multi-platform and multi-frequency InSAR data, ground-based monitoring data, and strong field geomorphological constraints. In this perspective, a significant contribution is expected from upcoming L-band SAR missions. A systematic assessment of the potential of L-band data in large landslide studies is thus required to prepare future end-users for real-world applications. In the framework of the ESA MODULATE project (MOnitoring lanDslides with mUltiplatform L-Band rAdar Techniques), we studied the Mt. Mater rock slope instability in Valle Spluga (Lombardia, Italy). 
It affects a 1300 m high slope over an area of 3 km2, threatening the Madesimo village and ski resort. The slope instability was first recognised in early PS-InSAR datasets and has been monitored since 2011 by ARPA Lombardia through periodical Ku-band GB-InSAR and GNSS measurements. Field geomorphological investigations and C-band InSAR products at different temporal baselines (24 days to 1 year; Crippa et al., 2020) allowed identifying the processes underlying the measured movements and providing a conceptual model for their interpretation. An active deep-seated gravitational slope deformation (DSGSD) affects the entire slope, with a translational global kinematics and displacement rates <3 cm/yr. The DSGSD hosts two nested large landslides with compound movements and seasonally variable rates of 3-6 cm/yr. In the upper part, scree and periglacial deposits move at faster rates exceeding 10 cm/yr. The spatial heterogeneity and wide range of displacement rates at Mt. Mater limit the capability of Sentinel-1 data to: a) identify differential displacement in vegetated or poorly coherent areas; b) reconstruct temporal trends associated with a potential “slow to fast” evolution. We therefore processed SAR images provided by spaceborne L-band sensors (ALOS-2 PALSAR-2, 2015-2023; and SAOCOM, 2021-2024) and the carborne GAMMA L-band SAR instrument (October 2024) to obtain ad hoc DInSAR and PSI (SBAS) products. We systematically compared the L-band products (both phase and displacements) to the datasets derived from Sentinel-1 (DInSAR, 2016-2024; PSI, 2015-2020) and Ku-band GB-InSAR (2011-2024), constrained by GNSS and field data. Our L-band products yielded highly coherent interferograms over temporal baselines of up to three years. This allowed us to obtain extremely dense PSI datasets, providing an unprecedented picture of the boundaries and displacement rates of the faster nested sectors, even in vegetated areas at the slope toe. 
While confirming the interpretation of the landslide kinematics and activity provided by previous studies, new L-band data, systematically processed over the last ten years, could outline non-linear displacement trends relevant to hazard assessment. Finally, the availability of multi-LOS (spaceborne and terrestrial) L-band data supports the decomposition of displacement vectors toward a more effective assessment of 3D kinematics.
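The better coherence of L-band over long temporal baselines, central to the comparison above, is often described with a simple exponential temporal-decorrelation model, gamma(t) = (gamma0 - gamma_inf) * exp(-t / tau) + gamma_inf, where the decay constant tau is larger at longer wavelengths. The parameter values below are purely illustrative assumptions, not values fitted to the Mt. Mater data:

```python
import numpy as np

def temporal_coherence(t_days, gamma0=0.9, gamma_inf=0.1, tau_days=50.0):
    """Exponential temporal-decorrelation model (illustrative values):
    gamma(t) = (gamma0 - gamma_inf) * exp(-t / tau) + gamma_inf."""
    t = np.asarray(t_days, float)
    return (gamma0 - gamma_inf) * np.exp(-t / tau_days) + gamma_inf

# Illustrative comparison at a 3-year temporal baseline, with an assumed
# much longer decay constant for L-band than for C-band:
c_band = temporal_coherence(3 * 365, tau_days=40.0)
l_band = temporal_coherence(3 * 365, tau_days=400.0)
```

Under such a model, L-band retains usable coherence at multi-year baselines where C-band has already decayed to its noise floor, consistent with the three-year coherent interferograms reported above.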
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Advancing Ground Motion Monitoring with the IRIDE Nimbus Constellation: Development of Ground Motion Service Segment domain.

Authors: Enrico Ciraci, Francesco Valente, Vincenzo Massimi, Emanuela Valerio, Emanuele Passera
Affiliations: e-Geos S.P.A., TRE ALTAMIRA S.R.L., Planetek Italia S.R.L., NHAZCA S.R.L.
The IRIDE Program, a collaboration between the Italian Government, the European Space Agency, and the Italian Space Agency, represents a transformative effort to advance Earth Observation through innovative upstream, downstream, and service segment development. This presentation focuses on the Ground Motion domain within the IRIDE Service Segment, designed to leverage radar observations from current missions (e.g., Sentinel-1, COSMO-SkyMed, SAOCOM) and the forthcoming IRIDE NIMBUS constellation. This integration will enable national-scale ground motion monitoring with unprecedented spatial and temporal resolution. The Ground Motion Service Segment addresses critical challenges in ground deformation analysis, including monitoring infrastructure stability, landslides, and subsidence. High-precision, high-frequency data products are being developed to enhance geospatial intelligence for public safety, urban planning, and environmental management. In this context, we present the innovative products that will be delivered within the program to support the analysis of ground motion phenomena. These include, for example, a novel algorithm for mapping areas of active deformation using multi-temporal interferometric synthetic aperture radar (InSAR) data, leveraging a density-based clustering approach to automatically identify regions exhibiting significant displacement trends and consistent temporal variations. We will present the progress in developing this service, highlighting critical innovations in radar data processing, integration, and scalability. These include workflows optimized for IRIDE NIMBUS SAR payloads and advanced analytical tools designed to deliver actionable insights to end-users, including automated, big-data-oriented techniques to handle large-scale InSAR datasets, demonstrating a critical step toward developing nationwide monitoring systems for detecting and analyzing ongoing surface deformation. 
The IRIDE Program underscores a commitment to harnessing cutting-edge technology for societal benefit. By advancing the Ground Motion domain, IRIDE demonstrates its potential to revolutionize Earth Observation, fostering resilient and sustainable communities worldwide.
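The density-based clustering idea described above, for grouping measurement points into active deformation areas, can be sketched as: keep points whose velocity magnitude exceeds a significance threshold, then link points closer than a search radius and retain groups above a minimum size. This is a simplified single-linkage stand-in for a true density-based algorithm such as DBSCAN, and all thresholds are invented for the example:

```python
import numpy as np

def cluster_active_points(xy, vel, vel_thresh=3.0, eps=150.0, min_pts=3):
    """Group points with |velocity| > vel_thresh (mm/yr) into clusters by
    linking any two kept points closer than eps metres (union-find);
    groups with at least min_pts points are returned as index lists."""
    xy = np.asarray(xy, float)
    keep = np.where(np.abs(np.asarray(vel, float)) > vel_thresh)[0]
    parent = {i: i for i in keep}

    def find(a):  # union-find root with path compression
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    for ii, i in enumerate(keep):
        for j in keep[ii + 1:]:
            if np.hypot(*(xy[i] - xy[j])) <= eps:
                parent[find(i)] = find(j)

    groups = {}
    for i in keep:
        groups.setdefault(find(i), []).append(i)
    return [sorted(g) for g in groups.values() if len(g) >= min_pts]
```

A production system would add the core/border distinction of DBSCAN and check temporal consistency of the time series, as the abstract describes, before declaring an area of active deformation.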
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: RAINFALL, ANTHROPOGENIC ACTIVITY OR THE CHAMOLI FLOOD? WHAT TRIGGERED THE REACTIVATION OF THE JOSHIMATH SLOPE (UTTARAKHAND, INDIA): INSIGHTS FROM MULTI-SENSOR SATELLITE OBSERVATIONS

Authors: Floriane Provost, Bryan Raimbault, Pascal Lacroix, Simon Gascoin, Bastien Wirtz, Kristen Cook, Michael Foumelis, Jean-Philippe Malet
Affiliations: EOST - École et Observatoire des Sciences de la Terre, CNRS / Université de Strasbourg, ITES - Institut Terre et Environnement de Strasbourg, CNRS / Université de Strasbourg, Institut des Sciences de la Terre, Université Grenoble Alpes, Université Savoie Mont Blanc, CNRS, IRD, Université Gustave Eiffel, France, Centre d’Etudes Spatiales de la Biosphère, Université Toulouse 3, CNES, CNRS, IRD, INRAE, France, Department of Physical and Environmental Geography, School of Geology, Aristotle University of Thessaloniki
It has long been recognized that river erosion at the toe of hillslopes can cause slope failures, triggering new or reactivating existing landslides. The impact of extreme flooding on slope stability has been studied only for specific case studies during the events themselves [1,2], while the impact of flood-driven longer-term lateral erosion on slope stability is rarely considered [3,4]. This question is nevertheless important for our understanding of the impacts of flood events, as flood-driven erosion and slope failures are generally not considered in analyses of flood hazards, leaving potentially vulnerable populations unaware of their risk. Recently, in the Joshimath area of Northern India, a landslide connected to the Rishiganga River has attracted a lot of attention [5,6]. Indeed, the slope showed major signs of instability in early 2022, about a year after an extreme flood event occurred in the region following the Chamoli rock and ice avalanche in February 2021 [7]. The triggering mechanisms invoked to explain this reactivation are precipitation rates [3] and/or anthropogenic activities [4]. However, the role of the Chamoli rock and ice avalanche in this reactivation has not been investigated. In this study, we performed a regional analysis of the slope movements along, and in the vicinity of, the Rishiganga River in order to investigate whether or not the Chamoli flood had an impact on the landslides located on the banks of the river. We processed multi-sensor satellite image time series (Sentinel-1, Sentinel-2) using SAR interferometry (InSAR) and offset tracking techniques to measure both slow and fast displacement rates. Further, an archive of Pléiades images was used to construct time series of Digital Surface Models (DSMs) and estimate the erosion and deposition rates before and after the Chamoli 2021 avalanche. In total, we detected about 20 active landslides along, and in the vicinity of, the Rishiganga River.
Some of them are located on the banks of the river, others in neighboring catchments that were not affected by the Chamoli flooding. For each active landslide, we analyzed the displacement time series and detected the occurrence and date of an acceleration onset using Principal Component Analysis (PCA). First, we show that the majority of the landslides are characterised by a constant velocity from 2016 to 2024 with no significant acceleration. In the Joshimath slope and neighbouring eastern slopes, we detect transient accelerations in fall 2021 and in winter 2022-2023. Pléiades time series confirm the onset of these accelerations and show that their magnitudes are even larger (> 1 m/yr) at the toe of these slopes. These results likely indicate that the landslides directly connected to the Rishiganga River were reactivated after 2021, while the other landslides in the region did not undergo significant reactivation. However, we also detect several reactivations of fast-moving landslides (> 1 m/yr) in the Semkora Nala catchment, south of the Joshimath town, in 2021. This catchment is not connected to the Rishiganga River and was not impacted by the Chamoli flood. We discuss the possible mechanisms (e.g. water seepage at the toe, debuttressing due to river-bank erosion) that led to the Joshimath slope reactivation. We show that regional studies with multi-sensor and multi-processing approaches are key to capturing the complete pattern of ground motion in mountainous areas.
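The PCA-based detection of an acceleration onset can be sketched on synthetic data. The time series, noise level, and the two-segment breakpoint search below are illustrative assumptions, not the study's actual data or implementation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic LOS series for 10 points on one slope: constant creep until
# day 400, then a common acceleration (illustrative values only).
t = np.arange(0.0, 800.0, 12.0)                      # 12-day sampling (days)
onset = 400.0
signal = -0.005 * t + np.where(t > onset, -0.03 * (t - onset), 0.0)  # mm
series = signal + rng.normal(0.0, 0.5, (10, t.size))

# PCA via SVD of the mean-centred stack: the first temporal component
# captures the behaviour common to all points; its kink dates the onset.
X = series - series.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
pc1 = Vt[0] * s[0]

def onset_date(t, y, margin=5):
    """Date the kink as the breakpoint minimising two-segment linear misfit."""
    errors = []
    for k in range(margin, len(t) - margin):
        e1 = np.polyfit(t[:k], y[:k], 1, full=True)[1][0]
        e2 = np.polyfit(t[k:], y[k:], 1, full=True)[1][0]
        errors.append(e1 + e2)
    return t[margin + int(np.argmin(errors))]

print(onset_date(t, pc1))   # close to the true onset at day 400
```

Because the first component averages over all points on the slope, the onset estimate is less sensitive to the noise of any individual time series.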
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: InSAR.Hungary: the Hungarian InSAR Ground Motion Service and Application

Authors: Bálint Magyar, Ambrus Kenyeres, István Hajdu
Affiliations: Lechner Nonprofit Ltd. - Satellite Geodetic Observatory, Budapest University of Technology, Faculty of Civil Engineering, Department of Geodesy and Surveying
Numerous nationwide, and even continental-scale, InSAR ground-motion monitoring services have become operational and been published online thanks to advances in wide-area InSAR processing (WAP) techniques. In line with these developments, we present InSAR.Hungary, the Hungarian InSAR Ground Motion Service and Application developed at the LTK Satellite Geodetic Observatory (SGO). InSAR.Hungary provides a nationwide ground motion solution and is published as an interactive web-based tool serving its clients. The product levels of the service are harmonized with the European Ground Motion Service (EGMS), complementing it with a country-specific and focused solution. We highlight here, in depth, the mitigation of the atmospheric phase screen (APS) and the strategy of the applied phase unwrapping techniques, as well as demonstrate the characteristics of the specific production workflows. Moreover, we present the client-oriented features and tools of the service and discuss the different service-related applications.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: DInSAR Time Series Uncertainty Quantification

Authors: Alessandro Parizzi, Alfio Fumagalli, Alessio Rucci
Affiliations: TRE-Altamira
DInSAR time series have become a reliable tool for monitoring ground deformation over large areas without the need for in-situ instrumentation. While operational service providers leverage extensive SAR constellations to derive deformation time series, the underlying processing is complex and often lacks rigorous uncertainty quantification. Traditional quality indicators, such as coherence and RMSE, provide limited insight into the accuracy of individual time series samples. Two main drawbacks are evident:
• It is not possible to obtain information on the accuracy of a single sample.
• The indicators rely on the local noisiness of the sample to assess its quality.
Exploiting the local noisiness as a proxy is indeed correct but in general incomplete, since it does not account for the systematic errors introduced into the data, for example by the filtering of the atmospheric components. This work aims to address these limitations by developing an error propagation model that considers various noise sources, including clutter, atmospheric effects, and topographic phase compensation, with the final goals to:
• improve the control of the product’s final performance;
• provide add-on information that facilitates the interpretation of the measurements by the final user.
For the clutter noise, both point targets (PS) and distributed targets (DS) are considered and treated accordingly, using the relation between amplitude and phase variances for the former and bootstrapping techniques for the latter. Since the final time series need to be corrected for the topographic component of the interferometric phase, the accuracy of this correction is computed and accounted for. A critical aspect of the model is the effect related to the filtering of the additive atmospheric delay from the interferometric phase. The impact of this filtering on the uncertainties has been analytically derived and used for the computation of the final error.
This component is important because it introduces a highly covariant error component that cannot be observed in the time series noisiness. Its prominence depends on the size of the observed AOI, since the effect of the atmospheric delay increases moving away from the spatial reference. This means that for small AOIs clutter/decorrelation effects are dominant, but on large deformation sites the atmosphere becomes the main performance driver in the error budget. This aspect is of particular importance considering the increasing demand for nationwide DInSAR analyses. The framework has finally been extended to the 2D decomposed time series retrieved from both ascending and descending data. To achieve this, the error sources are tracked through the processing steps that resample the data onto the same spatial and temporal grid, ultimately projecting them onto the horizontal and vertical directions relative to the Earth's surface. The approach has been implemented and tested on a large set of different test sites, both for the single LoS (line of sight) time series and for the 2D ones. The results look promising, highlighting the presence of noisy images as well as the drift generated by the atmospheric filtering uncertainty. Future validation against independent measurements (e.g., levelling, GNSS) will further solidify the approach.
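The projection of ascending and descending LoS measurements onto vertical and east-west components, with first-order error propagation, can be sketched as follows. The incidence angles, per-LoS variances, and the simplified geometry (north-south motion neglected) are illustrative assumptions, not the authors' actual model:

```python
import numpy as np

# Illustrative incidence angles for the ascending and descending geometries.
theta_asc, theta_desc = np.radians(39.0), np.radians(34.0)

# Simplified design matrix (near-polar orbits, north-south motion neglected):
#   ascending LoS  ~ cos(theta)*d_up - sin(theta)*d_east
#   descending LoS ~ cos(theta)*d_up + sin(theta)*d_east
A = np.array([[np.cos(theta_asc), -np.sin(theta_asc)],
              [np.cos(theta_desc), np.sin(theta_desc)]])

d_true = np.array([-8.0, 3.0])            # synthetic [d_up, d_east] in mm
d_los = A @ d_true                        # what the two geometries observe

# Weighted least-squares inversion with first-order error propagation:
# Cov(x) = (A^T W A)^-1, where W holds the inverse LoS variances.
sigma_los = np.array([1.5, 1.5])          # assumed per-LoS std devs (mm)
W = np.diag(1.0 / sigma_los**2)
cov = np.linalg.inv(A.T @ W @ A)
x = cov @ A.T @ W @ d_los

print(x)                                  # recovers [-8.0, 3.0]
print(np.sqrt(np.diag(cov)))              # propagated 1-sigma uncertainties
```

The same covariance propagation extends to full time series by applying it per epoch after resampling both geometries onto a common grid.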
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Landslides detection through remote sensing and ground truth investigations in Cyprus.

Authors: Dr Stavroula Alatza, Constantinos Loupasakis, Kyriaki Fotiou, Alexis Apostolakis, Marios Tzouvaras, Kyriacos Themistocleous, Charalampos Kontoes, Chris Danezis, Diofantos G. Hadjimitsis
Affiliations: National Observatory of Athens, Operational Unit BEYOND Centre for Earth Observation Research and Satellite Remote Sensing IAASARS/NOA, Laboratory of Engineering Geology and Hydrogeology, School of Mining and Metallurgical Engineering, National Technical University of Athens, ERATOSTHENES Centre of Excellence, Department of Civil Engineering and Geomatics, University of Technology
Cyprus is located at the boundary zone of the African, Eurasian, and Arabian tectonic plates and is subject to significant geological activity. This unique geotectonic setting, combined with its mountainous terrain and climatic conditions, makes the island susceptible to various geohazards, including landslides. These events pose serious threats to human lives and infrastructure, especially in regions with steep slopes and unstable geological formations. In the center of Cyprus, Troodos Mountain is the most prominent geological feature, and rockfalls and slides commonly occur in the Troodos Mountains. To investigate landslide phenomena in Cyprus, and specifically in the broader region around the Troodos Mountains, InSAR time-series analysis was performed with the use of Sentinel-1 images from 2016 to 2021. Persistent Scatterer Interferometry (PSI) was implemented on Sentinel-1 images of the ascending satellite pass. InSAR processing was performed with the fully automated parallelized processing chain of NOA, the so-called P-PSI, which enabled the processing of large volumes of EO data. Negative Line of Sight displacements due to landslide activity are detected in Pedoulas village, with a maximum value of -10 mm/yr. To further analyze surface displacements in the area, vertical displacements were estimated. The observed SAR deformation was validated by ground-truth investigations. The ground-truth inspections verified the remote-sensing-derived deformation phenomena in Pedoulas village and provided insights into the driving mechanism, namely extreme precipitation events. By analyzing ERA-5 precipitation data along with the time series of vertical displacements, a correlation between extreme precipitation events and shifts in deformation trends is identified. Landslide movements were observed to accelerate during spring and summer, while the phenomena continue at a regular rate during winter.
The present study demonstrated the efficiency of the applied multidisciplinary methodology for the investigation of landslide phenomena in Cyprus. The use of remote sensing techniques on Sentinel-1 data enables the identification of landslides in affected areas, while ground-truth inspections provide valuable insights into the driving mechanisms of landslide phenomena. The proposed strategy establishes a strong foundation for risk mitigation in geologically active regions such as Cyprus.
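The correlation between precipitation and shifts in the deformation trend can be illustrated with a minimal sketch on synthetic monthly data; the rainfall statistics and displacement response below are assumptions, not the ERA-5 or InSAR series used in the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic monthly precipitation (mm) and vertical displacement increments
# (mm): background creep plus a stronger response in extreme-rain months.
months = 60
precip = rng.gamma(2.0, 30.0, months)
extreme = precip > np.percentile(precip, 90)          # top-decile months
increments = -0.5 - 2.0 * extreme + rng.normal(0.0, 0.1, months)
displacement = np.cumsum(increments)                  # the time series itself

# Correlate monthly rainfall with the monthly displacement increments
# (i.e. with shifts in the deformation trend, not the cumulative series).
r = np.corrcoef(precip, increments)[0, 1]
print(f"Pearson r = {r:.2f}")   # strongly negative: wetter months move faster
```

Correlating rainfall with increments rather than with the cumulative displacement avoids the spurious correlation that any trending series shows against almost anything.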
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Earth Observation for Subsurface Risk Mitigation: InSAR Diagnostics of Wellbore Failures in the Permian Basin.

Authors: Mark Bemelmans, Tobiasz Bator, Charlie Waltman, Pieter Bas Leezenberg
Affiliations: SkyGeo
Salt Water Disposal (SWD) is known to cause human-induced pore pressure increases in the subsurface. Improper management of the subsurface pressure carries an intrinsic risk of injection fluids, oil, and saltwater leaking out of the injection zone and, in some cases, onto the surface. This poses immediate problems for operating sustainability. We use Interferometric Synthetic Aperture Radar (InSAR) to detect the surface footprint associated with (or displacement caused by) these human-induced pressure changes with millimeter precision. The Permian Basin has experienced several water-to-surface events as a result of these dynamic pressure changes. Lake Boehmer, for example, was formed by the uncontrolled spilling of water from an abandoned well. In recent years, there have been two well blowouts in the Permian Basin, the Crane County geyser and the Toyah geyser, as well as a well leak near Barstow. The Crane County geyser occurred in Crane County, Texas, where saltwater emerged from an old well in December of 2021. This blowout was preceded by surface uplift to the north of the well, travelling south for several months before finding a weak spot (i.e. the borehole) through which to burst out at the surface. We attribute this blowout to the increase in subsurface pressure as a result of SWD several kilometers to the north. Following this blowout, the well was shut in on January 29, 2022. However, immediately after the leaking Crane County wellbore was shut in, the feeder channel started building up pressure, leading to further uplift in the area surrounding the borehole. In December 2023, due to the pressure build-up, a crevice formed in the vicinity of the Crane County well and started leaking salt water. This leakage caused a pressure drop and subsidence. Since January 2024 the subsidence has levelled off, and continued monitoring should reveal whether this development indicates an equilibrium or another pressure build-up.
The Toyah geyser occurred on October 2, 2024, from a dry production well drilled in 1961, close to Toyah in Reeves County. This old well extends to 11000 feet but is not cased beyond 3974 feet. The Delaware Mountain Group Formation, used for shallow SWD in the Permian Basin, extends from 3800 feet to 6500 feet deep at this location. This blowout is not associated with a precursory surface displacement signal observable with InSAR. Therefore, we suggest that the Toyah geyser was not caused by a build-up of pressure due to SWD in the area, but was instead the result of a failure of the well between 3800 and 6500 feet, the depth range of the formation used for SWD. So, unlike the Crane County geyser, shutting in and cementing the Toyah geyser well will be an effective long-term solution to stop the leakage in this area. The leak in Barstow occurred on September 2, 2024, and, like the Toyah geyser, was not preceded by InSAR-observable surface uplift. Instead, the area surrounding the leak had been subsiding at 2.5 cm/yr. Following the leak, the subsidence rate increased to 20 cm/yr. This subsidence is likely caused by the drop in pressure during the leak. We suggest that the Barstow leak was caused by a material failure at depth, resulting in salt water from the Delaware Mountain Group reaching the surface. We are monitoring the subsidence for a reduction in the subsidence rate and a return to equilibrium. Our targeted InSAR analysis of these three events proved effective in gaining insight into the underlying mechanisms responsible for the salt-water geysers in the Permian Basin. This is essential information for formulating mitigation strategies and promoting sustainable operating practices in the Permian Basin. Through careful monitoring of all wells in this area, we help manage operational risks by issuing warnings and determining whether a build-up of subsurface pressure is responsible for potential future leaks and blowouts.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Identifying Triggered/Accelerated Deformation Areas from Early 2023 Extreme Weather Events in Auckland (NZ) using InSAR Advanced Analytics

Authors: Sebastián Amherdt, Miquel Camafort, Núria Devanthéry, David Albiol, Blanca Payas, Eric Audigé, Ross Roberts
Affiliations: Sixense Satellite, Sixense Oceania, Auckland Council
In early 2023, Auckland (New Zealand) was struck by two extraordinary weather events that caused billions of dollars in damage and claimed multiple lives. The first event, on January 27, brought widespread flooding and numerous landslides, marking it as New Zealand’s costliest non-earthquake disaster. Just two weeks later, Cyclone Gabrielle struck on February 14, breaking that record with even greater devastation. To support Auckland Council in identifying slope movements potentially triggered/accelerated by these catastrophic events, an InSAR analysis was conducted, combined with advanced data analytics, across the greater Auckland region. The InSAR processing covered the period from May 2022 to June 2023, utilizing mid-resolution SAR images acquired by the Sentinel-1 satellite (C-band). A total of 36 ascending and 35 descending images were analyzed, enabling a decomposition to extract the true vertical (up-down) and horizontal (east-west) motion components. This analysis produced over 1M measurement points in both Line-of-Sight (LOS) datasets and more than 800k points in the decomposed results. Advanced analytics were then applied to the decomposed datasets to detect clusters of deformation acceleration associated with the extreme rainfall events. The methodology consisted of two steps. First, a time-series segmentation and linear regression analysis were performed to identify points exhibiting acceleration after the rainfall events. Then, an active deformation areas (ADA) algorithm was applied to group points with similar deformation patterns and spatial proximity. This analysis identified slope deformations accelerated by the heavy rainfalls, some of which had previously gone undetected. This work will show the methodology used to identify areas of accelerated/triggered deformation following Auckland’s extreme weather events of early 2023.
Examples of the results obtained will be discussed in detail, emphasizing their implications for hazard assessment and risk mitigation in similar contexts. Additionally, insights from this study provide valuable contributions to understanding the impact of extreme weather events on slope stability and offer a framework for future monitoring and analysis efforts.
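The first step of the methodology, time-series segmentation with linear regression to flag post-event acceleration, can be sketched as follows; the sampling, event date, and flagging threshold are illustrative assumptions:

```python
import numpy as np

def velocity_change(t, d, t_event):
    """Fit separate linear trends before and after t_event (time-series
    segmentation + linear regression); return the change in velocity."""
    pre, post = t < t_event, t >= t_event
    v_pre = np.polyfit(t[pre], d[pre], 1)[0]
    v_post = np.polyfit(t[post], d[post], 1)[0]
    return v_post - v_pre

# Synthetic vertical displacement: slow creep that accelerates at day 250
# (e.g. after an extreme rainfall event).
t = np.arange(0.0, 400.0, 6.0)                       # 6-day revisit (days)
d = np.where(t < 250, -0.01 * t,
             -0.01 * 250 - 0.08 * (t - 250))         # mm

dv = velocity_change(t, d, 250.0)                    # mm/day
print(dv * 365.0)                                    # velocity change in mm/yr
flagged = dv * 365.0 < -5.0                          # e.g. >5 mm/yr speed-up
```

Points flagged this way would then be passed to the spatial clustering (ADA) step, so that isolated noisy points are discarded and only coherent accelerating areas survive.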
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Austrian ground motion service - just a copy of EGMS?

Authors: Karlheinz Gutjahr
Affiliations: Joanneum Research
Since 2014, the European Copernicus programme has launched a wide range of Earth Observation (EO) satellites, named Sentinels, designed to monitor and forecast the state of the environment on land, at sea and in the atmosphere. The ever-increasing amount of acquired data makes Copernicus the largest EO data provider and the third biggest data provider in the world. Experts have already shown the data’s potential in several new or improved applications and products. Still, challenges remain in reaching end users with these applications and products, i.e. challenges associated with distributing, managing, and using them in users’ respective operational contexts. In order to mainstream the use of Copernicus data and information services for public administration, the nationally funded project INTERFACE was set up in autumn 2022. Since then, the project consortium has been focussing on user-centric interfaces and data standards, with special attention to integrating different data sets and setting up a prototype system that allows the systematic generation of higher-level information products. One information layer within INTERFACE is the so-called Austrian ground motion service, which is an interface to the SuLaMoSA prototype workflow and to the data provided by the European Ground Motion Service (EGMS). In this paper I will focus on the second aspect and explain the enhancements with respect to a pure copy of the EGMS data, discuss some findings for Austria and give some recommendations to further improve the usability of the EGMS data. The process of enhancing the EGMS data for inclusion in the INTERFACE STAC catalogue involves both spatial and temporal preprocessing.
This includes the merging of the EGMS tiles and spatial slicing of the data to Austria, the temporal alignment and refinement of the EGMS updates into a continuous time series with additional attributes per temporal overlap, as well as the computation of supplementary statistical parameters to enrich the time series dataset. As of October 29, 2024, three updated versions of the EGMS products are available. Version 1 (v1) covers the period from February 2016 to December 2021, version 2 (v2) spans from January 2018 to December 2022, and version 3 (v3) includes the period from January 2019 to December 2023. The EGMS update strategy employs a five-year moving window approach to maximize point density. Analysis of the EGMS ortho product demonstrates that the number of valid points increases from 1.099 million in version 1 (v1) to 1.266 million in version 2 (v2) and 1.230 million in version 3 (v3). This indicates that reducing the observation period from six to five years increases the point density to approximately 115% and 112% of the v1 value, respectively. Conversely, the temporal combination of versions 1 and 2 reduces the number of valid points to 1.036 million, while the combination of all three versions decreases the point count further to 0.998 million. This corresponds to reductions of 6% and 9%, respectively, compared to v1, due to the loss of coherent scattering over time. However, this behaviour is not the same for all 18 tiles used to cover the national territory of Austria. There is a clear trend in the west-east direction. The maximum decrease in point density is found in tile L3_E44N26, covering the area of Tirol. The minimum decrease in point density is found in tile L3_E47N28, roughly covering the area south-west of Vienna. This effect might be explained by the topography and land cover, which change from high alpine, sparsely populated terrain to moderate rolling topography with a highly urbanised environment.
The extended temporal overlap of four years facilitates a robust merging of the time series under the valid assumption that the mapped points predominantly exhibit the same deformation regime across all time series. Consequently, only a relative shift of the subsequent time series with respect to the preceding one needs to be determined, resulting in a high degree of redundancy. The standard deviation of the residuals between the shifted time series i+1 and time series i was 1.6 mm ± 1.87 mm for the merge of version 1 and version 2, and 1.3 mm ± 1.63 mm for the merge with version 3. Furthermore, the number of outliers per overlap amounted to 8.5 ± 7.1 for the merge of version 1 and version 2, and 10.0 ± 7.5 for the merge with version 3. Finally, to distinguish the predominant deformation regime (seasonal, accelerating, linear, or none), I propose calculating the root mean square error (RMSE) for each of these deformation models. The deformation regime with the minimum RMSE can be identified as the best fit. Subsequently, the reliability of this selection can be assessed based on the significance level of the model parameters. This straightforward decision tree would enable potential users to focus on the deformation pattern of interest and exclude the majority of points that do not conform to this pattern. In summary, geographic trends reveal varying point density reductions, influenced by terrain and land cover. A four-year temporal overlap allowed robust time series merging with low residuals and outlier counts. To identify deformation regimes, calculating the RMSE for seasonal, accelerating, linear, or no-deformation models is proposed, enabling user-focused selection of relevant patterns.
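The proposed minimum-RMSE classification of deformation regimes can be sketched as follows; the candidate models mirror those named above (none, linear, accelerating, seasonal), while the sampling and synthetic series are illustrative assumptions. Note that nested models always reduce RMSE, which is why the subsequent significance check on the model parameters is essential:

```python
import numpy as np

def rmse(d, fit):
    return np.sqrt(np.mean((d - fit) ** 2))

def classify(t, d):
    """Fit the candidate deformation models and return the min-RMSE name."""
    models = {
        "none": np.full_like(d, d.mean()),                    # offset only
        "linear": np.polyval(np.polyfit(t, d, 1), t),         # + velocity
        "accelerating": np.polyval(np.polyfit(t, d, 2), t),   # + acceleration
    }
    # seasonal: linear trend plus an annual sine/cosine pair
    A = np.column_stack([np.ones_like(t), t,
                         np.sin(2 * np.pi * t / 365.25),
                         np.cos(2 * np.pi * t / 365.25)])
    coef, *_ = np.linalg.lstsq(A, d, rcond=None)
    models["seasonal"] = A @ coef
    return min(models, key=lambda name: rmse(d, models[name]))

t = np.arange(0.0, 5 * 365.0, 6.0)                            # days
d = -2.0 / 365.0 * t + 3.0 * np.sin(2 * np.pi * t / 365.25)   # mm
print(classify(t, d))   # -> seasonal
```

In a production decision tree, the winning model would only be accepted if its extra parameters are statistically significant; otherwise the point falls back to the simpler regime.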
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Enhancing DESFA Pipeline Infrastructure Monitoring Through Advanced EO-based Geodetic Imaging

Authors: Michael Foumelis, Dimitra Angelopoulou, Ioannis Gerardis, Elena Papageorgiou, Jose Manuel Delgado Blasco, Paraskevas Frantzeskakis
Affiliations: Aristotle University Of Thessaloniki (AUTh), DESFA O&M Center of Southern Greece
The advancements in geodetic imaging, particularly the evolution and refinement of the Interferometric SAR (InSAR) technique along with its thorough validation, have facilitated the acceptance of the technique for operational applications. These advancements are particularly relevant and valuable for the monitoring of large-scale critical engineering infrastructures. In the context of gas pipeline surveying, spaceborne InSAR has become an essential tool for mapping and monitoring surface motion. Pipelines, as massive constructions extending over large areas, pose difficulties for conventional ground-based monitoring methods. Satellite-based InSAR provides an effective solution, enabling the early identification of pipeline segments that require closer inspection and detecting surface motion indicators that could lead to hazardous conditions, allowing for timely information and preventive actions. A prominent example of such an application is the monitoring of the DESFA pipeline network, the principal natural gas transportation system in Greece, and its associated above-ground installations. Spanning a total length of approximately 1530 km, the DESFA network passes mainly over flat terrain but also includes segments in more rugged and challenging areas. The pipeline, buried at a shallow depth of approximately 1.20 m, is systematically monitored using ground-based methods, optical satellite imagery, and aerial surveys. These traditional approaches focus primarily on detecting signs of unauthorized human intervention or other ground deformation issues. However, ground-based geodetic techniques like GNSS and leveling, and other surface displacement monitoring techniques, face limitations in covering the entire network, making them more suitable for localized areas of known activity or with pronounced deformation signals.
A monitoring system based on Copernicus Sentinel-1 mission data and the Persistent Scatterers Interferometry (PSI) technology has been established to address these challenges. The first phase of this activity focused on examining historical surface motion data starting in April 2015, utilizing the entire Sentinel-1 archive, including both ascending and descending orbital tracks. Prior to the actual interferometric processing, a feasibility analysis was conducted to investigate and anticipate areas with potential limitations, enabling the development of a strategically tailored and targeted plan. Based on an automated processing chain, a dedicated workflow was then designed to generate surface motion rates and corresponding time series at sensor resolution. Measurements extend several kilometers on either side of the pipeline, enabling the detection of deformation signals that could potentially propagate and impact the pipeline. To ensure robustness and proper compensation of different error sources, the area of interest was divided into several overlapping tiles processed independently. Post-processing included the geometric decomposition of Line-of-Sight (LoS) motion into vertical and East-West components and the separation of temporal trends from seasonal motion, facilitating easier interpretation by expert domain engineers. Several statistical properties of the measurements, quality indicators and obtained uncertainties, geographic locations showing significant motion or demonstrating proper performance, and their corresponding time series visualizations were among the key findings summarized in a structured document format by an automated reporting mechanism. Finally, a layer of human interpretation and validation is incorporated to ensure the reliability and consistency of measurements and deliverables.
The monitoring system is built to offer continuous updates, incorporating new satellite acquisitions at defined time intervals that are suited to the unique surface motion properties of each pipeline segment. This approach supplements existing monitoring activities, offering a solid and scalable solution for enhancing the safety of a critical pipeline infrastructure, while also being applicable across all aspects of the pipeline network lifecycle. It serves as a complementary tool for proactive monitoring or wide-scale emergency inspections, particularly for assessing the impact of natural or weather-related phenomena that are expected to intensify.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: VHR SAR Particle Image Velocimetry Analysis for Lava Effusion Rate Estimates at Kadovar Volcano, Papua New Guinea

Authors: Inga Lammers, Simon Plank, Valerie Graw
Affiliations: Ruhr-University Bochum (RUB), German Aerospace Center (DLR), German Remote Sensing Data Center
The monitoring of volcanic eruptions is of high importance for the safety of the general public and the advancement of scientific knowledge. This study investigates the eruption evolution of Kadovar Volcano, located in Papua New Guinea, between October 2019 and June 2022. The main objective is to analyze lava velocities using the pixel-offset technique by employing the Particle Image Velocimetry (PIV) method during the eruptive period. Additionally, the study aims to characterize the eruption dynamics over the observation period. To achieve this, the research focuses on: (i) applying the PIV method using TerraSAR-X (TSX) Synthetic Aperture Radar (SAR) data to measure lava flow velocities while addressing underestimations associated with SAR imaging geometries; (ii) cross-checking the PIV results through comparisons with optical and thermal datasets; and (iii) developing a theoretical model of the eruption’s evolution based on the findings. Kadovar Volcano is a Holocene stratovolcano located within the Bismarck Archipelago, north of Papua New Guinea. The island has a width of 1.5 km and a height of 365 m and is characterized by steep slopes. The latest eruption began on January 5, 2018, resulting in the evacuation of residents and the activation of the International Charter "Space and Major Disasters" to provide assistance. The necessity for high-resolution TSX SAR imagery arises from the small scale of the lava flow and the methodology employed, which requires the recognition of small structures to detect displacements. The remote location of the volcano and the high probability of frequent cloud cover necessitate the use of remotely acquired, weather-independent data. The TSX satellite acquires X-band images in different modes with varying resolutions. Here, data in the High-Resolution SpotLight (HS) and the Staring SpotLight (ST) imaging modes were employed.
A number of preprocessing steps are required for optimal PIV results when using SAR data, including, for example, co-registration of image pairs, image alignment and image enhancement. In the context of a PIV analysis, the algorithm identifies specific particles, in this case the structural features of the blocky lava flow, which are then tracked across images. The application of this methodology to volcanic structures therefore depends on the solidification of the crust and the subsequent formation of surface structures. The movement of these structures is identified by subtracting the mean pixel value of all frames from each individual frame. The tracking of randomly selected particles is conducted using open-source kernelized cross-correlation software, thereby enabling the measurement of displacement between consecutive frames. Additionally, TSX amplitude images were visually analyzed to determine any changes in the morphology of the volcano and the size of the lava flow field. The study faces challenges and limitations due to environmental and methodological factors, including limited revisit times, spatial resolution constraints, and SAR-specific issues such as shadowing. To verify the PIV results and enhance understanding of the eruptive dynamics, data from thermal (MODIS and VIIRS) and optical (Sentinel-2 and Landsat-8) satellite sensors were integrated as supporting data. Based on the results of the PIV analysis and the analysis of the thermal and optical satellite data, a theoretical model of the eruption’s evolution was derived. The observation period can be divided into three distinct phases of activity. The first phase, from October 2019 to October 2020, marks the return of volcanic activity following the beginning of the eruption in 2018, with the highest velocities, of up to 5.5 m per day, observed between March and June 2020.
The second phase, from November 2020 to July 2021, was characterized by minimal activity, with velocity values approaching zero and a notable reduction in thermal anomalies. The third phase, from August 2021 to June 2022, depicts a renewed surge in activity levels concentrated in the summit region. The visual interpretation of the TSX amplitude data additionally indicated the formation of a new lava dome at the summit. The study demonstrates the effectiveness of VHR SAR in capturing detailed temporal changes in volcanic activity, thereby providing crucial insights into eruption dynamics in otherwise inaccessible regions. These findings illustrate the potential of the PIV method utilizing SAR imagery for effective remote monitoring of active high-viscosity lava flows.
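The displacement-tracking step at the core of such a PIV analysis can be illustrated with a minimal sketch: the offset between two co-registered image patches is found from the peak of their cross-correlation surface. This FFT-based version is an illustrative assumption, not the authors' kernelized cross-correlation software, and the function name is invented for the example:

```python
import numpy as np
from numpy.fft import fft2, ifft2

def patch_offset(ref, cur):
    """Estimate the integer (row, col) shift between two co-registered
    patches via FFT-based cross-correlation (the pixel-offset idea)."""
    ref = ref - ref.mean()
    cur = cur - cur.mean()
    corr = np.real(ifft2(fft2(cur) * np.conj(fft2(ref))))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices so shifts are centred on zero
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)

# Synthetic check: shift a random "lava surface" texture by (3, -2) pixels
rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = np.roll(a, (3, -2), axis=(0, 1))
print(patch_offset(a, b))  # (3, -2)
```

Dividing the recovered pixel shift by the time between acquisitions, and scaling by the pixel spacing, would then give a velocity in metres per day.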
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Hypothesis Testing on a Continental Scale: GPU Based Time Series Classification

Authors: Adriaan van Natijne, Lars Keuris
Affiliations: Keuris Earth Observation
Satellite radar remote sensing has enabled monitoring of natural hazards at a global scale. In recent years, pre-processed deformation datasets have become available to the general audience, in particular with the arrival of the European Ground Motion Service (EGMS) (Crosetto et al., 2021). Thanks to this recent development, access to InSAR measurements has significantly improved. The EGMS provides both science and society with a quantification of ground motion at a millimeter scale and promotes further analysis of the underlying deformation regimes throughout the European Union, the United Kingdom, Norway and Iceland. To separate (potentially) hazardous regions from their safe surroundings in a uniform and robust manner, one or several models are commonly imposed on time series to extract behavioral statistics such as deformation trends, accelerations and breaks thereof. Unfortunately, the deformation model imposed on time series in the EGMS data is insufficient to describe all possible deformation patterns. The singular EGMS model expects a smooth, continuous deformation signal consisting of an offset, velocity, acceleration and periodicity. In contrast to this model, destructive deformation signals can be highly discontinuous and non-smooth. Hence, the EGMS model is currently insufficient for fully capturing all possible deformation signals. However, it is difficult to set up an appropriate model without accurate prior knowledge of the nature of the behavior. To accommodate as many potential geo-hazard-related deformation signals as possible, a more flexible model definition is required. Jumps in the deformation time series are of particular interest, because they point out either erratic processing (e.g. phase unwrapping errors) or potentially hazardous physical behavior (e.g. sinkhole precursors). Such a jump is commonly modeled by a step function. In addition, abrupt changes in the linear or seasonal deformation rate are equally relevant. 
The common approach is to iteratively compose a custom, individual model for each time series. However, due to the large number of time series (several billions) and the large region (Europe) over which these time series are distributed, this comes at a great computational cost. Consequently, promising potential next steps, such as a unified European-wide deformation analysis, cannot realistically take place. As a result, investigative studies based on the EGMS have instead primarily been local. Parallelized GPU processing, however, allows us to impose many models on multiple time series simultaneously rather than iteratively, and can therefore extract model parameters at unprecedented scales. For each time series in the EGMS a few thousand model variations, including step functions, were fitted. Subsequently, the best model for each time series was selected using traditional hypothesis testing. The proposed, parallelized methodology is applied to all time series within the most recent EGMS dataset. Notably, this project was completed on an ordinary consumer laptop at ~20 million models per second. It was found that the standard EGMS model fit can often be improved with more representative models. Moreover, the prescribed standard deviation of the individual EGMS deformation measurements of 4 mm has been underestimated. The systematic, automated model selection improves the quantification of the overall deformation, and serves as a classification of the deformation type. This improves the interpretability of the deformation signal and supports the explanatory power of the EGMS altogether. All this will support a wide range of users to pinpoint overlooked anomalies on a dynamic continent. References: Crosetto, M., Solari, L., Balasis-Levinsen, J., Bateson, L., Casagli, N., Frei, M., Oyen, A., Moldestad, D. A., and Mróz, M.: Deformation monitoring at European scale: the Copernicus Ground Motion Service, Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLIII-B3-2021, 141–146, 2021, DOI: 10.5194/isprs-archives-XLIII-B3-2021-141-2021
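The idea of fitting many model variations (with and without step functions) to one time series and keeping the step only when it is statistically justified can be sketched in serial form. The least-squares model family, the noise level, and the acceptance threshold below are simplified assumptions, not the authors' GPU implementation:

```python
import numpy as np

def best_step_model(t, y, sigma=4.0, alpha_crit=10.0):
    """Fit an offset + velocity model and, for each candidate epoch, a
    variant with an added step function; accept the step only if the drop
    in squared residuals is significant (illustrative test).
    Returns (step_epoch or None, fitted parameters)."""
    A0 = np.column_stack([np.ones_like(t), t])
    x0, *_ = np.linalg.lstsq(A0, y, rcond=None)
    ssr0 = np.sum((y - A0 @ x0) ** 2)
    best = (None, x0, ssr0)
    for k in range(2, len(t) - 2):            # candidate step epochs
        step = (t >= t[k]).astype(float)
        A = np.column_stack([A0, step])
        x, *_ = np.linalg.lstsq(A, y, rcond=None)
        ssr = np.sum((y - A @ x) ** 2)
        if ssr < best[2]:
            best = (t[k], x, ssr)
    # Keep the step model only if the improvement, scaled by the assumed
    # measurement variance, exceeds a chi-square-like threshold
    if best[0] is not None and (ssr0 - best[2]) / sigma**2 < alpha_crit:
        return None, x0
    return best[0], best[1]

# Synthetic deformation series [mm] with a 30 mm jump halfway through
t = np.arange(0, 60, dtype=float)
y = 0.5 * t + 30.0 * (t >= 30) + np.random.default_rng(1).normal(0, 2, t.size)
epoch, params = best_step_model(t, y)
print(epoch)  # 30.0
```

A GPU version would evaluate all candidate design matrices for many time series as one batched linear-algebra operation instead of this Python loop.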
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Processing SAR Images by PHASE: Persistent Scatterers Highly Automated Suite for Environmental Monitoring

Authors: Roberto Monti, Mirko Reguzzoni, Lorenzo Rossi
Affiliations: Politecnico Di Milano
Planet Earth has experienced significant transformations over time, with profound impacts on both natural ecosystems and human-built environments. Geohazards, arising from natural or anthropogenic processes, can pose significant threats to both human safety and the territory. These phenomena, such as landslides, earthquakes, volcanic eruptions, and subsidence, can directly impact critical infrastructure and communities, as well as increase their risk through exposure to cascading effects. Therefore, monitoring geohazards is crucial for mitigating their potentially dramatic consequences, particularly when this can be done reliably and in near real-time. Synthetic Aperture Radar (SAR) technology offers a powerful solution for observing Earth’s surface under all-weather and all-light conditions, overcoming some limitations of optical instruments. While factors such as revisit time and spatial resolution may pose challenges, SAR’s ability to provide extensive spatial coverage makes it invaluable for monitoring the dynamics of geohazards that unfold over broad areas. SAR-based geospatial processed products are already available, with the European Ground Motion Service (EGMS) being a well-known example. It provides Persistent Scatterer (PS) deformation time series, computed from Sentinel-1 data, across the whole continent. These products have proved reliable for many entry-level applications or analyses where the spatial extent is dominant, but they lack small-scale applicability due to the coarse spatial resolution of the level 3 gridded PS. Moreover, addressing the behavior of deformation-connected phenomena often requires proper modelling of the observed signals through statistical methodologies that are not generally implemented. Therefore, several limitations are inherent to existing products and solutions, making both the processing workflow and the interpretation of results accessible only to SAR experts. 
To address these challenges, we developed PHASE (Persistent scatterers Highly Automated Suite for Environmental monitoring), a MATLAB-based software suite designed to automatically perform geospatial analyses on data processed using the Persistent Scatterer Interferometry (PSI) technique. The first module of PHASE automates the entire PSI analysis, exploiting both the SNAP and StaMPS software. The workflow of the second module, dedicated to geospatial processing, begins with deterministic modeling of the deformation time series for each PS using cubic splines. The number of splines is iteratively selected based on the Minimum Description Length (MDL) index, while outliers are removed through a Student's t-test applied to the residuals. After that, the remaining signal undergoes Fourier analysis to identify the principal signal components, with the corresponding harmonics incorporated into the modeled signal. Then, the empirical covariance function of the residual signal is estimated. A significance analysis is performed on this covariance function; if it is significant, the residuals are stochastically modelled by collocation, to extract further information from the given time series. Otherwise, the deterministic model is deemed sufficient. This workflow is applied to both 1D and 2D geometries, reflecting the nature of most deformation phenomena. For instance, linear features such as roads and railways fall into the 1D category, while broader and more complex structures like dams, landslides, and volcanic regions are categorized as 2D. A spatial modelling of the displacement in time is also performed. For 1D geometries, each PS is assigned to a position along the centerline of the monitored element, and displacements at each epoch are spatially interpolated using cubic splines. For 2D geometries, displacements at each time step are spatially interpolated using bicubic splines. 
In both cases, the spline interpolation is evaluated on a uniformly spaced 1D or 2D grid. By accommodating both 1D and 2D geometries, PHASE offers a versatile and robust framework for analyzing a wide range of deformation processes. Its ability to process SAR PSI data in a robust and statistically driven way enables researchers and decision-makers to derive critical insights. This methodology can further allow end users to enhance early warning systems, strengthen infrastructure resilience, and implement measures to protect lives and property. As a comprehensive tool for geospatial analysis, PHASE represents a significant step forward in the effective monitoring and management of geohazards. Limitations of already available tools and instruments have been overcome through extensive automation of state-of-the-art statistical methods, resulting in a deformation model that is easily interpretable even by non-SAR experts.
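The first step of the geospatial module (deterministic cubic-spline modeling with the knot count chosen by an MDL index) might look roughly like the sketch below. PHASE itself is MATLAB-based; this Python version, the specific MDL form, and the uniform knot placement are assumptions for illustration:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def fit_spline_mdl(t, y, max_knots=10):
    """Fit cubic regression splines with an increasing number of interior
    knots and keep the fit minimising a Minimum Description Length index
    (illustrative MDL form; PHASE's implementation may differ)."""
    n = len(t)
    best = (np.inf, None)
    for m in range(0, max_knots + 1):
        knots = np.linspace(t[0], t[-1], m + 2)[1:-1]  # interior knots
        spl = LSQUnivariateSpline(t, y, knots, k=3)
        ssr = float(np.sum((y - spl(t)) ** 2))
        p = m + 4                                      # cubic spline dof
        mdl = 0.5 * n * np.log(max(ssr, 1e-12) / n) + 0.5 * p * np.log(n)
        if mdl < best[0]:
            best = (mdl, spl)
    return best[1]

# Synthetic PS time series [cm]: linear trend + seasonal term + noise
t = np.linspace(0, 4, 120)
y = -2.0 * t + np.sin(2 * np.pi * t) \
    + np.random.default_rng(3).normal(0, 0.2, t.size)
spl = fit_spline_mdl(t, y)
resid = y - spl(t)
print(resid.std() < 0.3)  # True: residuals near the noise level
```

The residuals would then feed the Fourier and covariance analyses described in the abstract.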
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Monitoring Linear Infrastructure in Sweden Using InSAR Techniques

Authors: Saeid Aminjafari, Prof Leif Eriksson
Affiliations: Chalmers University of Technology
Sweden’s railway and road networks face increasing risks from ground instability, with severe financial and operational consequences. For example, the Malmbanan railway, a critical route for transporting iron ore, highlights these vulnerabilities: recent derailments resulted in 15 kilometers of damaged track, halting operations for 76 days and causing daily losses of €10 million for the mining company. These risks extend to other railways and highways, e.g. north of Gothenburg, where a large fraction of the transport infrastructure is built on ground with a high clay content, and Härnösand, where a derailment occurred after heavy rain. The stability of these linear structures is essential for Sweden’s transport system and economic resilience. Despite these issues, Sweden lacks sufficient studies applying advanced InSAR techniques to monitor railways and roads. The European Ground Motion Service (EGMS), while valuable, primarily uses Persistent Scatterer Interferometry (PSI), which is limited in vegetated and non-urban areas. These limitations hinder its ability to detect deformation over natural terrains and slopes, where distributed scatterers (DS) dominate. EGMS’s standardized spatial resolution further restricts its utility for the localized, high-precision monitoring critical to infrastructure stability. To address these gaps, we adopt a dual approach. Small Baseline Subset (SBAS) InSAR is used to map deformation over non-urban and vegetated areas by leveraging DS, while Persistent Scatterer (PS) data are integrated with SBAS results to create DS+PS maps for enhanced accuracy and spatial density. The project utilizes radar data from the Sentinel-1 (medium resolution) and TerraSAR-X (high resolution) satellites. Ground-based measurements, including GNSS and leveling data, will validate the InSAR results. 
For Malmbanan, we have processed 214 Sentinel-1 images from both ascending and descending orbits, generating 330 interferograms to build the 2D deformation network. We employed small single-look (azimuth direction) interferogram processing for high-resolution maps. To mitigate decorrelation, we excluded winter and snowmelt season interferograms, retaining only short temporal baselines. The resulting maps reveal not only deformation along railways and roads but also in their vicinities, offering valuable insights into terrain stability beyond permanent scatterers. This work represents an application of SBAS and DS+PS techniques for Swedish infrastructure monitoring. By addressing limitations in existing systems, our approach supports improved maintenance strategies, reduces risks, and ensures the long-term resilience of critical transport systems.
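The interferogram filtering described above (excluding winter and snowmelt acquisitions, keeping only short temporal baselines) can be sketched as a simple pair-selection routine. The 48-day threshold and the set of excluded months below are illustrative assumptions, not the authors' actual settings:

```python
from datetime import date

def select_pairs(dates, max_tbase_days=48, exclude_months=(11, 12, 1, 2, 3, 4)):
    """Form small-baseline interferogram pairs, dropping winter/snowmelt
    acquisitions and keeping only short temporal baselines (thresholds
    and months are assumptions for illustration)."""
    usable = sorted(d for d in dates if d.month not in exclude_months)
    pairs = []
    for i, d1 in enumerate(usable):
        for d2 in usable[i + 1:]:
            if (d2 - d1).days <= max_tbase_days:
                pairs.append((d1, d2))
    return pairs

# Illustrative Sentinel-1-like acquisition plan over one snow-free season
acqs = [date(2021, m, day) for m in (5, 6, 7, 8, 9) for day in (6, 18, 30)]
pairs = select_pairs(acqs)
print(len(pairs))  # 50
```

A real SBAS network would additionally constrain the perpendicular (spatial) baseline, which is omitted here.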
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A.02.05 - POSTER - Peatland

Peatlands cover only 3% of the world’s land, mainly in the boreal and tropical zones, but they store nearly 30% of terrestrial carbon and twice the carbon stored in forests. When drained and damaged they exacerbate climate change, emitting 2 Gt of CO2 every year, which accounts for almost 6% of all global greenhouse gas emissions. The unprecedented observations collected by the Copernicus Sentinel family and other sensors allow new ways to monitor and manage peatlands. Emphasis will be put on advances in improved mapping and monitoring of intact, degraded and cultivated peatlands for conservation, management and restoration in a global and a specific climate zone (e.g. boreal, temperate, tropical) context. This session will showcase some of the more recent key achievements including methods/algorithms, science and applications.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Assessment of Surface Dynamics of Peatlands Using Sentinel-1 and Meteorological Data

Authors: Dita Lumban Gaol, Philip Conroy, Simon van Diepen, Dr Freek van Leijen, Prof Ramon Hanssen
Affiliations: Delft University Of Technology
Large parts of the Netherlands consist of low-lying coastal and fluvial wetlands, where peat dominates the western and northern regions. Anthropogenic activities, especially water table lowering for agriculture and urban development, have induced land subsidence through peat consolidation, shrinkage, and oxidation, releasing significant CO₂ emissions. Managing and mitigating these effects in peatlands requires a comprehensive understanding of their driving mechanisms and spatio-temporal variations. Advances in space geodetic techniques, particularly interferometric synthetic aperture radar (InSAR), facilitate surface displacement monitoring by analyzing the InSAR phase over time. While time series InSAR analysis effectively estimates displacement, its precision, accuracy, and representativeness are compromised by temporal decorrelation, noise, and dynamic soil movement, especially over grasslands on peat soils. Moreover, loss-of-lock events caused by an irrecoverable loss of coherence disrupt the time series and introduce arbitrary, unintelligible phase offsets (Conroy et al., 2023a). These events should be identified to prevent misinterpreting phase offsets as displacement. Strategies such as multilooking and using contextual information, e.g. by integrating meteorological data, have improved the reliability of InSAR displacement estimates (Conroy et al., 2024). However, more experience with the efficacy of InSAR-based surface dynamics assessments is required. Here we estimate and analyze surface motion in a regional peat area south of Delft, the Netherlands, with high spatial variability in soil types, using Sentinel-1 data from 2016 to 2022 and the SPAMS (Simple Parameterization for the Motion of Soils) model (Conroy et al., 2023b). SPAMS estimates surface motion parameters based on physical processes and distinguishes between reversible and irreversible subsidence. 
The model uses precipitation and evapotranspiration data from nearby meteorological stations, assuming that these factors primarily drive soil movement. The analysis focuses on permanent grassland parcels to exclude non-Lagrangian processes related to crop cycles and plowing. Displacement time series were estimated for contextually homogeneous parcel groups, categorized by soil type and groundwater management zone, to address loss-of-lock events, assuming that parcels in the same group exhibit similar behavior. The results reveal clear sub-seasonal patterns aligned with precipitation and evapotranspiration cycles. A water surplus from increased precipitation and reduced evapotranspiration causes uplift, while subsidence is due to water deficits driven by elevated evapotranspiration and reduced precipitation. The SPAMS model highlights a direct correlation between irreversible subsidence and climatic conditions. Notably, prolonged dry conditions in 2018 led to the highest estimated levels of subsidence, corresponding to a rainfall deficit and high evapotranspiration compared to other years. Subsidence rates also vary across parcel groups with different soil classes. Analysis of parcel groups with at least 20 members reveals significant subsidence in peat-dominated areas, whereas clay soils generally exhibit lower rates. For parcels with a thin clay cover, the SPAMS parameters indicate a lower evapotranspiration factor, increasing sensitivity to precipitation. In addition, these parcels have a smaller irreversible subsidence factor. The water-retaining properties of heavy clay presumably explain these differences, as clay can impede drainage, keeping the underlying peat saturated and thereby reducing peat consolidation. Mitigating peatland subsidence requires maintaining soil water content, especially during dry periods, to prevent irreversible subsidence while preserving dairy farming operations. 
Achieving this balance involves using and updating the SPAMS parameters to monitor potential subsidence events, implement water management strategies, and contribute to mitigating peatland degradation. REFERENCES Conroy, P., Van Diepen, S.A., Van Leijen, F.J., Hanssen, R.F., 2023a. Bridging Loss-of-Lock in InSAR Time Series of Distributed Scatterers. IEEE Transactions on Geoscience and Remote Sensing 61. doi:10.1109/TGRS.2023.3329967. Conroy, P., van Diepen, S.A., Hanssen, R.F., 2023b. SPAMS: A new empirical model for soft soil surface displacement based on meteorological input data. Geoderma 440, 116699. doi:10.1016/j.geoderma.2023.116699. Conroy, P., Lumban-Gaol, Y., Van Diepen, S., Van Leijen, F., Hanssen, R.F., 2024. First wide-area Dutch peatland subsidence estimates based on InSAR, in: IGARSS 2024 IEEE International Geoscience and Remote Sensing Symposium, IEEE. pp. 10732–10735. doi:10.1109/IGARSS53475.2024.10642504.
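The central idea of a meteorologically driven soil-motion model (a reversible term following the water balance and an irreversible term accumulating only during water deficits) can be illustrated with a toy sketch. The coefficients, units, and functional form below are invented for illustration and are not the published SPAMS model:

```python
import numpy as np

def toy_soil_motion(precip, pet, c_rev=0.05, c_irr=0.02, f_et=1.0):
    """Toy surface-motion model in the spirit of SPAMS: a reversible term
    follows the cumulative water balance (precipitation minus scaled
    evapotranspiration), while an irreversible term accumulates only
    during water deficits. All coefficients are illustrative assumptions."""
    balance = precip - f_et * pet                 # daily water balance [mm]
    reversible = c_rev * np.cumsum(balance)       # swelling/shrinkage [mm]
    deficit = np.clip(-balance, 0, None)          # deficit days only
    irreversible = -c_irr * np.cumsum(deficit)    # permanent subsidence [mm]
    return reversible + irreversible

# One synthetic year: modest surplus, with a 100-day dry spell in summer
days = 365
precip = np.full(days, 2.0)
precip[150:250] = 0.5
pet = np.full(days, 1.0)
pet[150:250] = 3.0
motion = toy_soil_motion(precip, pet)
print(f"{motion[-1]:.2f}")  # -4.25 mm net displacement after one year
```

The reversible component recovers when the water balance turns positive again, while the irreversible component does not, mirroring the distinction the abstract draws.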
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Integrating InSAR and machine learning to estimate subsidence in deforested and drained tropical peatlands in Central Kalimantan, Indonesia

Authors: Deha Agus Umarhadi, Prof. Florian Siegert
Affiliations: Ludwig Maximilian University of Munich, Remote Sensing Solutions GmbH
Tropical peatlands play a crucial role in the global carbon cycle, as they store a large amount of soil carbon. Peatlands are naturally waterlogged, as the swamp forests maintain an anoxic state. However, the majority have been facing major degradation and drainage due to land conversion for agriculture and logging, including those located in Indonesia. Once peatlands are drained, carbon dioxide is released in huge quantities due to the bacterial decomposition of the plant biomass. Subsidence of the peat layer occurs as a result of peat consolidation, decomposition, and shrinkage due to desiccation. Interferometric Synthetic Aperture Radar (InSAR) has been widely used to monitor land subsidence from space effectively. However, SBAS-InSAR is limited in the continuity of its spatial coverage due to decorrelation, especially when implemented in vegetated areas over peatlands. In this study we used SBAS-InSAR and machine learning to capture peat subsidence in a large degraded peatland area in Central Kalimantan, Indonesia. We applied a time-series small baseline subset (SBAS) InSAR analysis using 45 stacks of Sentinel-1 C-band data (2021-2022). The study area covered Blocks B and C of the ex-Mega Rice Project area. Several regression-based machine learning algorithms were examined, i.e., Support Vector Regression (SVR), Random Forest Regression (RFR), eXtreme Gradient Boosting (XGB), and Light Gradient-Boosting Machine (LightGBM). Predictor maps included land use/land cover (1990, 2000, 2009, 2015, 2020, and 2022), peat depth, distance to peat edge, canal density, distance to canal, year of disturbance, Normalized Burn Ratio (1990, 1995, 2000, 2005, 2010, 2015, 2020, 2022), latest fires, and frequency of fires. Forests and plantation areas were excluded from our analysis as they may contain false estimates from the InSAR analysis. 
Our results showed that SBAS InSAR could identify peat vertical change covering 79.64% of the study area based on a temporal coherence threshold of 0.25, while the rest was estimated by the machine learning model. Based on the model training and testing, RFR outperformed the other methods with an RMSE of 1.170 cm/year and an R2 of 0.740. Overall, the study area subsided at an average rate of –1.586 cm/year, while uplift was also observed in the southern part. We collected dGPS data on the ground and subsidence pole measurements to validate the remote sensing based subsidence rates.
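The gap-filling strategy (training a regressor on pixels where coherent InSAR estimates exist, then predicting subsidence for the decorrelated remainder from the predictor maps) can be sketched with synthetic stand-in data. The predictors, coefficients, and coherence fraction below are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Synthetic stand-ins for predictor rasters (e.g. peat depth, canal
# density, distance to canal), flattened to a pixel table
n = 2000
X = rng.random((n, 3))
subsidence = -3.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(0, 0.1, n)

# Pixels above the temporal coherence threshold have an InSAR estimate;
# the rest are gaps to be filled by the regression model
coherent = rng.random(n) > 0.2
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[coherent], subsidence[coherent])

filled = subsidence.copy()
filled[~coherent] = model.predict(X[~coherent])
rmse = np.sqrt(np.mean((filled[~coherent] - subsidence[~coherent]) ** 2))
print(rmse < 0.5)  # True: predictions track the held-out pixels
```

In practice the same fit/predict pattern applies per pixel of the real predictor stacks, with RFR chosen here only because it performed best in the study.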
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Global Shocks and Disruptions to Scottish Peatlands – Modelling Carbon-Water Interactions and Feedbacks

Authors: Luisa Orci Fernandez, Mathew Williams, Professor Roxane Andersen, Dr Luke Smallman
Affiliations: University Of Edinburgh, University of the Highlands and Islands
Peatlands occupy 25% of Scotland, and they store more than 50% of the soil carbon in the country. Furthermore, the Flow Country in the north of Scotland is the largest expanse of blanket mire in Europe and the largest single terrestrial carbon store in the UK. Scotland has set ambitious net-zero goals, including tree planting and peatland restoration targets. Understanding the interactions between carbon and water cycles in peatland ecosystems is crucial for achieving Scotland's climate mitigation goals. Recent geopolitical disruption to energy and trade has shifted the focus of Scotland's political attention to food security and agricultural policy, highlighting the multiple demands on land use. Recent extreme weather has highlighted climate change risks to Scotland’s hydrological system. A crucial knowledge gap is how Scotland's hydrological status and peatland C stocks will adjust under climate and land use change. Hydrological risk is rarely assessed in land use policies, particularly the potential impacts of changes in soil moisture on plant growth, food production, and peatland restoration. This information is vital as hydrological feedback on terrestrial ecosystems may determine the success of land use policies. In this study we seek to address these knowledge gaps in peatland hydrology and carbon dynamics by applying the CARDAMOM data assimilation framework to calibrate and validate the DALEC terrestrial ecosystem model. We used CARDAMOM to calibrate and validate the DALEC model at a monthly time step using downscaled satellite-based Earth observations of Leaf Area Index (LAI) and Above Ground Biomass, and database values of Soil Organic Matter. To evaluate DALEC performance over organic soils, we then validated our analysis using independent estimates of Net Ecosystem Exchange of CO2 (NEE) from the Auchencorth Moss ICOS Eddy Covariance tower, located in a lowland blanket bog in Scotland. 
Our CARDAMOM-calibrated DALEC captures the overall trend of LAI (R2 = 0.67, RMSE = 0.45 m2/m2), with an uncertainty overlap (0-1) between model and assimilated LAI of 0.60. Our analysis was able to reproduce independent NEE observations with moderate deviation from predicted values (R2 = 0.73 and RMSE = 0.48 gC/m2/day). Our analysis performed less well against water balance components such as Soil Water Content at 30 cm depth (BIAS = 0.53 m3/m3). In this poster we present our efforts to enhance our water balance analysis, including implementing alternative organic soil hydrology equations.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Assessing the Wetness of Peatlands in Sweden Using ALOS-2 L-band Data

Authors: Georgina Page, Dr Armando Marino, Professor Peter Hunter, Jens-Arne Subke, Brian Barrett
Affiliations: University Of Stirling, University of Glasgow
1. Introduction Peatlands are an important ecosystem that stores large amounts of carbon due to low organic matter decomposition rates caused by their high water table. However, up to 25% of peatlands in Europe are degraded [1] and in need of restoration. The health of bogs can be observed by looking at the water table depth (WTD) and the overall wetness of the bogs to check that the peatland is in a waterlogged condition. Synthetic Aperture Radar (SAR) satellite data can be used to observe the WTD and wetness of the bogs remotely to assess their condition, guiding assessment of restoration requirements and validation of outcomes following restoration. The WTD has been observed using C-band SAR, with correlations found using soil moisture [2] or the backscatter intensity of dual-pol data [3,4]. L-band data, however, has a longer wavelength than C-band and therefore a greater penetration depth, which should allow better monitoring of the WTD. Our work assesses the ability of L-band data from ALOS-2 to monitor changes in the wetness of bogs in Sweden. 2. Methods 2.1 Study area Three Swedish bogs (Rösjö Mosse, Blängsmossen and sections of the Sydbillingens Platå, located near Skövde) were observed for this study. These three bogs were chosen due to their similarity (raised bogs) to Flanders Moss (near Stirling, Scotland), where previous work has been completed [5]. The wetness of the Swedish bogs was calculated using data from local weather stations that recorded daily precipitation, temperature and snow depth over the observed time period (2020-2022) [6,7,8]. Using the daily temperature, the potential evapotranspiration was calculated for each day at each bog. Then, the wetness was calculated as a ratio of precipitation and potential evapotranspiration [9]. 
To get an accurate representation of wetness, the precipitation and potential evapotranspiration were accumulated from several days before the acquisition, to test the best number of preceding days (1, 3, 7, 10, and 30 days). 2.2 ALOS-2 Between 8th August 2020 and 16th April 2022, 20 quad-pol ALOS-2 acquisitions of the Swedish bogs were acquired. The images were first filtered to remove dates where the average daily temperature was below 0.5°C, or where snow depth was recorded at any of the nearby weather stations. The 9 resulting images were calibrated, co-registered and a boxcar filter (9 x 9) was applied. Different variables were used to identify correlations with the wetness of the bog, looking at the absolute wetness ratio for individual dates and at the changes over subsequent dates. The parameters calculated were from the Pauli, Cloude-Pottier and Touzi decompositions and the intensities of HH, HV, VH, and VV. Additionally, the change matrix of the coherency matrix (T₂ - T₁) was computed for quad-pol data, along with the change matrix of the covariance matrix for dual-pol data (VV/VH and HH/HV) [10]. From the change matrix the eigenvalues and eigenvectors were calculated, which represent the greatest changes and the type of scattering being added or removed from the system between the two observed dates. 3. Results The strongest correlations relate to changes in the wetness over time and not absolute values for individual dates. For the strongest relationship, the wetness ratio must be from the accumulation of the previous 30 days. Assessing all the different variables identifies a strong relationship between changes in the surface scattering and the wetness of the bogs, as seen by looking at the RGB Pauli images of the change matrix. However, the strongest relationship is with the highest eigenvalue of the change matrix, either for dual-pol (VV/VH) or quad-pol data. 
For both the dual and quad-pol values the results show that an increase in the eigenvalue correlates with an increase in the change of wetness. For the quad-pol data the R² value was 0.87, while for the dual-pol data R² = 0.88. To further test the results, land map data from Lantmäteriet [11] was used to map forested areas within the bogs, and these sections were masked to ensure the cover area was just the peatland bogs. This improved the R² values slightly, to 0.89 for the quad-pol data and 0.90 for the dual-pol. Overall, the small difference between R² values shows that this methodology could be exploited with dual-pol sensors if the interest is exclusively on wetness indicators. These results show that L-band has strong potential for looking at the wetness of bogs, yet the results could be improved with in-situ data of the WTD. This would improve the understanding of the scattering within the bog itself instead of just looking at changes in the climate. The limited data available for ALOS-2 restricts full research capabilities, and further work using future NISAR data would allow the chance to look at more peatlands and assess the health of these bogs. 4. Acknowledgements Contains data derived from JAXA ALOS-2 products, all rights reserved, provided by EO-RA3 PI No. ER3A2N039; Contains modified Copernicus Climate Change Service information between 2020-2022 (neither the European Commission nor ECMWF is responsible for any use that may be made of the Copernicus information or data it contains); This work was supported by the Natural Environment Research Council via an IAPETUS2 PhD studentship held by Georgina Page (grant reference NE/S007431/1). 5. References [1] F. Tanneberger, A. Moen, A. Barthelmes, E. Lewis, L. Miles, A. Sirin, C. Tegetmeyer, and H. Joosten. Mires in Europe—regional diversity, condition and protection. Diversity, 13, 8 2021. ISSN 14242818. doi: 10.3390/D13080381. [2] K. Lees, R. Artz, D. Chandler, T. Aspinall, C. Boulton, J. 
Buxton, N. Cowie, and T. Lenton. Using remote sensing to assess peatland resilience by estimating soil surface moisture and drought recovery. Science of The Total Environment, 761:143312, 3 2021. ISSN 00489697. doi: 10.1016/j.scitotenv.2020.143312. [3] M. Bechtold, S. Schlaffer, B. Tiemeyer, and G. D. Lannoy. Inferring water table depth dynamics from ENVISAT-ASAR C-band backscatter over a range of peatlands from deeply-drained to natural conditions. Remote Sensing, 10, 4 2018. ISSN 20724292. doi: 10.3390/rs10040536. [4] T. Asmuß, M. Bechtold, and B. Tiemeyer. On the potential of Sentinel-1 for high resolution monitoring of water table dynamics in grasslands on organic soils. Remote Sensing, 11, 2019. ISSN 20724292. doi: 10.3390/rs11141659. [5] B. Sterratt, A. Marino, C. Silva-Perez, G. Page, P. Hunter and J.-A. Subke, "Peatland Water Table Depth Monitoring Using Quad-Pol L-Band Sar," IGARSS 2023 - 2023 IEEE International Geoscience and Remote Sensing Symposium, Pasadena, CA, USA, 2023, pp. 1469-1472, doi: 10.1109/IGARSS52108.2023.10281800. [6] Tveito, O.E., E.J. Førland, R. Heino, I. Hanssen-Bauer, H. Alexandersson, B. Dahlström, A. Drebs, C. Kern-Hansen, T. Jónsson, E. Vaarby-Laursen and Y. Westman, 2000, Nordic Temperature Maps, DNMI Klima 9/00 KLIMA, Norwegian Meteorological Institute. [7] Tveito, O.E., Bjørdal, I., Skjelvåg, A.O., Aune, B. A GIS-based agroecological decision system based on gridded climatology, 2005, Meteorol. Appl., 12, 57-68, DOI: 10.1017/S1350482705001490. [8] Copernicus. Copernicus Climate Change Service, Climate Data Store, (2021): Nordic gridded temperature and precipitation data from 1971 to present derived from in-situ observations. Copernicus Climate Change Service (C3S) Climate Data Store (CDS). https://doi.org/10.24381/cds.e8f4a10c, 2021. [Online, Accessed: 23/04/24]. 
[9] Bourgault, M-A., Larocque M., and Garneau M., “How do hydrological setting and meterological conditions influence water table depth and fluctuations in ombrotrophic peatlands?” Journal of Hydrology, 2019 [10] A. Marino and I. Hajnsek. A change detector based on an optimization with polarimetric SAR imagery. IEEE Transactions on Geoscience and Remote Sensing, 52:4781–4798, 2014. ISSN 01962892. doi:10.1109/TGRS.2013.2284510. [11] Lantmäteriet, Map 1:50,000 Download, raster, https://www.lantmateriet.se/sv/geodata/vara-produkter/produktlista/karta-150-000-nedladdning-raster/ [Online, Accessed 15/10/24]
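As an aside, the R² statistic reported for the eigenvalue-wetness regressions can be sketched as a simple linear fit; the data below are synthetic placeholders, not the study's ALOS-2 values.

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination (R^2) for a simple linear fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Synthetic stand-ins for the eigenvalue change indicator and wetness change
rng = np.random.default_rng(0)
eig_change = rng.uniform(0.0, 1.0, 50)
wetness_change = 2.0 * eig_change + rng.normal(0.0, 0.1, 50)
r2 = r_squared(eig_change, wetness_change)
print(0.85 < r2 <= 1.0)  # → True
```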

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Automated Identification of Potential Peatland Areas in Closed Forest Canopies Through the Detection of Drainage Ditches: A Case Study in Austria

Authors: Oliver Rehberger, Isabella Greimeister-Pfeil, Gerhard Egger, Helmut Kudrnovsky, Gebhard Banko
Affiliations: Environment Agency Austria
Peatlands are vital for climate regulation due to their exceptional carbon storage capacity. By preventing the decomposition of plant matter, peatlands act as natural carbon sinks, mitigating climate change. Moreover, healthy peatlands help regulate water cycles, reduce the risk of floods and play an important role for biodiversity. However, when drained for agriculture or other purposes, peatlands release significant amounts of carbon dioxide and methane, contributing to global warming. For Austria, a large number of peatland areas and suspected peatland areas have been known for decades, but large data gaps remain, especially in dense forests. The manual mapping of unknown or suspected peatland areas is very time-consuming. Satellite-based remote sensing methods could provide support, but they have very limited penetration depths, especially over dense canopies. Here we present an alternative approach for the detection of suspected peatland areas in forests, which relies on the use of digital terrain models (DTM). A 1 m x 1 m DTM is used to detect drainage ditches by first applying a high-pass median filter (HPMF) and then combining this with an approach that finds local depressions, from which trench structures are detected by identifying opposite-facing slopes. Although the combination of the two methods improves trench detection, limitations remain, arising from the spatial resolution of the DTM, the setting of thresholds for the detection of certain trench widths, and the additional detection of terrace structures or trenches along roads. The study is carried out for five regions across Austria where the presence of peatlands is highly likely. The resulting maps of approximate ditch length within a given area give a clear indication of where peatlands might be found, even if these are, to a large extent, heavily disturbed by the drainage ditches.
Although this method cannot directly delineate unknown and undisturbed peatland areas, it gives an initial indication of where they might be located and can significantly accelerate mapping. In doing so, it provides a way forward to close gaps in Austria’s greenhouse gas balance, offers opportunities for restoration and serves as valuable information for calculating surface runoff.
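The high-pass median filtering step can be sketched on a one-dimensional elevation profile: subtract the local median surface from the DTM so that narrow depressions stand out as strongly negative residuals. The window size, ditch geometry and threshold below are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def high_pass_median(profile, size=15):
    """High-pass median filter (HPMF): elevation minus its local median.
    Strongly negative residuals mark narrow depressions such as ditches."""
    pad = size // 2
    padded = np.pad(profile, pad, mode="edge")
    background = np.array([np.median(padded[i:i + size])
                           for i in range(len(profile))])
    return profile - background

# Illustrative 1 m-resolution elevation profile across a gentle slope
profile = 0.01 * np.arange(100)    # regional slope, 1 cm per metre
profile[48:51] -= 2.0              # 3 m-wide, 2 m-deep drainage ditch
residual = high_pass_median(profile)
ditch = residual < -0.5            # threshold for candidate ditch pixels
print(np.flatnonzero(ditch))       # → [48 49 50]
```

The median background removes the regional slope but not the narrow ditch, so a fixed residual threshold isolates the trench; in two dimensions the same idea applies with a square window.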

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Integrating Radar and Hyperspectral Data to Assess Ecological, Hydrological and Mechanical Dynamics of a Temperate Peatland.

Authors: Rachel Walker, Professor David Large, Professor Doreen Boyd
Affiliations: University of Nottingham
Combining different methodologies and datasets can improve understanding of peatland dynamics, in particular the interrelationships between ecology, hydrology and mechanics. This is challenging due to the different temporal, spectral and spatial resolutions of the data sources, so typically these are researched either in isolation or in combination with one other measure. Here we present an assessment of peat condition using a combination of InSAR and hyperspectral datasets. Radar data from Sentinel-1 were collected at a high temporal resolution; however, there is limited ground data to analyse in conjunction with them. Hyperspectral data were collected at a range of spatial resolutions and at high spectral resolution (EnMAP satellite, piloted airborne, unmanned airborne and ground), enabling the effect of decreasing spatial resolution to be analysed. All data were from the Flow Country, the world’s largest contiguous peatland, which is typically cloudy or wet, limiting hyperspectral data collection and quality, whereas the radar data were not limited by weather or cost. Hydrological changes were monitored using InSAR coherence data from Sentinel-1, with relationships between the satellite data and ground data (soil moisture and groundwater level) assessed using cross-correlation and Pearson’s coefficient. Ground measurements were collected at an eroded and a near-natural site, and comparisons were made between areas at different stages of restoration. Ecology was mapped by applying machine learning to hyperspectral data, trained using field data from four areas in different conditions (near-natural, restored in 2006, restored in 2015 and eroded). These data were studied in relation to the mechanics of the peatland (bog breathing), which was modelled using InSAR phase data. We found that soil moisture and InSAR coherence demonstrate strong relationships, especially during warmer, drier periods.
Spectral data at the satellite level show how peatlands respond to restoration but lack species/plant-functional-type detail. Initial analysis suggests that the timing of the surface-motion peaks and the amplitude and rate of swell/shrink are related to the ecology, when split into binary classes (containing Sphagnum/pools or not), and to the hydrology, in relation to seasonal changes in water loading. Overall, our findings further understanding of the interrelationships between peatland characteristics.
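The cross-correlation analysis mentioned above (Pearson's coefficient evaluated at a range of temporal lags between a satellite series and a ground series) can be sketched as follows; the series and the two-acquisition delay are synthetic assumptions for illustration.

```python
import numpy as np

def lagged_pearson(a, b, max_lag=5):
    """Pearson correlation of series a against series b at integer lags."""
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:
            x, y = a[:lag], b[-lag:]
        elif lag > 0:
            x, y = a[lag:], b[:-lag]
        else:
            x, y = a, b
        out[lag] = np.corrcoef(x, y)[0, 1]
    return out

# Illustrative series: a coherence-like signal tracking soil moisture
# with a delay of two acquisitions
rng = np.random.default_rng(1)
sm = np.sin(np.linspace(0, 6 * np.pi, 120)) + rng.normal(0, 0.2, 120)
coh = np.roll(sm, 2)                 # "coherence" lags soil moisture by 2 steps
corr = lagged_pearson(coh, sm, max_lag=4)
best = max(corr, key=corr.get)       # lag with the strongest correlation
print(best)                          # → 2
```

Scanning lags rather than computing a single coefficient reveals delayed hydrological responses that a zero-lag Pearson correlation would understate.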

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Improved Cerrado wetland mapping – seasonal moisture metrics, terrain information and semantic segmentation

Authors: Felix Beer, Leila Maria Garcia Fonseca, Mateus de Souza Miranda, Hugo do Nascimento Bendini, Dr Vu-Dong Pham, Dr Sebastian van der Linden
Affiliations: Institute of Geography and Geology, University of Greifswald, Partner in the Greifswald Mire Centre, Earth Observation and Geoinformatics Division (DIOTG), General Coordination of Earth Sciences (CG-CT), National Institute for Space Research (INPE)
Peatlands are the most carbon-dense terrestrial ecosystem and play an important role in climate change mitigation. In the Brazilian Cerrado, the largest neotropical savanna region spanning over 2 million km², peatlands are part of an extensive biome-wide headwater network. They may store up to 13 % of the total Cerrado carbon stock (above-ground, below-ground and soil carbon) on less than 1 % of the total area, and they play a crucial role in supplying water to Brazil’s main river systems. Palm swamp savanna (Veredas), wet grasslands (Campos limpos úmidos) and gallery forests (Matas de Galeria) are typical vegetation types on organic soils. Land degradation following agricultural land use and climate change leads to direct and indirect negative impacts on, and degradation of, peatlands and other wetlands in the Cerrado. Degradation includes drying, soil degradation and carbon loss due to fire, vegetation change and erosion, amongst others. A good understanding of peatland and permanent wetland distribution is essential for all further assessments that are urgently needed in this context, e.g. regarding carbon stocks, carbon emissions and degradation. However, existing uncertainties in wetland distribution and area stem from the challenging delineation of wetlands. The classification of the wetland types palm swamp savanna, wet grassland and/or gallery forest consistently yields the lowest accuracies in a range of recent Cerrado-wide land cover and change mapping approaches that used machine learning (ML), e.g. MapBiomas or FIP Cerrado, especially because of small and thin patches of the wetland classes and the gradual transitions between them. Using statistical metrics that depict seasonal variations in the reflectance patterns of vegetation from satellite time series has proven very efficient for land cover mapping and monitoring.
Deep learning (DL) algorithms have proven very effective in wetland remote sensing, achieving even higher accuracies than regular ML approaches by considering spatial patterns. Bendini et al. (2021) confirmed the potential for Cerrado wetland mapping in a first study, using freely available Sentinel-2 (S2) inputs to a DL network and showing that added terrain information improves results. Building on that, we combined spectral metrics that are moisture-sensitive and reflect seasonal variability with terrain information in a U-Net, a well-established convolutional network for segmentation tasks. The model was trained and tested in the Jalapão region, eastern Tocantins state, with the protected ecological station Serra Geral do Tocantins (A1) at its centre. To assess the model’s improvement and transferability, we further applied the trained model to two other regions in the Cerrado. These regions cover parts of southwestern Bahia, northeastern Goiás and northwestern Minas Gerais, with the National Park Grande Sertão Veredas at its centre (A2), and parts of northern Goiás state, with the National Park Chapada dos Veadeiros at its centre (A3). All available S2 scenes for the year 2021 were downloaded and processed to Analysis-Ready Data with the Framework for Operational Radiometric Correction for Environmental monitoring (FORCE), which included cloud detection, co-registration, radiometric correction, resolution merging and data cubing. Medians were then calculated for all bands and a set of indices. Two combinations of bands were tested: 1) NIR, NDWI, MNDWI and slope (MNMs) and 2) NIR, NDWI, MNDWI, NDMI and slope (MSMNs). For both combinations, a stack of wet/dry season medians (2nd quarter/August-September) and yearly quartiles was created, resulting in four different datasets with stacked bands (MNMs_yearly/MNMs_seasonal; MSMNs_yearly/MSMNs_seasonal).
We mapped two permanent wetland types that occur in valleys of the Cerrado, depending on the geomorphological development stage of the valley and on water availability. Class 1 includes wet grass-/shrubland or swamp savanna, also called Vereda in Portuguese. This class is characterized by grasses and sedges, herbaceous plants and/or shrubs, all adapted to temporary to permanent water saturation of the soil. The palm M. flexuosa is often characteristic of these swamp savannas, but not necessarily present. Class 2 refers to gallery or riparian forests. These often occur in the central parts of the valleys along running waters and can be temporarily to permanently flooded and swampy. For model training, all other land cover types were aggregated into a class “background”. The U-Net model was implemented in Python using the Keras TensorFlow package. The original band stacks covering A1 were subsetted into a total of 2400 images of 256x256 pixels each. The dataset was randomly split into training and testing subsets containing 80 % and 20 % of the samples, respectively. Data augmentation was applied to the training set. Categorical cross-entropy was used as the loss function, with the Stochastic Gradient Descent (SGD) optimizer and learning-rate adaptation during training. The classification of the two transfer areas A2 and A3 was validated with an independent data point collection derived from literature, field work, expert judgement of high-resolution satellite imagery and randomly created points over the PRODES land use map. All trained models produced high overall F1 scores of 0.97 on the testing dataset in A1, with very small differences between band combinations and time periods. MNMs combinations show higher recall, and MSMNs combinations show higher precision, for the wetland classes.
This result aligns with the visual impression that less dryland area is misclassified as wetland in the MSMNs classifications, while the MNMs classifications cover more of the actual wetland area. Based on the testing dataset, all models performed better in classifying wetland classes 1 and 2 than the MapBiomas (F1 score: 0.95) and FIP Cerrado maps (F1 score: 0.91). Visual inspection shows a higher spatial homogeneity and consistency of the wetland areas outlined in this study compared to the MapBiomas and FIP Cerrado land cover products. Transferring the trained models and validating the classifications yields consistent delineations of wetland areas in A2 across models; F1 scores are slightly lower, at 0.90-0.91. Certain pivot irrigation systems are misclassified as gallery forest by the seasonal models; the yearly variation metrics show this misclassification pattern to a significantly lower extent. Forest plantations are misclassified as gallery forest by all models, but the misclassification is lower in the seasonal models. Visual inspection confirms the result from the training region that the seasonal models better delineate the wetland areas themselves, while the yearly-metrics models tend to classify less of the actual wetland area, appearing to be more conservative. The outlined wetland area in A3 varies significantly between classes and models, and the actual extent of class 1 appears to be consistently overestimated. It appears that classification accuracy decreased in regions with different environmental parameters (e.g. soil types/vegetation). The Cerrado is subdivided into 19 ecoregions; fine-tuning the model with regional training data would improve segmentation results and allow more accurate wetland mapping across the Cerrado. We were able to map wetlands using a U-Net model with high accuracies that decreased only slightly when the model was applied to other regions.
Both wetland types were delineated more consistently and more accurately than existing land cover products do.
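The data-preparation steps described above (tiling multi-band stacks into 256x256 patches and randomly splitting them 80/20 for U-Net training and testing) can be sketched as follows; the array sizes and band names are taken from the abstract, but the data here are empty placeholders.

```python
import numpy as np

def tile_patches(stack, size=256):
    """Cut a (bands, H, W) stack into non-overlapping size x size patches."""
    bands, h, w = stack.shape
    patches = [stack[:, i:i + size, j:j + size]
               for i in range(0, h - size + 1, size)
               for j in range(0, w - size + 1, size)]
    return np.stack(patches)

# Illustrative 5-band stack (e.g. NIR, NDWI, MNDWI, NDMI, slope)
stack = np.zeros((5, 1024, 1024), dtype=np.float32)
patches = tile_patches(stack)               # shape (16, 5, 256, 256)

# Random 80/20 train/test split, as used for the U-Net in the abstract
rng = np.random.default_rng(42)
idx = rng.permutation(len(patches))
n_train = int(0.8 * len(patches))
train, test = patches[idx[:n_train]], patches[idx[n_train:]]
print(train.shape[0], test.shape[0])        # → 12 4
```

The patch tensors would then be fed, with matching label masks, to a segmentation model such as the Keras U-Net mentioned in the abstract.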

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Monitoring Peatland Water Table Depth In Scotland Using Sentinel-1 SAR Data and Machine Learning

Authors: Dr Morgan Simpson, Dr Armando Marino, Dr Cristian Silva-Perez, Professor Peter Hunter, Professor Jens Subke-Arne
Affiliations: University Of Stirling, Keen AI
Peatlands are an important ecosystem for regulating global carbon emissions due to their ability to both sequester and store carbon. One fifth of Scotland is covered by peatland ecosystems; however, approximately 80% of Scotland's peatlands are degraded, which in turn causes CO2 emissions. Typically, measurements of water table depth (WTD) and soil moisture (SM) are used to understand the health of peatlands and the impacts of restoration. Traditionally, these measurements of WTD and SM are made in the field. However, such methods are time-consuming and costly, and the spatially complex and variable nature of peatlands and their hydrological regimes presents a challenge for acquiring representative data. Peatlands often cover large expanses, but water table depths can vary over relatively small spatial scales, making it difficult to obtain representative measurements of change at the landscape or ecosystem level. Synthetic Aperture Radar (SAR) is a coherent microwave imaging method capable of monitoring in near all-weather conditions, regardless of light conditions and cloud cover. SAR is particularly sensitive to surface roughness, target geometry and the dielectric properties of the target, all of which can be utilised to derive water content. This study utilises Sentinel-1 SAR data, in combination with a gradient boosting machine learning model, to estimate water table depth across multiple peatland sites in Scotland. Gradient boosting works by sequentially adding predictors to an ensemble, each one correcting its predecessor. The full Sentinel-1 archive of Single Look Complex (SLC) imagery from 2015-2024 was used for this study. An ancillary dataset was also used to obtain other metrics for machine learning input and validation; this included Copernicus European Centre for Medium-Range Weather Forecasts Reanalysis v5 (ERA5) data and in-situ readings from data loggers with measurements up to 2022.
The in-situ loggers were located across multiple peatland sites with varying water table depths and vegetation characteristics. Several methods of splitting the data into training and test sets were investigated. Randomly splitting the datasets for training and validation produced over-optimistic results. Splitting the data geographically caused a discrepancy in the number of loggers between sites (i.e. sites with 30 in-situ loggers vs sites with one). The best method was to split the dataset temporally at a fixed date: data from before 1 June 2020 were used for training and data from after that date for testing. This method reduced the bias of randomly splitting the data while providing the model with as much data diversity as possible. Using various metrics from ERA5 data (including total evaporation, volumetric soil water layer and leaf area index), results show that our machine learning method can provide accuracies of ~80% for water table depth estimation, depending on the study site, with the highest errors observed at either very low (low surface connectivity) or very high (surface inundation) water table depths. Importantly, the model performed robustly when applied across the large number of peatland sites.
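The core ideas above (each boosting stage fitting the residuals of its predecessors, and a temporal rather than random train/test split) can be sketched with a minimal stump-based booster; the features, cutoff index and model settings are illustrative assumptions, not the study's configuration or its actual gradient boosting implementation.

```python
import numpy as np

class StumpBooster:
    """Minimal gradient boosting for regression: each depth-1 tree (stump)
    fits the residuals of the ensemble built so far."""
    def __init__(self, n_estimators=100, learning_rate=0.1):
        self.n_estimators, self.lr = n_estimators, learning_rate
        self.stumps, self.base = [], 0.0

    def _fit_stump(self, X, r):
        best = None
        for j in range(X.shape[1]):
            for t in np.unique(X[:, j]):
                left = X[:, j] <= t
                if left.all() or not left.any():
                    continue
                lv, rv = r[left].mean(), r[~left].mean()
                err = np.sum((r[left] - lv) ** 2) + np.sum((r[~left] - rv) ** 2)
                if best is None or err < best[0]:
                    best = (err, j, t, lv, rv)
        return best[1:]

    def fit(self, X, y):
        self.base = y.mean()
        pred = np.full(len(y), self.base)
        for _ in range(self.n_estimators):
            j, t, lv, rv = self._fit_stump(X, y - pred)  # fit the residuals
            pred += self.lr * np.where(X[:, j] <= t, lv, rv)
            self.stumps.append((j, t, lv, rv))
        return self

    def predict(self, X):
        pred = np.full(len(X), self.base)
        for j, t, lv, rv in self.stumps:
            pred += self.lr * np.where(X[:, j] <= t, lv, rv)
        return pred

# Temporal split as in the abstract: train before a cutoff, test after it
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 4))            # stand-ins for SAR + ERA5 metrics
y = 2 * X[:, 0] - X[:, 2] + rng.normal(0, 0.1, 300)   # synthetic WTD proxy
cutoff = 200                             # index playing the role of a date
model = StumpBooster().fit(X[:cutoff], y[:cutoff])
rmse = np.sqrt(np.mean((model.predict(X[cutoff:]) - y[cutoff:]) ** 2))
print(round(float(rmse), 2))
```

In practice a tuned library implementation would be used instead, but the residual-fitting loop is the mechanism the abstract describes.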

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: From the Arctic tundra to temperate peatlands: Improving net ecosystem CO₂ exchange modelling for Irish peatland ecosystems

Authors: Dr. Wahaj Habib, Dr. Marco Girardello, Dr. Matthew Saunders, Dr. John Connolly
Affiliations: School of Natural Sciences, Geography Discipline, Trinity College Dublin, School of Natural Sciences, Botany Discipline, Trinity College Dublin
Peat soils cover about 23% of Ireland's terrestrial landscape and account for almost three-quarters of its soil organic carbon (SOC) stock. Given their significant role in carbon storage, sequestration and, in turn, climate regulation, understanding the carbon dynamics of these ecosystems is crucial. This highlights the need for accurate, scalable models to estimate carbon dioxide (CO₂) Net Ecosystem Exchange (NEE) across these ecosystems. These models are also crucial for understanding correlations among climatological, environmental, and biophysical factors. To address this, this study builds on prior research that modelled NEE in the Arctic tundra using air temperature, Leaf Area Index (LAI), and Photosynthetically Active Radiation (PAR) as key drivers. Our work refines these methods and extends the model's application to Irish peat soils. The primary objective is to enhance the accuracy of the original Arctic NEE model (PANEEx) and upscale it using high-resolution satellite data. The refined model incorporates Sentinel-1 and Sentinel-2 satellite data alongside Moderate Resolution Imaging Spectroradiometer (MODIS) PAR estimates to assess NEE dynamics across Ireland’s peatlands. The model parameterisation is informed by Light Response Curve (LRC) metrics derived from in situ Eddy Covariance Flux Tower (ECFT) measurements. Preliminary results suggest that high-resolution remote sensing offers a more accurate representation of Ireland’s spatial variability in CO₂ exchange within peat soil ecosystems. These findings underscore the potential of remote sensing tools to monitor and report on Ireland’s peat soil carbon fluxes, aligning with national and international climate targets.
The findings from this study will offer valuable insights for ecosystem monitoring, reporting, and policy, supporting climate targets by informing sustainable management of various ecosystems and verifying carbon budgets in line with national, European Union, and international climate commitments.
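Light-response-curve parameterisation of NEE is commonly done with a rectangular-hyperbola (Michaelis-Menten-type) model. The sketch below uses that generic textbook form with made-up parameter values; it is not the PANEEx formulation itself.

```python
import numpy as np

def nee_light_response(par, alpha, gpmax, reco):
    """Rectangular-hyperbola light response curve:
    NEE = Reco - (alpha * PAR * GPmax) / (alpha * PAR + GPmax),
    where alpha is the initial quantum yield, GPmax the maximum gross
    photosynthesis and Reco ecosystem respiration. Negative NEE denotes
    net CO2 uptake."""
    gpp = (alpha * par * gpmax) / (alpha * par + gpmax)
    return reco - gpp

# Illustrative PAR range (umol photons m-2 s-1) and made-up parameters
par = np.linspace(0, 2000, 5)
nee = nee_light_response(par, alpha=0.02, gpmax=12.0, reco=3.0)
print(np.round(nee, 2))   # NEE falls from Reco towards light saturation
```

Fitting alpha, GPmax and Reco to flux-tower NEE-PAR pairs, then driving the fitted curve with satellite PAR and vegetation metrics, is the general upscaling pattern the abstract describes.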

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Close range hyperspectral estimation of northern peatland moisture content across climate zones and trophic levels

Authors: Susanna Karlqvist, Jussi Juola, Aarne Hovi, Sini-Selina Salko, Iuliia Burdun, Miina Rautiainen
Affiliations: Aalto University
Peatlands play an important role in the global carbon cycle despite their limited geographic extent, with northern peatlands alone storing nearly twice as much carbon as all global living forests combined. These crucial ecosystems depend heavily on waterlogged conditions and moisture levels to maintain their carbon-storing capacity. However, peatland moisture conditions face increasing threats from anthropogenic activities, such as drainage for agriculture and forestry, and climate change-induced alterations in temperature and precipitation patterns. These threats risk transforming peatlands from carbon sinks into carbon sources, highlighting the critical importance of moisture monitoring for identifying vulnerable areas and evaluating restoration efforts. While satellites and airborne sensors can provide extensive coverage of remote peatland regions, they require detailed ground-level validation to achieve their full potential. This validation, achieved through precise close-range reference measurements, has become particularly important with the advent of new hyperspectral satellite missions such as CHIME. Reference data can be acquired through both laboratory measurements of key peatland species, such as Sphagnum mosses, and field measurements, thereby enabling enhanced monitoring of peatland moisture dynamics. Previous research on estimating peatland moisture content or water table levels has often been limited in scope, typically focusing on either few isolated sites or a narrow range of peatland species. Few studies have evaluated optimal remote sensing methods for accurate moisture assessment across diverse northern peatlands and species varieties. In our research, we tested methods to estimate peatland moisture content using close-range hyperspectral field measurements collected from 13 northern peatlands spanning from Hemiboreal (57.644°N) to Arctic regions (68.884°N). 
We complemented these measurements with a comparative analysis of moisture estimation methods for laboratory-measured pure Sphagnum species. The laboratory study was conducted as a drying experiment, enabling measurements across a variety of moisture conditions. Our laboratory findings revealed that classifying Sphagnum species by habitat enabled more accurate moisture estimation, leading us to test estimation methods across trophic levels in our field data analysis. We examined multiple analytical techniques for moisture estimation, including spectral moisture indices, continuum removal, the optical trapezoid model (OPTRAM), smoothed reflectance spectra, and continuous wavelet transformed spectra. Our results demonstrate that full reflectance and continuous wavelet transformed spectra show particular promise for moisture content estimation, while spectral moisture indices prove less reliable for detecting moisture levels across different peatlands and trophic levels.
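Of the techniques listed, the optical trapezoid model (OPTRAM) normalises a shortwave-infrared transform between NDVI-dependent dry and wet edges. A minimal sketch of the published formulation, with made-up edge parameters and pixel values rather than anything from this study:

```python
import numpy as np

def optram_moisture(swir, ndvi, i_d, s_d, i_w, s_w):
    """Normalised moisture (0 = dry edge, 1 = wet edge) from the optical
    trapezoid model (OPTRAM). The dry/wet edge parameters (intercepts i_d,
    i_w and slopes s_d, s_w) are fitted per scene from the NDVI-STR scatter."""
    str_ = (1.0 - swir) ** 2 / (2.0 * swir)   # shortwave-infrared transform
    dry = i_d + s_d * ndvi                    # dry-edge STR at this NDVI
    wet = i_w + s_w * ndvi                    # wet-edge STR at this NDVI
    return np.clip((str_ - dry) / (wet - dry), 0.0, 1.0)

# Illustrative pixel reflectances and edge parameters (hypothetical values)
swir = np.array([0.30, 0.15, 0.08])           # drier pixels reflect more SWIR
ndvi = np.array([0.4, 0.4, 0.4])
w = optram_moisture(swir, ndvi, i_d=0.2, s_d=0.5, i_w=2.0, s_w=3.0)
print(np.round(w, 2))   # moisture increases as SWIR reflectance drops
```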

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: SAR and InSAR applied to temperate peatlands: new insights on links between remote sensing estimates and ecohydrological parameters

Authors: Alexis Hrysiewicz, Jennifer Williamson, Chris D Evans, Shane Donohue, A. Jonay Jovani-Sancho, Sam Dixon, Nathan Callaghan, Jake White, Justin Lyons, Joanna Kowalska, Hugh Cushnan, Eoghan P. Holohan
Affiliations: SFI Research Centre in Applied Geosciences (iCRAG), UCD School of Earth Sciences, UK Centre for Ecology & Hydrology, UCD School of Civil Engineering, School of Biosciences, University of Nottingham, Natural England, Natural Resources Wales, RPS Group
Peat soils are known to sequester vast quantities of carbon, with 644 gigatonnes (Gt), or 20-30 % of global soil carbon, stored in peat despite it covering only 3-5 % of the land area. In Europe, peat soils cover about 530,000 km² (5 %) and hold around 42 Gt of carbon. Links proposed recently between tropical peatland greenhouse gas (GHG) emissions and peat-surface displacements, as estimated remotely by Interferometry of Synthetic Aperture Radar (InSAR), could provide a basis for estimating peatland GHG emissions on a global scale via low-cost remote sensing techniques. In addition, recent studies propose that maps and time series of apparent peatland surface motions derived from satellite-based SAR/InSAR are a proxy for ecohydrological peat parameters (i.e., groundwater level and soil moisture). However, links between SAR and InSAR estimates and peat ecohydrological parameters remain uncertain for temperate bogs, and until recently there has been a lack of ground validation of these apparent surface motions at peatlands. The ESA Living Planet Fellowship project named RaiPeat_InSAR aimed to fill this knowledge gap via a systematic analysis of SAR/InSAR products from Sentinel-1 C-band data (intensity maps, interferograms, coherence maps and temporal evolutions of displacements) for well-studied Irish and British bogs. Using various in-situ measurements (peat surface movement, groundwater levels, soil moisture, weather conditions, etc.), we analysed the linkages between SAR/InSAR estimates and ecohydrological peat parameters. In our first study, we demonstrated that the InSAR-derived VV-polarisation coherence and displacements are not affected by vegetation changes caused by the wildfire in June 2019. In contrast, the VV-polarisation SAR intensity shows an increase, which can be linked to vegetation removal.
In-situ data show that the InSAR coherence is directly related to soil moisture changes, from which it can be interpreted that the satellite-derived C-band radar waves penetrate through the 10-20 cm thick mossy vegetation layer and into the upper few centimetres of the underlying peat. In our second study, we show that InSAR-derived surface motions are very similar to peat surface displacements measured in-situ. A modified InSAR processing approach applied to ascending and descending acquisitions spanning May 2015 to September 2021 indicates that the peat surface of Cors Fochno (a raised bog in the UK) is subsiding at the centre and rising at the edges (-5 mm/yr to +5 mm/yr), while the peat surface of Cors Caron (a raised bog in the UK) is mostly subsiding (max. -8 mm/yr). Both bogs are also affected by annual surface-level oscillations of 10-30 mm amplitude (known as “bog breathing”). The InSAR data capture well the amplitude and frequency of the peat-surface oscillations measured in-situ using a novel camera-based method, with Pearson correlation coefficients >0.8 and misfits of <5-7 mm, respectively. Furthermore, the InSAR-derived ground motions follow the in-situ measured groundwater table levels in a ratio of roughly 1:10. InSAR-derived displacements therefore appear to be an efficient proxy for groundwater level changes. In our third study, we undertook a critical analysis of the capacity for upscaling our results, as supported by the recently released European Ground Motion Service (EGMS) of the Copernicus Land Monitoring Programme. Although the displacement rates appear to be consistent with the in-situ data, we show that the EGMS results suffer from an underestimation of larger annual displacement oscillations (> ±20 mm). On blanket bogs, such displacement oscillations cannot be captured by the EGMS or site-scale InSAR datasets due to the very low amplitudes (<5-10 mm) of the oscillations.
On fens and associated agricultural peatlands, EGMS and site-scale computations do not provide accurate displacement measurements, due to very low InSAR coherence and the high annual oscillation of displacement (> ±50 mm). However, EGMS-derived measurements, combined with site-scale computations, can enable monitoring of peat surface displacements on raised and blanket bogs at continental scale. Finally, the last study in the project proposed first experiments to estimate carbon emissions by reconstructing groundwater levels from InSAR-derived displacements. For example, the ratio between peat surface displacement and groundwater level is 1:10 for the raised bogs studied (i.e. 1 mm of peat surface displacement corresponds to 1 cm of change in groundwater level). Using empirical laws and/or machine learning techniques, our preliminary results for five raised bogs show a higher Net Ecosystem Production (NEP) rate for 2018, 2020, 2021 and 2022 than for 2019, while the NEP rate was lower for 2016 and 2017. Overall, our studies confirm that SAR/InSAR products contain the key to accurate continental-scale monitoring of hydrologically driven surface motions of peat soils, and are thus a step towards estimating peat carbon emissions from large-scale remote sensing from space.
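The reported ~1:10 displacement-to-groundwater-level relation amounts to a linear proxy whose scale can be fitted by least squares. The sketch below uses synthetic series (not the project's data) with 1 mm of surface motion per 1 cm of groundwater-level change:

```python
import numpy as np

# Synthetic in-situ groundwater-level (GWL) change and InSAR LOS motion,
# related by a 1 mm (displacement) : 1 cm (GWL) scale plus noise
rng = np.random.default_rng(7)
t = np.arange(60)
gwl_cm = 5 * np.sin(2 * np.pi * t / 60)              # in-situ GWL change (cm)
disp_mm = gwl_cm + rng.normal(0, 0.2, 60)            # InSAR LOS motion (mm)

# Least-squares scale between the two series (forced through the origin)
scale = np.sum(disp_mm * gwl_cm) / np.sum(gwl_cm ** 2)   # mm per cm
gwl_est_cm = disp_mm / scale                         # reconstructed GWL proxy
print(round(scale, 1))                               # → 1.0
```

Once calibrated at instrumented sites, such a scale lets displacement maps stand in for groundwater-level change, the first step in the emission-reconstruction experiments mentioned above.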

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Assessing mire breathing patterns across Mecklenburg-Vorpommern, Germany using a Sentinel-1 SBAS approach

Authors: Luc Pienkoß, Philip Marzahn
Affiliations: University of Rostock
Monitoring the characteristics and condition of peatlands is crucial for assessing their state, especially their potential degradation status. Mire breathing is a key process that serves as a proxy for a peatland's degradation status: drained (i.e. degraded) peatlands show a weak oscillation pattern with a general, nearly linear subsidence trend, whereas more natural peatlands show more prominent mire breathing, resulting in seasonal cycles of uplift and subsidence. Quantifying peatland subsidence may provide critical insights into carbon storage dynamics and greenhouse gas emissions, making it a highly relevant topic for climate and environmental research. While the retrieval of subsidence rates is well established for artificial surfaces such as urban areas, it is hindered for natural surfaces such as peatlands. This study examines the use of interferometric time-series analysis through the MintPy SBAS approach with Sentinel-1 SAR data for monitoring large-scale peatland subsidence. The methodology was applied to peatlands covering the whole of the federal state of Mecklenburg-Vorpommern in north-east Germany between 2017 and 2024. The findings illustrate the presence of spatiotemporal subsidence trends across the entire federal state over this period. The subsidence rates observed at three examination sites ranged from -4.32 to -9.61 cm per year in the line of sight (LOS). Moreover, site-specific mire breathing patterns were identified, with amplitudes ranging from 5 cm to 15 cm in LOS. Seasonal variations in subsidence, characterized by increased subsidence rates during the summer months and partial recovery in wetter months, demonstrate the impact of hydrological changes on the dynamics of the subsidence patterns.
The outcome of the study demonstrates the efficacy of time-series analyses in capturing both long-term subsidence trends and short-term oscillatory responses, thereby contributing to the development of sustainable land management and carbon sequestration strategies. Nevertheless, further research is required to improve the reliability of the SBAS method and to validate its findings using robust and reliable in-situ subsidence data.
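Separating the long-term subsidence trend from the seasonal mire-breathing oscillation, as described above, can be sketched as a least-squares fit of a linear trend plus an annual sinusoid to the LOS time series; the rate, amplitude and 12-day sampling below are illustrative, not the study's values.

```python
import numpy as np

def fit_trend_and_seasonal(t_years, los_cm):
    """Least-squares fit of d(t) = v*t + A*cos(2*pi*t) + B*sin(2*pi*t) + c
    to a LOS series; returns the linear rate (cm/yr) and seasonal amplitude."""
    G = np.column_stack([t_years,
                         np.cos(2 * np.pi * t_years),
                         np.sin(2 * np.pi * t_years),
                         np.ones_like(t_years)])
    v, A, B, c = np.linalg.lstsq(G, los_cm, rcond=None)[0]
    return v, np.hypot(A, B)

# Illustrative series: -5 cm/yr subsidence with a 7 cm annual oscillation,
# sampled at a Sentinel-1-like 12-day interval over 7 years
t = np.arange(0, 7, 12 / 365)
los = -5.0 * t + 7.0 * np.sin(2 * np.pi * t)
rate, amp = fit_trend_and_seasonal(t, los)
print(round(rate, 2), round(amp, 2))        # → -5.0 7.0
```

Applying such a fit per pixel of an SBAS displacement stack yields separate maps of subsidence rate and mire-breathing amplitude, the two quantities the abstract reports.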

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Multi-Source Earth Observation Data for Assessing Hydrological Dynamics in Peatlands

Authors: M.Sc. Lena Krupp, Prof. Claas Nendel, B.A. Simon Seyfried, Gohar Ghazaryan
Affiliations: Leibniz Centre for Agricultural Landscape Research (ZALF), Institute of Biochemistry and Biology, University of Potsdam, Earth Observation Lab, Geography Department, Humboldt University of Berlin, Integrative Research Institute on Transformations of Human-Environment Systems (IRI THESys), Humboldt University of Berlin, Global Change Research Institute of the Czech Academy of Sciences
Peatlands are important carbon sinks and are estimated to store 30% of the world's soil organic carbon. This is due to their characteristically high water table, which prevents microorganisms from breaking down plant material and releasing CO2 into the atmosphere in the process. However, many have been drained over centuries to convert them into agricultural land, but also to extract peat for horticultural purposes or as fuel, making them net CO2 emitters. The rewetting of peatlands is a crucial component in the fight against climate change and is currently being heavily promoted in some countries. In Germany, for example, rewetting projects are being carried out as part of the national peatland conservation strategy, which is included in the Federal Action Plan on Nature-based Solutions for Climate and Biodiversity. Effective rewetting efforts necessitate enhanced monitoring of peatland hydrological conditions, particularly soil moisture and water table depth (WTD), which are fundamental to understanding peatland health and their climate-regulating functions. This study explores a data-driven approach to assess hydrological dynamics in peatlands using Sentinel-1, Sentinel-2, Landsat and in-situ WTD measurements from several degraded peatlands across the North-East of Germany. These areas have undergone diverse management practices, ranging from historical drainage for agriculture to recent efforts focused on ecological restoration through rewetting initiatives. Sentinel-1 SAR data provided backscatter information sensitive to surface moisture and vegetation structure, while the Sentinel-2 and Landsat multispectral sensors offered valuable spectral indices, such as the Normalized Difference Vegetation Index (NDVI) and the Normalized Difference Water Index (NDWI), to monitor the vegetation response to moisture changes. Land surface temperature (LST) derived from Landsat thermal data was integrated to capture the surface energy balance dynamics.
In addition, methods like the Optical Trapezoid Model (OPTRAM) were applied to estimate soil moisture and track the indirect relationships between soil moisture (SM) and WTD. Correlations between measured WTD and remote sensing parameters (VV and VH backscatter, their ratio, and spectral indices) were used to establish relationships reflecting peatland hydrology. These relationships were further utilized in a Random Forest machine learning model to predict WTD dynamics, combining SAR backscatter, spectral indices, thermal data, and OPTRAM-derived soil moisture. This approach allows for capturing non-linear interactions between variables and provides a robust framework for monitoring seasonal and inter-annual changes in peatland hydrology. The results reveal significant correlations between WTD and key remote sensing parameters, such as VV and OPTRAM, in several peat sites, highlighting the potential of integrating SAR, optical, and thermal datasets with data-driven statistical and machine learning models for peatland monitoring. By offering a scalable methodology, this work supports rewetting initiatives, informs conservation strategies, and advances climate change mitigation efforts. Future directions include refining model accuracy and expanding the approach to other peatland regions for broader applicability.
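OPTRAM, mentioned above, places a pixel's transformed SWIR reflectance (STR) between NDVI-dependent dry and wet edges. A minimal sketch follows; the edge parameters are illustrative assumptions, as in practice they are fitted to each site's NDVI-STR scatter:

```python
import numpy as np

def optram_index(red, nir, swir, i_dry=0.0, s_dry=1.0, i_wet=0.4, s_wet=2.0):
    """OPTRAM soil-moisture index W in [0, 1] (0 = dry edge, 1 = wet edge).

    i_dry/s_dry and i_wet/s_wet are intercepts and slopes of the dry and
    wet edges as linear functions of NDVI (placeholder values here).
    """
    ndvi = (nir - red) / (nir + red)
    str_ = (1.0 - swir) ** 2 / (2.0 * swir)     # transformed SWIR reflectance
    str_dry = i_dry + s_dry * ndvi
    str_wet = i_wet + s_wet * ndvi
    w = (str_ - str_dry) / (str_wet - str_dry)
    return np.clip(w, 0.0, 1.0)

# Toy reflectances (red, NIR, SWIR) for a wetter and a drier pixel
w_wet = optram_index(0.05, 0.40, 0.10)   # low SWIR -> high STR -> wetter
w_dry = optram_index(0.10, 0.35, 0.45)   # high SWIR -> low STR -> drier
```

The resulting index is unitless; relating it to WTD, as in this study, requires a separate regression step such as the Random Forest model described above.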
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: EO data for peatland monitoring: challenges and opportunities from multi-temporal SAR interferometry

Authors: Christian Bignami, Cristiano Tolomei, Lisa Beccaro, Stefano Salvi, Gerardo Lopez Saldana, Yara Al Sarrouh, Michel Bechtold, Kevin Tansey, Harika Ankathi, Susan Page, Fred Worrall, Arndt Piayda
Affiliations: Istituto Nazionale di Geofisica e Vulcanologia, Assimila Ltd, KU Leuven, University of Leicester, Durham University, Thuenen Institute of Climate-Smart Agriculture
Peatlands cover only 3–4% of the world's land area. Despite this minimal presence, peatlands are significant ecosystems, able to provide several ecosystem services, making their conservation and restoration critical for current and future generations. Indeed, peatlands play a crucial role in global environmental change processes, representing the most effective ecosystems for carbon storage. Pollution, urban development, and global warming severely affect these areas, causing high ecological stress. An increased recognition of the importance of these special habitats has encouraged the spread of monitoring and restoration studies through different approaches, from field surveys to laboratory analyses and, ultimately, remote sensing techniques. The present work illustrates the outcomes obtained during the ESA-funded WorldPeatland project, where various peatlands located in different climatic regions of the Earth (temperate and boreal) have been studied using remote sensing imagery combined with data directly acquired in the field. In particular, we show the results concerning the exploitation of multi-temporal Interferometric Synthetic Aperture Radar (InSAR) methods, i.e., the Enhanced Persistent Scatterers and Small Baseline Subset techniques, applied to stacks of Sentinel-1 SAR data from 2021 to 2024. We produced ground displacement time series and mean velocity maps, allowing us to study the behaviour of the peatlands over time and to find possible correlations with available ground data such as water level, rainfall measurements, soil moisture, and vegetation indices obtained from optical satellite images. Moreover, an in-depth investigation has been carried out testing different processing settings to understand the scattering mechanisms responsible for the SAR signal response and the measured deformation in such challenging areas.
Four peatlands have been studied in detail: Hatfield and Moor House in England, Gnarrenburg in Germany, and Degero in Sweden. The first three belong to temperate climate regions; the fourth represents the boreal climate environment. Two tropical peatlands have also been considered in the WorldPeatland project. Unfortunately, the C-band data from the Sentinel-1 mission were unsuitable for obtaining ground motion data because of the dense forest canopy overlying these peatlands. Our findings confirm that natural peatlands in temperate regions are characterised by higher interferometric coherence, since the vegetation is low and the water table is not constantly above the surface. Therefore, the interferometric products present an acceptable spatial coverage and low-noise time series. By contrast, ground motion mapping in the Degero boreal peatland is more problematic, mainly because of snow cover during the winter season, which causes phase loss and discontinuity in the InSAR observations. Our results have been compared with the data provided by the European Ground Motion Service (EGMS) and validated against corner reflector data, where available. We confirmed the high quality of the measured mean ground velocity and deformation time series. Despite the good ground motion measurement accuracies, interpreting the ground motion signals is still challenging, given the incompleteness of the spatial coverage and the complex physical and biological processes acting in peatland environments.
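Interferometric coherence, which this abstract identifies as the limiting factor over vegetated and snow-covered peatlands, is the magnitude of the complex correlation between two SAR acquisitions. A toy estimate on simulated single-look complex (SLC) samples follows; this is our own sketch, not the E-PS or SBAS implementation used in the project:

```python
import numpy as np

def coherence(s1, s2):
    """Sample coherence: |<s1 s2*>| / sqrt(<|s1|^2> <|s2|^2>), in [0, 1]."""
    num = np.abs(np.mean(s1 * np.conj(s2)))
    den = np.sqrt(np.mean(np.abs(s1) ** 2) * np.mean(np.abs(s2) ** 2))
    return num / den

rng = np.random.default_rng(0)
n = 10_000
s = rng.normal(size=n) + 1j * rng.normal(size=n)      # reference SLC samples
# A fully decorrelated copy: same amplitudes, random phase (e.g. snow cover)
decorrelated = s * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n))
coh_same = coherence(s, s)             # identical signals -> coherence 1
coh_rand = coherence(s, decorrelated)  # random phase -> coherence near 0
```

In real processing the averaging is done over a small spatial window per pixel rather than over the whole scene.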
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Developing Spectral Indicators for the Monitoring of Re-wetted Peatlands

Authors: Ariane Tepaß, Dr. Marcel Schwieder, Christina Hellmann, Sebastian van der Linden, Dr. Stefan Erasmi
Affiliations: Thünen-Institut, Universität Greifswald
Peatlands play a pivotal role in climate regulation. However, over 95% of German peatlands have been drained, mainly for agricultural use, contributing significantly to greenhouse gas (GHG) emissions. Drained peatlands release substantial amounts of CO₂, nitrous oxide, and methane, accounting for approximately 7.5% of Germany’s total GHG emissions and 44% of all emissions from agriculture and agricultural land use. Rewetting drained peatlands is a necessary mitigation measure, with the goal of halting GHG emissions and the potential to transform these areas from substantial carbon sources into sustainable sinks. Nevertheless, improper rewetting may result in conditions that are too wet or too dry, which hamper subsequent sustainable use of the areas or their carbon sink potential. To ensure the success of rewetting measures, it is crucial to implement systematic observation that captures changes in peatland vegetation and hydrological conditions. Vegetation, such as the plants Typha spp. (cattail) and Phragmites australis (common reed), can act as an ecological indicator for peatland monitoring, as both species reflect the impact of hydrological dynamics and restoration conditions. Thus, monitoring the success of rewetting measures requires consistent and accurate observation techniques for plant communities. We focus on mapping and monitoring the aforementioned key peatland plants based on Sentinel-1 and Sentinel-2 satellite time series and wetland vegetation cover fractions derived from hyperspectral satellite data. Using spectral indicators, SAR backscatter and temporal trends, we aim to characterize spatial and temporal dynamics of these species and their phenology under different rewetting regimes. Our study area includes the Peene and Trebel river basins of the federal state of Mecklenburg-Vorpommern, Germany, with varying rewetting durations and intensities. The synergy of hyperspectral and Sentinel satellite data offers opportunities for monitoring and analyzing peatland vegetation dynamics.
Hyperspectral sensors provide highly detailed spectral information, capturing fine-grained variations across numerous narrow bands and enabling the differentiation of vegetation types with similar spectral features. However, hyperspectral data is often more challenging and time-consuming to acquire and does not always cover large areas. Conversely, the Sentinel-1 and Sentinel-2 sensors, with their high temporal and spatial resolution, enable frequent and large-scale observations, capturing phenological changes and dynamic processes over time. Sentinel-1, with its synthetic aperture radar (SAR), provides data regardless of weather conditions, and Sentinel-2 delivers optical imagery across 13 spectral bands, ideal for capturing vegetation characteristics and phenological trends. Together, these datasets can bridge the gap between spectral precision and temporal-spatial coverage. In this study, we highlight the advantages of integrating data from both hyperspectral and Sentinel satellites to monitor the abundance and spatiotemporal dynamics of key peatland vegetation. We analyzed fractional cover maps of typical peatland vegetation types, which were derived by unmixing hyperspectral PRISMA datasets. The mixed hyperspectral signals were decomposed into constituent components, such as vegetation types or land use, utilizing machine learning models trained on a set of synthetically mixed training data. The fractions provide quantitative estimates of dominant vegetation types like Typha and Phragmites for each 30 x 30 m pixel, offering insights into their distribution and density in the study area. Based on the resulting fractional cover map and the integration of pixels with a high fractional vegetation cover of Typha and Phragmites, we derived spectral-temporal metrics from time series of Sentinel-1 and Sentinel-2 data. Initial results revealed distinct patterns of land surface phenology in regions dominated by Typha and Phragmites.
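The synthetically mixed training data mentioned above can be generated, under a linear-mixing assumption, by drawing random fractions that sum to one and mixing pure endmember spectra. This is a hypothetical sketch with random toy spectra; the study's actual endmember library and mixing scheme may differ:

```python
import numpy as np

def synth_mixtures(endmembers, n_samples, rng):
    """Linear synthetic mixing: spectra = fractions @ endmembers.

    endmembers: (k, bands) array of pure-class spectra.
    Returns (spectra, fractions); fractions are drawn from a flat
    Dirichlet distribution so each row sums to 1.
    """
    k = endmembers.shape[0]
    fractions = rng.dirichlet(np.ones(k), size=n_samples)
    spectra = fractions @ endmembers
    return spectra, fractions

rng = np.random.default_rng(1)
E = rng.uniform(0.0, 0.6, size=(3, 50))     # 3 toy classes, 50 toy bands
X, y = synth_mixtures(E, 1000, rng)
# X (mixed spectra) and y (known fractions) can now train a per-class
# fraction regressor, e.g. a random forest or kernel regression.
```

Real workflows typically add noise and within-class spectral variability to the mixtures so the regressor generalizes to image pixels.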
This demonstrates the capability of Sentinel data to differentiate between the two plants' phenologies. These phenological shifts highlight differences in growth and senescence, which are related to hydrological and microclimatic conditions. In turn, these conditions can be influenced by rewetting intensity, duration and success. Such insights are essential for evaluating the effectiveness of peatland restoration and for refining strategies aimed at optimizing carbon sequestration. This work underscores the potential of combining multi- and hyperspectral data as well as SAR backscatter and temporal indicators from satellite data to monitor vegetation dynamics in peatlands. Future research will focus on refining these indicators and exploring their scalability to other peatland restoration sites.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Temporal Analysis and Multi-Dimensional Fusion for Advanced Monitoring of Peatland Degradation

Authors: Harsha Vardhan Kaparthi, Dr. Alfonso Vitti
Affiliations: Sapienza Università di Roma, Università degli studi di Trento
The study presents a cutting-edge approach to monitoring peatland degradation through temporal analysis and multi-dimensional data fusion, providing a holistic framework for informed conservation efforts. By integrating spectral, Synthetic Aperture Radar (SAR), and LiDAR datasets, we employ advanced deep learning models to capture and analyze the complex dynamics of peatland ecosystems over time and across dimensions. Temporal trends in vegetation health, soil moisture, and degradation patterns are analyzed using recurrent neural networks (RNNs) and Temporal Convolutional Networks (Temporal CNNs). These models reveal long-term changes and seasonal variations, highlighting critical indicators of progressive degradation or recovery. Deep fusion models further enhance this analysis by integrating spectral, SAR, and LiDAR data, creating a comprehensive 3D view of peatland conditions. This fusion effectively combines spatial, spectral, and elevation information, providing unparalleled insights into the interactions among ecosystem components. This multi-dimensional framework is validated across diverse climate zones, demonstrating its adaptability to boreal, tropical, and temperate peatlands. The results showcase the effectiveness of combining temporal analysis with multi-source data fusion to support targeted interventions and sustainable management strategies. Our approach offers a robust toolkit for ecological monitoring, enabling high-resolution, spatio-temporal insights essential for the preservation and restoration of these critical carbon-storing ecosystems.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Integrated indicators for monitoring peatland condition using multitemporal trends

Authors: Mr Gerardo Lopez Saldana, Yara Al Sarrouh, Sam Doolin, Michel Bechtold, Stefano Salvi, Christian Bignami, Cristiano Tolomei, Lisa Beccaro, Susan Page, Fred Worrall, Kevin Tansey, Harika Ankathi, Ian Jory
Affiliations: Assimila, KU Leuven - Dept of Earth and Environmental Sciences, Istituto Nazionale di Geofisica e Vulcanologia, University of Leicester, Durham University
The ESA WorldPeatland project, a collaborative effort to enhance peatland mapping and monitoring, focuses on developing Earth observation (EO) tools to address the needs of various stakeholders. The project recognizes the significance of integrated indicators derived from multitemporal trends of hydrology, surface motions, and vegetation biophysical parameters to assess peatland condition. Integrated indicators are crucial for understanding the complex interplay of factors influencing peatland health. These indicators provide insights into the effectiveness of restoration efforts, track the impact of disturbances such as wildfires, and offer simplified assessments for a wide range of stakeholders. WorldPeatland aims to develop indicators that are sensitive to change, representative of diverse biomes where peatland is present, and offer leading insights to support proactive management decisions. The project leverages multitemporal EO data from various sources to derive these integrated indicators. Hydrological variables, such as water table depth, are essential for evaluating the efficacy of rewetting measures, providing early warnings of degradation, and assessing fire danger. WorldPeatland utilizes Sentinel-1 synthetic aperture radar (SAR), Sentinel-2 optical imagery, and the SMAP (Soil Moisture Active Passive) Level-4 soil moisture product to derive the peatland hydrology monitoring component. Monitoring peatland surface motion is critical for estimating carbon accumulation or loss, supporting GHG emission reporting as per IPCC guidelines, and assessing peatland health using water-level-dependent ground surface fluctuations. WorldPeatland employs Multi-Temporal InSAR techniques, specifically the Enhanced Persistent Scatterers (E-PS) and Intermittent Small Baseline Subset (ISBAS) algorithms, to measure ground motion, ensuring a balance between accuracy and spatial coverage over these challenging surfaces. 
Vegetation biophysical parameters, such as land surface temperature (LST), albedo, and Leaf Area Index (LAI), play a crucial role in assessing peatland function and monitoring vegetation changes over time. WorldPeatland utilizes long-term data records from MODIS and VIIRS, complemented by higher resolution Sentinel-1 and Sentinel-2 data, to track changes in these variables, supporting the assessment of peatland function and restoration progress. The open-source version of the Carbon Durham Model will be used to provide insights into the carbon budget over peatland areas. The integration of multitemporal datasets enables the development of comprehensive indicators of peatland condition. WorldPeatland aims to create indicators that combine multi-variable temporal trends and standardised anomalies to reflect peatland dynamics from a holistic perspective rather than focusing only on individual components. The time series will be detrended to remove the influence of seasonal variations and to highlight interannual variations that are not related to typical seasonal dynamics. On the detrended time series, statistical trend metrics will be computed to determine whether a trend exists in the time series of each monitoring variable. The trends of all variables can then be combined to obtain an overall understanding of the dynamics of the study area. Climatological averages are also calculated to capture the average behaviour of a variable over time. Standardised anomalies will then be generated to determine how far a specific variable deviates from its average behaviour at a particular point in time. By combining the standardised anomalies of all variables, it is possible to develop indicators that are sensitive to change and representative of different peatland types.
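The climatology and standardised-anomaly steps described in this abstract can be sketched as a per-calendar-month z-score. This is a minimal illustration on synthetic data, not the WorldPeatland implementation:

```python
import numpy as np

def standardised_anomalies(values, months):
    """Z-score each observation against its calendar-month climatology.

    values: 1-D observations (e.g. monthly water table depth).
    months: calendar month (1-12) of each observation.
    """
    values = np.asarray(values, dtype=float)
    months = np.asarray(months)
    z = np.empty_like(values)
    for m in np.unique(months):
        sel = months == m
        mu, sd = values[sel].mean(), values[sel].std()
        z[sel] = (values[sel] - mu) / sd if sd > 0 else 0.0
    return z

# Five years of synthetic monthly data: seasonal cycle plus a drying trend
months = np.tile(np.arange(1, 13), 5)
t = np.arange(60)
wtd = 10.0 * np.sin(2.0 * np.pi * t / 12.0) - 0.05 * t
z = standardised_anomalies(wtd, months)
```

Because each month is standardised against its own climatology, seasonal variation drops out and the remaining signal reflects interannual departures, which can then be combined across variables into a composite indicator.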
These indicators will be accessible through user-focused online portals and tools, ensuring their applicability for a broad range of stakeholders, including scientists, policymakers, and restoration practitioners. By integrating multitemporal trends of hydrology, surface motions, and vegetation biophysical parameters, WorldPeatland strives to provide robust tools and indicators that support informed decision-making for peatland conservation, restoration, and sustainable management.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Mapping Global Organic Soils Drainage and Emissions: Leveraging Earth Observation-based Geospatial Data with an Intergovernmental Panel on Climate Change Framework

Authors: Erin Glen, David Gibbs, Melissa Rose, Angela Scafidi, Nancy Harris, Benjamin Wielgosz
Affiliations: World Resources Institute
Peatland drainage contributes approximately 6% of global greenhouse gas (GHG) emissions, yet it remains underrepresented in many national and regional GHG inventories. While peatland drainage and degradation have occurred in the Northern Hemisphere for centuries, drainage in tropical regions has accelerated rapidly in the 21st century. Over the past three decades, the majority of Southeast Asia’s 25 million hectares of tropical peatlands have been deforested and drained, leading to significant GHG emissions (Hoyt et al., 2021). The degradation and conversion of peatlands not only release substantial carbon emissions but also disrupt critical ecosystem services and increase the likelihood of catastrophic peat fires. Despite their importance, existing datasets delineating peatland drainage are sparse, coarse, or localized, and no comprehensive global dataset currently maps peatland extent, drainage, and associated emissions. We address this gap by leveraging a recently developed 30-meter global organic soils map with the best available regional and global contextual data for estimating drainage and emissions. Using a geospatial data integration framework, we estimate emissions from peatland drainage from 2000–2020, following Intergovernmental Panel on Climate Change Wetlands Supplement (2013) guidelines. Our first iteration employs IPCC Tier 1 emission factors combined with spatial data on climate zones, soil nutrient status, land cover change, plantation types, drainage infrastructure, road networks and peat extraction areas to delineate global peatland drainage and quantify associated emissions. The resulting product is a global 30-meter resolution map characterizing drainage and conversion types and their emissions from 2000–2020. Because peatland definitions vary across geographies and jurisdictions, we provide results for all organic soils and empower users to subset this data to local peatland definitions. 
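The Tier 1 logic underlying such estimates multiplies the area of each drained-land stratum by a stratum-specific emission factor and sums the result. A toy illustration follows; the strata, areas, and emission factors below are placeholders, not official IPCC values:

```python
# IPCC Tier 1 sketch: emissions = sum over strata of
#   activity area (ha) x emission factor (t CO2 / ha / yr).
# All numbers here are illustrative placeholders.
AREAS_HA = {
    "drained_cropland": 120_000,
    "drained_forestry": 80_000,
}
EF_T_CO2_PER_HA_YR = {
    "drained_cropland": 14.0,
    "drained_forestry": 2.6,
}

annual_emissions_t = sum(
    AREAS_HA[stratum] * EF_T_CO2_PER_HA_YR[stratum] for stratum in AREAS_HA
)
```

In the mapped product described above, the strata come from intersecting the organic-soils map with climate zone, nutrient status, and land cover change layers, so the sum runs per pixel rather than per country-level stratum.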
This flexible modeling framework is designed for iterative updates, incorporating improved datasets and refined methodologies for estimating GHG emissions from peatland drainage. Potential advancements for future iterations include assessment of drainage infrastructure intensity, incorporation of IPCC Tier 2 methods and emission factors, temporal refinement within the 2000-2020 period, and expansion of coverage beyond 2020. By providing a 30-meter resolution, globally consistent dataset, this work supports efforts to monitor, manage, and restore peatlands, offering critical insights for addressing climate change and preserving ecosystem services.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Northern Wetland Classifications and Carbon Cycle Applications: Translating Concepts Into Spatial Data

Authors: Marianne Böhm, Prof. Gustaf Hugelius, Prof. Stefano Manzoni
Affiliations: Stockholm University, Bolin Centre for Climate Research
Despite progress in research on Arctic and Boreal carbon fluxes, there are still large uncertainties in carbon budget estimates. Improved land cover mapping is needed to decrease these uncertainties. Issues with applications of current maps for carbon budgets include class differentiation, scale issues and double counting. This introduces errors in the upscaling of measured greenhouse gas emissions and, as a result, in the development, parameterisation and evaluation of models. In particular, global land cover maps have poor mapping accuracy for northern wetland ecosystems. Wetlands are prevalent at high latitudes and are key players in the carbon cycle given their high vulnerability to climate change. Despite their central role, commonly used wetland maps are often spatially coarse, suffer from a lack of thematic detail, or do not draw class borders along differences in the carbon cycle. This contribution presents results from a review of national wetland classification and inventory systems across the Arctic-Boreal region. Specifically, we explore which distinctions are made in existing systems, and how they could be applied at different scales for carbon-cycle applications across the Arctic-Boreal domain. Furthermore, we identify which data inputs would enable the success of this effort, point to gaps in the existing datasets, and open the discussion on how to fill them.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Integrating Sentinel-1, Sentinel-2, and SMAP Level-4 Soil Moisture Data for Peatland Hydrology Monitoring

Authors: Michel Bechtold, Kevin Tansey, Harika Ankathi, Gerardo Lopez Saldana, Yara Al Sarrouh, Iuliia Burdun, Lucas Boeykens, Ullrich Dettmann, Fred Worrall, Gabriëlle De Lannoy
Affiliations: KU Leuven, University of Leicester, Assimila Ltd, Aalto University, Thuenen Institute, Durham University
Peatlands play a critical role in global carbon and water cycles as well as regional ecosystem services. However, monitoring peatland hydrology remains challenging due to the complex surface properties and hydrodynamics in these areas. This study presents an integrated approach combining Sentinel-1 synthetic aperture radar (SAR), Sentinel-2 optical imagery, and the SMAP (Soil Moisture Active Passive) Level-4 soil moisture product to enhance peatland hydrology monitoring. The approach leverages the peatland-specific hydrological output of the SMAP Level-4 soil moisture product (SMAP L4_SM, Reichle et al. 2023), which includes a specialized model parameterized for peatland processes (PEATCLSM, Bechtold et al. 2019). Sentinel-1 and Sentinel-2 are employed to downscale the 9 km resolution SMAP L4_SM global hydrological estimates to a finer spatial resolution of 100 m, improving their applicability for monitoring spatial variability within and across specific peatlands. To address the complexity of backscatter-to-water level relationships in Sentinel-1 data, the SMAP L4_SM product is utilized to resolve ambiguities. In particular, backscatter increases with rising water levels as long as they remain below ground, while backscatter was mostly found to decrease with increasing inundation fractions due to specular reflection. A change detection approach using SMAP L4_SM identifies the water level regime, enabling the assessment of inundation durations and of periods when backscatter can track subsurface water level variations. The optical trapezoid model (OPTRAM) is applied to Sentinel-2 data at 20 m resolution. At this resolution, the SMAP L4_SM product is used to identify the pixels with the highest soil moisture sensitivity. These pixels are then used to aggregate the soil moisture index to the same resolution as the Sentinel-1 data. Both soil moisture indices are rescaled to the peatland-specific variables of the SMAP L4_SM product.
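The rescaling of a satellite-derived index to the SMAP L4_SM variables could, in its simplest form, be a mean-and-variance match. This is a stand-in sketch with synthetic data; operational products often use full CDF matching instead:

```python
import numpy as np

def linear_rescale(index, reference):
    """Match the mean and standard deviation of `index` to `reference`.

    A simple linear rescaling: standardise the index, then map it onto
    the reference product's first two moments.
    """
    index = np.asarray(index, dtype=float)
    reference = np.asarray(reference, dtype=float)
    z = (index - index.mean()) / index.std()
    return reference.mean() + z * reference.std()

rng = np.random.default_rng(3)
idx = rng.uniform(0.0, 1.0, 500)          # e.g. a unitless OPTRAM index series
ref = rng.normal(-0.25, 0.1, 500)         # e.g. a model water-level variable (m)
scaled = linear_rescale(idx, ref)         # idx expressed in the reference's units
```

CDF matching additionally aligns the full distribution (not just mean and variance), which matters for skewed peatland water-level series.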
In the last step, a bias correction is performed to ensure that the total time of inundation indicated by the product matches that derived from the Sentinel-1 data. The product will be provided with uncertainty information for each pixel. The downscaled datasets are validated across boreal, temperate, and tropical peatlands using time series of in situ water level data and surface water maps from high-resolution optical imagery. Preliminary validation results highlight considerable spatial variability in the skill of the new product. We discuss how this variability correlates with site characteristics and the uncertainty estimates of the product. Our approach targets a scalable and transferable method for monitoring peatland hydrology, addressing critical needs in management and conservation. Understanding hydrological state variables is essential due to their primary role in regulating ecosystem services. While SMAP L4_SM may not be directly useful for stakeholders at the management scale, the downscaled product holds significant potential for management applications. This method could become an operational tool for researchers and practitioners across diverse peatland research and application fields. This work is part of the ESA WorldPeatland project. References: Bechtold, M., De Lannoy, G. J. M., Koster, R. D., Reichle, R. H., et al.: PEAT-CLSM: A specific treatment of peatland hydrology in the NASA Catchment Land Surface Model, Journal of Advances in Modeling Earth Systems, 11, 2130–2162, 2019. Reichle, R. H., Liu, Q., Ardizzone, J. V., Bechtold, M., Crow, W. T., De Lannoy, G. J. M., Kimball, J. S., and Koster, R. D.: Soil Moisture Active Passive (SMAP) Project Assessment Report for Version 7 of the L4_SM Data Product, NASA Technical Report Series on Global Modeling and Data Assimilation, 64, 87 pp, 2023.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Four decades of peatland monitoring (1985-2022) in the Baltic Sea region based on extended annual land cover products from a Landsat and Sentinel-2 data cube

Authors: Sebastian van der Linden, Viet Nguyen, Vu Dong Pham, Cosima Tegetmeyer, Fabian Thiel, Farina de Waard, Alexandra Barthelmes
Affiliations: University of Greifswald, Institute of Geography and Geology, Partner in the Greifswald Mire Centre, University of Greifswald, Institute of Botany and Landscape Ecology, Partner in the Greifswald Mire Centre
Peatlands store more carbon than any other ecosystem, but their drainage and the extraction of peat have caused severe degradation, e.g. in northern Europe’s temperate and boreal climate zones. Peatland degradation causes huge greenhouse gas (GHG) emissions, land surface subsidence, water eutrophication and biodiversity loss. Though 500,000 km² of degraded peatlands cover only 0.3% of the Earth's total land area, they contribute a disproportionate 5% of global GHG emissions. Both the extraction of peat from drained peatlands and the change of land use towards agriculture or forestry on the formerly wet land cause such GHG emissions. Nowadays, mitigating GHG emissions from peatlands through ecological restoration or sustainable agricultural use under wet conditions receives increasing attention. However, to do this successfully, the current and past use of the peatlands and their land use change trajectories need to be understood. Earth observation (EO) can substantially support this. Mapping current and past peatland degradation and monitoring the effects of peatland restoration require land cover (LC) products with high spatial and temporal resolution and a very high level of thematic detail. The Baltic Sea Region Land Cover plus (BSRLC+) product (Pham et al., Sci. Data, 2024, DOI: 10.1038/s41597-024-04062-w) provides such information beyond most other available Earth observation products. It covers the Baltic Sea region (BSR), i.e., Denmark, Estonia, Latvia, Lithuania, the north of Poland and Germany, the south of Sweden and Finland, plus coastal regions of Russia. BSRLC+ has 30 m spatial resolution with annual temporal resolution between 2000 and 2022 and tri-annual resolution from 1985 to 1997. It extends the class schemes of regular large-area LC products such as WorldCover by 8 crop types and two peatland classes: exploited bog and unexploited bog.
Based on this unique data set, we performed a comparative study of land cover changes for peatland in countries within the BSR. Using the polygons from the Global Peatland Map 2.0, we analysed and quantified (i) the LC change trajectories from and to peat extraction in bogs, as well as the duration of extraction periods, and (ii) the LC trajectories for drained peatland areas under agricultural land use. With our work, we showcase how EO data help monitor the impact of land use changes in peatlands and thereby provide information to support restoration efforts, e.g., under the EU’s new Nature Restoration Law.
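One ingredient of such trajectory analyses, counting per-pixel extraction years from an annual land cover stack, can be sketched as follows. The class code and array shapes are illustrative, not those of the BSRLC+ product:

```python
import numpy as np

def extraction_durations(annual_stack, extract_class):
    """Per-pixel number of years mapped as the peat-extraction class.

    annual_stack: (years, rows, cols) array of integer class codes.
    Returns a (rows, cols) array of year counts.
    """
    return (np.asarray(annual_stack) == extract_class).sum(axis=0)

# Toy 3-year, 2x2 stack; class 5 stands in for 'exploited bog'
stack = np.array([[[5, 1], [5, 5]],
                  [[5, 1], [1, 5]],
                  [[1, 1], [1, 5]]])
dur = extraction_durations(stack, 5)
# dur is [[2, 0], [1, 3]]: e.g. the lower-right pixel was mapped as
# exploited bog in all three years.
```

Full trajectory analysis would additionally record the classes before and after each extraction period to characterise the from/to transitions described above.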
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A.03.06 - POSTER - Exploring ground-based, airborne and satellite observations and concepts for the carbon cycle

The remote sensing community is abuzz with developing innovative concepts for the carbon cycle to collect crucial data at different spatial and temporal scales required to study and improve understanding of underlying geophysical processes. The observations from new airborne and ground-based instruments play a vital role in developing new applications that benefit from integrated sensing.

These new concepts need to go hand in hand with a mathematical understanding of the theoretical frameworks, including uncertainty estimates. This session invites presentations on:
- innovative observations of geophysical products focusing on the carbon cycle
- innovative applications based on integrated sensing
- feedback and lessons learned from ongoing or planned developments, as well as from first ground-based or airborne campaigns

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Impact of cluster configuration of forest inventory plots on representing AGB density within map units

Authors: Dr. Natalia Málaga, Dr. Sytze de Bruin, Dr. Andrew J. Lister, Dr. Daniela Requena Suarez, Dr. Arnan Araza, Martin Herold
Affiliations: Helmholtz Center Potsdam German Research Centre for Geosciences, Section 1.4 Remote Sensing and Geoinformatics, Laboratory of Geo-Information Science and Remote Sensing, Wageningen University and Research, USDA Forest Service, Environmental Systems Analysis, Wageningen University and Research
While National Forest Inventories (NFIs) serve as the primary data source for country-level forest aboveground biomass (AGB) estimates, many tropical countries still face challenges in completing and updating their inventories. Meanwhile, advancements in remote sensing-based biomass products, combined with future satellite missions, provide new avenues for overcoming these challenges. These innovations enable the integration of space-based biomass maps with ground-based information to support forest AGB estimation, a critical component of greenhouse gas (GHG) mitigation and adaptation strategies. However, integrating biomass maps with NFI information poses several challenges, which include handling differences between the spatial support of the field-based sampling units and the map units. This study assesses the degree to which six spatial plot configurations commonly used in tropical NFIs (two single plots and four common NFI cluster plot designs) characterize mean AGB density within fixed-size rectangles representing biomass map units in a tropical and a temperate forest site. Employing a discrete bottom-up modelling approach by means of a hierarchical marked point process (HMPP) framework, we simulate forest AGB densities by accounting for tree-tree and other ecological interactions that influence the spatial distribution of trees within forest stands. These include asymmetric competition between larger and smaller trees, and clustering of trees due to environmental conditions and natural disturbances. Our results show that the spatial configuration of cluster plots impacts the accuracy and precision of AGB density estimates within map units. Notably, cluster plot configurations consistently outperformed single plots of equal size (0.5 ha), offering enhanced precision by capturing a wider range of AGB variability within map units. For both sites, AGB was found to be spatially structured as opposed to completely random.
We also found that the L-shaped cluster configuration can lead to selection-bias in the case of monotonic spatial AGB trends. Our study contributes to understanding the impact of plot spatial configuration on map-to-plot intercomparison analyses, which are essential to any application integrating ground-based information with remote sensing-derived products. Insights derived from the study could inform the design of future ground-based campaigns.
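As an illustrative aside (not the paper's HMPP simulation), a toy one-dimensional experiment shows why dispersed cluster subplots can estimate the mean of a spatially structured map unit more precisely than one contiguous plot of equal total area:

```python
import numpy as np

# Toy map unit with a monotonic AGB trend; all numbers are illustrative.
unit = np.arange(100, dtype=float)   # AGB rises linearly across the unit
true_mean = unit.mean()              # 49.5

def plot_mean(start, subplot_len, offsets):
    """Mean AGB over subplots of length subplot_len placed at start+offsets."""
    cells = np.concatenate([unit[start + o : start + o + subplot_len] for o in offsets])
    return cells.mean()

# Single 8-cell plot vs. a cluster of four dispersed 2-cell subplots (equal area).
single = np.array([plot_mean(s, 8, [0]) for s in range(93)])
cluster = np.array([plot_mean(s, 2, [0, 25, 50, 75]) for s in range(24)])

# Spread of the estimation error over all possible plot placements:
err_single = np.std(single - true_mean)    # large: a compact plot rides the trend
err_cluster = np.std(cluster - true_mean)  # small: the cluster spans the trend
```

Because the cluster straddles the gradient, its estimate varies far less with placement; the same intuition underlies the paper's finding that clusters outperform single plots, and its caveat about L-shaped clusters under monotonic trends.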

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Tree level biomass through self-supervised reconstruction of ALS point clouds: Application to monospecific French forests.

Authors: Alvin Opler
Affiliations: Laboratoire des Sciences du Climat et de l'Environnement, LSCE/IPSL
Individual tree resource monitoring and forest ecosystem assessment have conventionally relied on detailed plot-scale data [1]. Current state-of-the-art methods [2,3] have alleviated this dependency using deep learning on air- and spaceborne imagery. These models learn to reproduce human-labelled segmentations, using field data only for validation. Nevertheless, the manual labels used vary in quality and are often not precise enough in dense forests [4]. While the literature extensively uses imagery for individual tree detection (ITD) tasks, the use of lidar data is less conventional. Existing works often project concurrent point cloud data to 2D inputs [2, 5] to be used alongside aerial images for further predictions. Where the full 3D structure is used, the models in the literature require high point density and manually segmented tree datasets [6,7], which is not suitable for large-scale studies. In this work, we present a fully autonomous framework which learns to segment individual trees using solely airborne lidar. We leverage the full potential of 3D data using a state-of-the-art transformer architecture [8] and reconstruction methods [9]. The model first learns a wide range of deformable tree prototypes from the FOR-species20K [10] dataset in order to fit them to existing point clouds. This deep-learning procedure enables segmentation across a wide range of regions while requiring little or no manual labelling. The framework further allows key features of individual trees to be extracted, such as volume, crown area, species and carbon stock. As a case study, we apply our model to the PureForest dataset [11], a French large-scale dataset of monospecific plots initially created for species classification.
As a first downstream task, we create high-quality segmentation labels and a precise description of the carbon stock distribution in dense forests across the whole dataset (339 km²). Furthermore, we show several applications of such a dataset, including biomass and height estimation, competition indices and fire-spread modelling, among others. Our results highlight the importance of very-high-resolution lidar data for accessing local above-ground biomass features.
References:
[1] Pellissier-Tanon, A. et al. (2024). Combining satellite images with national forest inventory measurements for monitoring post-disturbance forest height growth. Front. Remote Sens. 5:1432577. doi: 10.3389/frsen.2024.1432577
[2] Li, S. et al. (2023). Deep learning enables image-based tree counting, crown segmentation, and height prediction at national scale. PNAS Nexus. https://doi.org/10.1093/pnasnexus/pgad076
[3] Brandt, M., Chave, J., Li, S. et al. (2024). High-resolution sensors and deep learning models for tree resource monitoring. Nat Rev Electr Eng. https://doi.org/10.1038/s44287-024-00116-8
[4] Veitch, J. et al. (2024). OAM-TCD: A globally diverse dataset of high-resolution tree cover maps. Thirty-eighth Conference on Neural Information Processing Systems, Datasets and Benchmarks Track. https://openreview.net/forum?id=I2Q3XwO2cz
[5] Roussel, J.R. et al. (2021). lidR: An R package for analysis of Airborne Laser Scanning (ALS) data. Remote Sensing of Environment. doi: 10.1016/j.rse.2020.112061
[6] Wielgosz, M. et al. (2024). SegmentAnyTree: A sensor and platform agnostic deep learning model for tree segmentation using laser scanning data. Remote Sensing of Environment. https://doi.org/10.1016/j.rse.2024.114367
[7] Xiang, B. et al. (2024). Automated forest inventory: Analysis of high-density airborne LiDAR point clouds with 3D deep learning. Remote Sensing of Environment. https://doi.org/10.1016/j.rse.2024.114078
[8] Robert, D. et al. (2024). Scalable 3D Panoptic Segmentation as Superpoint Graph Clustering. Proceedings of the IEEE International Conference on 3D Vision. https://drprojects.github.io/supercluster
[9] Loiseau, R. et al. (2024). Learnable Earth Parser: Discovering 3D Prototypes in Aerial Scans. CVPR. https://arxiv.org/abs/2304.09704
[10] Puliti, S. et al. Benchmarking tree species classification from proximally-sensed laser scanning data: introducing the FOR-species20K dataset. doi: 10.48550/arXiv.2408.06507
[11] Gaydon, C. and Roche, F. (2024). PureForest: A Large-Scale Aerial Lidar and Aerial Imagery Dataset for Tree Species Classification in Monospecific Forests. https://arxiv.org/abs/2404.12064

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: High-Resolution Gross Primary Productivity Estimation from the Synergy of Sentinel-2 and ERA5

Authors: Emma De Clerck, Dávid D.Kovács, Pablo Reyes-Muños, Dr. Jochem
Affiliations: Universitat de València
Background: Accurate, high-resolution Gross Primary Productivity (GPP) estimation is essential for understanding the carbon cycle, as GPP represents the total carbon dioxide uptake by vegetation through photosynthesis. This metric provides key insights into ecosystem carbon sequestration, a critical factor in assessing terrestrial carbon sinks and sources. Monitoring GPP at fine spatial scales allows for a detailed understanding of carbon flux variations, which can influence regional and global carbon budgets. Leveraging remote sensing data such as Sentinel-2 and integrating climate reanalysis data from ERA5, accessible on platforms like Google Earth Engine and OpenEO, opens up new possibilities for scalable, landscape-level GPP monitoring. However, traditional models lack broad applicability across diverse vegetation types and geographic regions due to limited resolution and site-specific constraints, making it challenging to derive GPP estimates that are both detailed and widely applicable.

Objective: This study aims to create a high-resolution GPP estimation model by combining Sentinel-2 multispectral satellite data with ERA5 Land climate reanalysis data, focusing on delivering an adaptable and reliable model for different plant functional types (PFTs). The model is validated using in situ observations from ICOS flux tower sites across Europe, ensuring relevance and robustness across diverse landscapes. This approach seeks to improve the accuracy of GPP predictions for applications in carbon cycle research and policy development.

Methods: Several methodologies are under consideration for achieving the study objectives. These include using Radiative Transfer Models (RTMs), data-driven approaches, or a hybrid method combining simulation data and real-world observations. The use of tools such as SCOPE (Soil Canopy Observation of Photosynthesis and Energy fluxes), machine learning techniques such as Gaussian Process Regression, and ICOS flux tower data is being explored to enhance the accuracy and adaptability of the models. Sentinel-2’s high spatial resolution and ERA5’s meteorological insights provide a robust basis for location-specific GPP predictions. The final methodology will aim to balance precision, scalability, and ease of implementation on platforms like Google Earth Engine and OpenEO.

Results: The resulting framework is anticipated to produce high-resolution GPP maps tailored to different PFTs, including spatial uncertainty estimates to ensure model transparency. Validation using ICOS flux tower measurements will assess the framework's reliability across diverse European ecosystems. By integrating Sentinel-2 and ERA5 data, this approach is expected to deliver actionable insights into regional and ecosystem-specific carbon flux dynamics.

Conclusion: This research seeks to demonstrate the feasibility of a high-resolution, adaptable GPP estimation framework, leveraging remote sensing and climate reanalysis data. The model’s integration with accessible platforms aims to support ecological monitoring, carbon cycle research, and informed climate policy decision-making, enabling broad adoption by researchers and stakeholders.
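One candidate ingredient mentioned above, Gaussian Process Regression, can be sketched in a few lines of numpy. The kernel choice, the two predictors (a Sentinel-2-like vegetation index and a scaled ERA5-like temperature) and the training pairs below are all synthetic assumptions for illustration, not the study's model:

```python
import numpy as np

def rbf(A, B, length_scale=1.0):
    """Squared-exponential kernel between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

# Synthetic training pairs: [NDVI-like index, scaled air temperature] -> GPP
X = np.array([[0.2, 0.1], [0.5, 0.5], [0.8, 0.9]])
y = np.array([2.0, 6.0, 10.0])            # GPP in gC m-2 d-1 (invented values)

K = rbf(X, X) + 1e-6 * np.eye(len(X))     # small jitter for numerical stability
alpha = np.linalg.solve(K, y)             # precompute K^-1 y

def predict(x_new):
    """GP posterior mean at new predictor vectors."""
    return rbf(np.atleast_2d(np.asarray(x_new, dtype=float)), X) @ alpha
```

In a real hybrid workflow the training pairs would come from SCOPE simulations or ICOS tower fluxes, and the kernel hyperparameters would be fitted rather than fixed.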

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Wetland and anthropogenic emissions of methane and carbon dioxide: Results and lessons learned from the MAGIC international campaigns and plans for future deployment in Brazil

Authors: Cyril Crevoisier, Caroline Bès, Jérôme Pernin, Axel Guedj, Thomas Ponthieu, Félix Langot, Lilian Joly, Nicolas Dumélié, Bruno Grouiez, Thomas Lauvaux, Charbel Abdallah, Yao Té, Pascal Jeseck, Michel Ramonet, Julien Moyé, Morgan Lopez, Hervé Herbin, Valéry Catoire, Nicolas Cézard, Julien Lahyani, Andreas Fix, Matthieu Quatrevalet, Anke Roiger, Klaus-Dirk Gottschaldt, Alina Fiehn, Rigel Kivi, Stéphane Louvel, Frédéric Thoumieux, Aurélien Bourdon
Affiliations: CNRS/LMD, CNES, GSMA/URCA, MONARIS/SU, LSCE/IPSL, LOA, LPC2E, ONERA, DLR, FMI, SAFIRE
In August 2021, a large-scale international campaign called MAGIC2021 took place in Scandinavia, with two objectives: 1) to improve knowledge of wetland emissions of methane; 2) to validate satellite missions measuring greenhouse gases (TROPOMI/Sentinel-5P, OCO-2, IASI) in the circumpolar region. Led by CNRS and CNES, the campaign gathered 70 scientists from 14 research teams. Over two weeks, more than twenty instruments were deployed on 3 research aircraft, several small and large stratospheric balloons, as well as on the ground. They covered a large region extending from Abisko Lake (Sweden) to Sodankylä (Finland) and combined in-situ (air samplers, CRDS analyzers) and remote sensing observations (lidars, spectrometers) of atmospheric methane concentration. It was followed by two consecutive campaigns, MAGIC2022 and MAGIC2023, in the mid-size city of Reims in France, with the aim of evaluating anthropogenic emissions of CO2 and CH4 from the city and surrounding industries, including sugar factories. Both campaigns gathered about 50 scientists and involved the deployment of twenty instruments onboard 3 research aircraft, small balloons and ground-based stations. Specific measurements were performed above the city to map the 3D atmospheric concentration of both gases at 1 km horizontal and 500 m vertical resolution, in order to prepare for the use of satellite city-mode observations such as those planned for MicroCarb or CO2M. In this talk, we will present the main results derived from these unique datasets of gas concentration vertical profiles (from the ground to the mid-stratosphere), weighted columns (from the ground or from aircraft) and 2D coverage at several altitudes of emission hotspots gathered during the MAGIC2021-2023 campaigns. We will show evaluations of several wetland and anthropogenic emission inventories.
We will also present evidence of the crucial need for a better understanding of the vertical distribution of methane concentration, and of the benefit of combining observations from the short-wave (TROPOMI/Sentinel-5P) and thermal infrared (IASI/Metop) to account for large-scale horizontal transport (e.g. fire emissions). Finally, we will present some lessons learned from the campaigns and introduce plans for a future large-scale campaign focusing on tropical wetlands (Brazil) in summer 2026. Main funding for the MAGIC campaigns came from CNES, CNRS, ESA, EUMETSAT and DLR (https://magic.aeris-data.fr).

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: An optimized Land Parameter Retrieval Model parametrization for improved vegetation optical depth estimates

Authors: Univ.Ass.in Dipl.-Ing.in Ruxandra-Maria Zotta, Richard de Jeu, Nicolas Francois Bader, Wolfgang Preimesberger, Dr. Thomas Frederikse, Wouter Dorigo
Affiliations: TU Wien, Transmissivity B.V., Planet Labs
Monitoring long-term vegetation dynamics is crucial for many environmental studies, including carbon cycle modelling. Therefore, there is a need to enhance the quality of spaceborne vegetation estimates continuously. Vegetation optical depth (VOD) is a radiative transfer model (RTM) parameter retrieved from brightness temperature (TB) measurements and is closely related to vegetation water content and biomass. VOD observations have been used extensively in applications related to the carbon cycle, including estimating gross primary productivity and monitoring above-ground biomass. The Land Parameter Retrieval Model (LPRM) is a well-known RTM which simultaneously retrieves soil moisture and VOD using the microwave polarization difference index in a radiative transfer equation. LPRM is a forward model that runs the RTM iteratively over a wide range of soil moisture and VOD scenarios, modelling TB. Soil moisture derived through LPRM is used in the European Space Agency (ESA) Climate Change Initiative (CCI) Soil Moisture project framework, which produces global consistent, long-term, multi-sensor time-series of satellite soil moisture data. Several calibrations of the LPRM have been carried out throughout the project to improve the quality of soil moisture estimates and to facilitate better sensor harmonization. Nonetheless, the implications of different parametrizations on VOD retrievals have yet to be fully explored, an oversight this study strives to address. Here, we use TB from the Advanced Microwave Scanning Radiometer 2 (AMSR2) to investigate if calibrating the model parameters can improve the VOD retrieved through LPRM. We focus on three critical model parameters: single scattering albedo, surface roughness and effective temperature. We simultaneously optimize these parameters based on minimizing the errors between the time series of simulated and observed TB at each location. 
Additionally, we perform a sensitivity analysis using the Sobol method to disentangle the impact of these parameters at each location. The VOD estimates resulting from the optimization scenarios are assessed against independent vegetation datasets, such as MODIS fAPAR and against alternative AMSR2 VOD datasets, such as the Land Parameter Data Record (LPDR) and AMSR2-IB. Our findings indicate that the optimization procedures significantly improve the VOD, particularly in deciduous, needle-leaf, and mixed forest areas. These advancements render the new retrievals more applicable for carbon cycle research.
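The calibration idea, minimizing the mismatch between simulated and observed brightness temperatures over a parameter grid, can be illustrated with a heavily simplified tau-omega-style forward model; the model form and all numbers here are toy assumptions, not LPRM itself:

```python
import numpy as np

def tb_sim(omega, gamma, refl, ts=290.0):
    """Toy zeroth-order emission model: soil term plus vegetation term."""
    return ts * ((1 - refl) * gamma + (1 - omega) * (1 - gamma) * (1 + refl * gamma))

# "Observed" brightness temperatures generated with a known omega = 0.08
gamma_obs = np.array([0.4, 0.5, 0.6, 0.7])   # vegetation transmissivities
tb_obs = tb_sim(0.08, gamma_obs, refl=0.3)

# Calibrate the single scattering albedo by minimizing simulated-vs-observed RMSE
omegas = np.linspace(0.0, 0.2, 201)
rmse = [np.sqrt(np.mean((tb_sim(w, gamma_obs, 0.3) - tb_obs) ** 2)) for w in omegas]
best_omega = omegas[int(np.argmin(rmse))]
```

In the study the optimization runs per location and jointly over three parameters (albedo, roughness, effective temperature), but the objective has the same shape: the TB misfit as a function of the candidate parameter values.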

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: 3D-Biomass: Biomass Estimation at Different Height Intervals Using Terrestrial LiDAR Scanning Data

Authors: Qian Song, Zhilin Tian, Dr. Benjamin Brede, Martin Herold, Dr. Mike Sips
Affiliations: Helmholtz Centre Potsdam GFZ German Research Centre for Geosciences
Beyond tree canopy height and total above-ground biomass estimation, there is increasing interest in exploring the 3D structure of forest plots with the availability of high-detail, high-point-density Terrestrial LiDAR scanning (TLS) data. In this study, we propose a new concept of 3D Biomass that measures forest biomass at different height layers. It consists of four main steps: individual tree delineation, leaf point exclusion, 3D reconstruction of the tree, and 3D Biomass calculation. The TLS data were acquired over the Netherlands, Ghana and Germany in 2017, 2023 and 2019, respectively. Previous studies processed and segmented these point clouds into individual trees. The dataset consists of tree point clouds of multiple species, including both coniferous and deciduous trees. The individual tree point clouds were then segmented into leaf points and non-leaf points using the GBS (Graph-Based Leaf–Wood Separation) algorithm. This algorithm first builds a network graph on the tree point cloud, then gradually extracts woody points via shortest-path analysis. In this way, lidar points belonging to leaves are separated from those of branches and trunk. The leaf points are discarded from further analysis because: 1) the 3D reconstruction errors of the leaf parts are significantly higher; 2) they affect the modelling of branches (twice the error compared with modelling branch-only points); 3) the biomass of tree leaves is usually assumed to be negligible. Thereafter, we use the TreeQSM (Quantitative Structure Models for Trees) tool to reconstruct the 3D structure of the delineated branch and trunk points. TreeQSM models the stem and branches as one or multiple cylinders by fitting the cylinders to the neighbouring points. We used the mean relative point-to-model distance (mR-PMD) to evaluate the 3D modelling quality. Results suggest that the model error increases with branch order (from 22% to 220%), which might be due to the sparse points of thin branches.
Based on the parameters of the cylinders, we calculated the tree’s volume within different height intervals, from which 3D Biomass is derived by multiplying by an average wood density. In the future, we will use the proposed pipeline to map the above-ground 3D Biomass of plots over the Netherlands and Ghana using the acquired TLS data. The vertical carbon stock patterns of different tree species will be analyzed.
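The layer-wise calculation can be sketched as follows; cylinders are reduced to vertical segments (z_base, z_top, radius), and the wood density value is an arbitrary placeholder rather than a species-specific figure:

```python
import numpy as np

def biomass_by_layer(cylinders, bin_edges, wood_density=500.0):
    """Biomass (kg) per height layer from simplified vertical QSM cylinders.

    cylinders: iterable of (z_base, z_top, radius) in metres
    bin_edges: ascending height-layer boundaries in metres
    wood_density: average wood density in kg/m3 (placeholder value)
    """
    biomass = np.zeros(len(bin_edges) - 1)
    for z0, z1, r in cylinders:
        for i, (lo, hi) in enumerate(zip(bin_edges[:-1], bin_edges[1:])):
            # volume of the cylinder slice falling inside this height layer
            overlap = max(0.0, min(z1, hi) - max(z0, lo))
            biomass[i] += np.pi * r**2 * overlap * wood_density
    return biomass

# A 10 m trunk (r = 0.15 m) plus a 2 m branch section in the upper crown
layers = biomass_by_layer([(0.0, 10.0, 0.15), (8.0, 10.0, 0.05)], [0.0, 5.0, 10.0])
```

Real QSM cylinders are tilted and tapered, so the slicing is geometrically more involved, but the principle (allocate each cylinder's volume to the height bins it crosses, then multiply by density) is the same.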

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Enhancing Agroforestry Biomass Estimation Using Multitask Learning and Structural Diversity from GEDI, ALOS PALSAR and Sentinel Data

Authors: Xi Zhu, Head of remote sensing Mila Luleva, Yaqing
Affiliations: Rabobank
Accurate biomass estimation is critical for assessing carbon sequestration and supporting sustainable agroforestry systems. In this study, we present a two-step approach for biomass estimation that integrates advanced remote sensing techniques and machine learning. First, we estimate three structural variables—GEDI-derived 95th percentile tree height (H95), canopy cover (CC), and foliage height diversity (FHD)—from Sentinel and ALOS data using a UNet-based deep learning model. The performance of this model is validated against high-resolution airborne lidar data. Second, we use these retrieved variables to model aboveground biomass (AGB) in agroforestry systems, leveraging a small number of ground truth samples. We evaluate the contribution of vertical structural diversity to biomass estimation using a simple parametric model with leave-one-out cross-validation. In the first step, our results demonstrate that multitask learning within the UNet framework outperforms single-task learning, with an accuracy improvement of 5–10% for the retrieval of the three structural variables. This highlights the benefit of leveraging shared features across tasks to improve model performance. Validation with airborne lidar data confirms the reliability of the retrieved GEDI variables, emphasising the potential of Sentinel data as a cost-effective alternative for large-scale structural mapping. In the second step, incorporating foliage height diversity into the biomass estimation model led to a 4% increase in accuracy (R2: 0.71, RMSE: 6.69 ton/ha) compared to using height and canopy cover alone (R2: 0.67, RMSE: 7.13 ton/ha). This improvement suggests that diversity metrics, likely linked to species diversity and wood density variations, play a significant role in biomass prediction. 
The results highlight the importance of accounting for structural complexity, particularly in heterogeneous systems like agroforestry, where a mix of species and planting densities challenges traditional estimation approaches. The implications of this study are significant for both research and practice. By demonstrating the utility of GEDI structural variables retrieved from Sentinel and ALOS data, we provide a scalable framework for biomass estimation in agroforestry systems, which are underrepresented in traditional forest inventory methods. The use of multitask learning not only enhances accuracy but also streamlines the estimation of key structural variables, reducing the need for extensive field data collection. Furthermore, the integration of structural diversity into biomass models aligns with ecological principles, emphasising the role of biodiversity in ecosystem functions such as carbon storage. This study underscores the potential of combining advanced machine learning techniques with satellite data to improve biomass estimation in complex landscapes. The approach is particularly relevant for agroforestry systems, where accurate biomass assessment is crucial for designing effective carbon projects and promoting sustainable land management. Future work could explore the integration of additional GEDI metrics and extend the approach to other land-use systems to further validate its robustness and scalability.
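The second-step evaluation, a simple parametric model under leave-one-out cross-validation, can be sketched with synthetic data; the predictors (H95, CC, FHD), coefficients and noise level below are invented for illustration, not the study's values:

```python
import numpy as np

# Synthetic plot-level data: AGB ~ a*H95 + b*CC + c*FHD + d (invented coefficients)
rng = np.random.default_rng(0)
n = 20
X = np.column_stack([
    rng.uniform(5, 30, n),     # H95: 95th percentile height (m)
    rng.uniform(0.2, 0.9, n),  # CC: canopy cover fraction
    rng.uniform(1, 3, n),      # FHD: foliage height diversity
    np.ones(n),                # intercept
])
agb = X @ np.array([2.0, 10.0, 3.0, 1.0]) + rng.normal(0, 1.0, n)

# Leave-one-out cross-validation: refit on n-1 plots, predict the held-out one
preds = []
for i in range(n):
    mask = np.arange(n) != i
    coef, *_ = np.linalg.lstsq(X[mask], agb[mask], rcond=None)
    preds.append(X[i] @ coef)

rmse = float(np.sqrt(np.mean((np.array(preds) - agb) ** 2)))
```

Comparing this RMSE with and without the FHD column reproduces, in miniature, the paper's test of whether vertical structural diversity adds predictive skill.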

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Retrieving long-term colored dissolved organic matter absorption coefficient and dissolved organic carbon concentrations in the Mackenzie River–Beaufort Sea using CMEMS GlobColour merged product

Authors: Maria Sanchez-Urrea, Dr. Martí Galí, Dr. Marta Umbert, Dr. Carolina Gabarró, Dr. Eva De Andres, Dr. Rafael Gonçalves-Araujo
Affiliations: Institute of Marine Sciences - Spanish National Research Council (ICM-CSIC), Universitat Politècnica De Catalunya · Barcelona Tech (UPC), Universidad Politécnica de Madrid (UPM), Technical University of Denmark (DTU)
In the rapidly changing Arctic, increasing organic carbon export from river systems is anticipated due to shifts in hydrology and permafrost thawing. These changes have profound implications for the biogeochemical cycles of coastal and shelf environments, emphasizing the need for robust monitoring of major Arctic rivers. Ocean color remote sensing has emerged as a valuable tool during the ice-free season, particularly for remote and undersampled regions like the Beaufort Sea. By providing extensive spatial and temporal coverage, it bridges gaps left by sparse in situ data, enhancing our understanding of land-ocean carbon fluxes and nearshore processes. Remote sensing of Chromophoric Dissolved Organic Matter (CDOM) and Dissolved Organic Carbon (DOC) has proven effective in capturing the variability of terrestrial carbon exports. However, seasonal variations and diverse ecological characteristics across Arctic river basins present significant challenges for developing universal retrieval algorithms. As a result, region-specific approaches have been prioritized, though long-term datasets remain limited. This study introduces a 26-year satellite-derived dataset (1998–2023) quantifying CDOM absorption at 443 nm (aCDOM(443)) and DOC concentrations in the Mackenzie River–Beaufort Sea system (122–142°W, 68–73°N). Data were generated using the multi-sensor CMEMS GlobColour merged product and a regionally adapted GIOP (Generalized Inherent Optical Properties) algorithm. The relationship between aCDOM(443) and DOC was calibrated specifically for the study area. The approach performs well when validated against in situ observations, as evidenced by the MdAPD (r2) for aCDOM and DOC of 54.5% (0.68) and 32.4% (0.65), respectively. This dataset enables detailed investigations of plume dynamics, interannual variability, and long-term trends. Comparisons with independent in situ DOC records (1999–2017) revealed consistent variability patterns.
Interestingly, contrary to initial assumptions, both aCDOM(443) and DOC exhibited a significant decline at the Mackenzie River mouth (−0.017 m⁻¹ yr⁻¹ and −3.40 M yr⁻¹, respectively) over the 26-year period. These trends align with observed decreases in river discharge, suggesting a potential link between hydrological changes and the declining export of terrestrial organic carbon in the region. This finding highlights the need for further examination of interannual variability and the uncertainties in satellite-derived carbon metrics.
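The reported declines correspond to slopes of least-squares linear fits to the annual series. A minimal sketch on a synthetic, noise-free series mimicking the aCDOM trend (illustrative values, not the study's data):

```python
import numpy as np

# Synthetic annual aCDOM(443) series with a built-in decline of 0.017 m-1 yr-1
years = np.arange(1998, 2024)
t = years - years[0]                 # years since the start of the record
acdom = 1.0 - 0.017 * t              # invented starting value and trend

slope, intercept = np.polyfit(t, acdom, 1)
```

With real, noisy retrievals the fit would also carry a standard error on the slope, which is what makes a trend "significant" in the sense used above.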

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Mapping and Measuring Methane Release From Boreal Peatlands and Swamps: Testing the Capability of a Ground-Based and Airborne Long-Wave Infrared Hyperspectral Imager

Authors: Luke Richardson-Foulger, Professor Martin Wooster, Callum Middleton, Dr Mark Grosvenor, Dr. José Gómez-Dans, Dr. William Maslanka
Affiliations: National Centre for Earth Observation - King's College London, Leverhulme Wildfires Centre - King's College London
Boreal peatlands store more than 30% of Earth's terrestrial carbon, despite occupying only 3% of its surface. Changing rainfall patterns, increasing global temperatures and poor human management have resulted in drier ecosystems. Peatlands are the largest natural source of methane, a potent greenhouse gas, and these emissions could be further exacerbated by drying conditions. This threatens to turn peatlands from a net carbon sink into a carbon source, which could accelerate climate change in the coming century. In 2021, the EU and US launched the Global Methane Pledge to reduce methane emissions by 30% between 2020 and 2030. Understanding the nature and severity of methane release from peatlands is thus a critical goal for greenhouse gas monitoring, aligning with international climate commitments. In this context, an ESA-funded campaign to assess the feasibility of detecting and measuring methane and nitrous oxide release from emission targets was conducted in Alberta, Canada in 2024. A long-wave FTIR-based hyperspectral imager was deployed on-site at various wetland and industrial sites across the province. The objective was to refine the use of FTIR imaging to detect low-level methane emissions with maximum sensitivity. Preliminary findings indicate that methane release from peatlands can be detected with reasonable spectral fidelity under specific meteorological conditions and with careful scene composition. This approach enables the mapping of methane emissions across peatlands and swamps from the ground, which will be compared to point-source measurements from flux chamber deployments. In addition to the ground measurements, a series of survey flights was conducted with an aircraft equipped with a long-wave FTIR imager, a VNIR-SWIR hyperspectral imager, and atmospheric sampling instrumentation. The flights were conducted over wetlands and industrial sites, with the similar objective of testing the capability of such instrumentation to detect and measure methane signals.
Early results indicate the difficulty in translating local measurements of emissions from the ground to wide-area surveys from an aircraft. The presentation will summarise the campaign, methodological approaches for the ground measurements and airborne surveys, and the core results. The efficacy of each approach and sensor will be discussed, including the nuances of measurement conditions and sensor adjustment. It will conclude with reflections on whether this approach could help improve ecosystem monitoring and support global methane emission reduction efforts.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: The Sentinel-3 OLCI and SLSTR Surface Reflectance Product of the Copernicus Land Monitoring Service

Authors: Carolien Toté, Dominique De Munck, Dominique Jolivet, Jorge Sánchez-Zapero, Fernando Camacho, Enrique Martínez-Sánchez, Sarah Gebruers, Else Swinnen, Davy Wolfs, Bart Ooms, Roselyne Lacaze, Michal Moroz
Affiliations: VITO, HYGEOS, EOLAB
The Copernicus Land Monitoring Service (CLMS) produces a series of qualified bio-geophysical products on the status and evolution of the land surface. The products are used to monitor vegetation, crops, the water cycle, the energy budget and the terrestrial cryosphere. Production and delivery take place in a timely manner and are complemented by the constitution of long-term time series. The CLMS portfolio contains several near-real-time global “Vegetation” biophysical products based upon the data acquired by the Ocean and Land Colour Instrument (OLCI) and the Sea and Land Surface Temperature Radiometer (SLSTR) onboard the Sentinel-3 (S3) platforms. All processing lines generating biophysical products from S3 OLCI and SLSTR data rely on a common, harmonized pre-processing chain which ingests Top-Of-Atmosphere (TOA) Level-1B (L1B) radiance data and delivers synergetic Top-Of-Canopy (TOC) reflectances. This synergy pre-processing chain comprises various modules which perform pixel classification, co-registration between OLCI and nadir SLSTR acquisitions, reprojection and resampling on a regular grid, and atmospheric correction. In order to ingest the reprocessed OLCI L1B Collection 4, the entire record of CLMS OLCI and SLSTR TOC reflectance products has been reprocessed. The resulting files contain TOC reflectance estimates and associated errors for 15 OLCI and 5 SLSTR spectral bands, observation and illumination angles for both sensors, and 4 annotation flag layers. The CLMS Sentinel-3 TOC reflectance v2.3 products will cover the period from June 2018 up to near-real time. They will be available to users via the CLMS Dataset catalogue and the Copernicus Data Space Ecosystem from January 2025. Validation of the CLMS TOC reflectance v2.3 product is ongoing. Preliminary validation results, based on product completeness and spatial consistency analysis, product intercomparison and direct validation with in-situ data, indicate the high quality of the product.
Intercomparison with the ESA S3 SYN surface directional reflectance product shows good spatial and statistical consistency. Remarkably good accuracy is found at 4 RadCalNet sites, with biases below 1% for most channels. Particularly good consistency is also obtained between S3A and S3B, as well as between equivalent OLCI and SLSTR channels. The presentation will cover the details of the CLMS Sentinel-3 TOC reflectance v2.3 processing scheme and algorithms, together with final validation results.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Upscaling Photosynthetic Function from Leaf to Canopy Level and Across the Seasons

Authors: Professor Petya Campbell, Professor Fred Huemmrich, Dr. Shawn Serbin, Dr. Christopher Neigh, Christiaan van der Tol
Affiliations: University of Maryland, Baltimore County (UMBC), 1000 Hilltop Circle, NASA Goddard Space Flight Center (GSFC), Biospheric Sciences Laboratory, University of Twente, Langezijds 1102
The study of vegetation function is increasingly used to understand the influence of environmental conditions on the capacity of species to adapt and to compare their resilience to climate change. Photosynthesis is of key importance for vegetation function and while canopy chlorophyll content (Chl) informs on the potential for photosynthetic function, solar-induced Chl fluorescence (SIF) can offer a direct probe to assess actual photosynthetic activity at leaf, canopy, and regional scales. High spectral resolution data offer an efficient tool for evaluating the ability of vegetation to sequester carbon through changes in vegetation chemical and structural composition. While the traits used as primary indicators of vegetation function vary, all ecosystems are considered 'dynamic entities that interact continuously with their environment', which therefore require continuous monitoring. Current technology enables the assembly of high frequency time series of diurnal and seasonal measurements of leaf photosynthetic efficiency and canopy reflectance, SIF, leaf area index and photosynthetic pigments. To address the need for continuous remote sensing of vegetation photosynthesis, we collected high frequency leaf and canopy data for crops, prairies, tundra and boreal forests at select flux tower sites. This study presents findings from the analysis of leaf-level active fluorescence metrics of photosynthetic efficiency (e.g., Electron Transport Rate, ETR; Yield of Photosystem II, Moni-PAM; Non-photochemical Quenching, NPQ), canopy reflectance and SIF collected with a field Fluorescence Box (FLoX, JB Hyperspectral), and reflectance time series of images collected with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), the space-borne Earth Surface Mineral Dust Source Investigation (EMIT), the DLR Earth Sensing Imaging Spectrometer (DESIS) and PRISMA (ASI). The data were collected at different temporal, spectral and spatial resolutions.
Field and space-borne reflectance data corresponding by acquisition date and time were assembled and the variations in reflectance properties for the flux tower footprints were evaluated. The leaf and canopy time series, in conjunction with eddy covariance measurements of gross primary productivity (GPP), airborne and spaceborne reflectance images, were used to spatially upscale photosynthetic performance. The Soil Canopy Observation of Photochemistry and Energy fluxes (SCOPE) model was used to integrate these measurements to link reflectance to plant photosynthesis and SIF. Using proximal SIF time series we derived estimates of leaf and canopy photosynthetic pigments and efficiency (e.g., leaf electron transport rate, ETR), upscaling them across the seasons. Proximal SIF B measurements gave superior results for upscaling leaf ETR to canopy level, as compared to the use of SIF A+B and SIF A/B. The seasonal air- and space-borne and dense proximal reflectance data sets correspond reasonably well, and the combined dataset captured the dynamics in canopy photosynthetic traits associated with phenology. Our preliminary results show the importance of using dense hyperspectral time series for monitoring the seasonal dynamics in vegetation function. Combining the proximal FLoX and space-borne reflectance data and estimates of vegetation traits and GPP demonstrates the feasibility of a multi-sensor approach upscaling from field- to satellite-level canopy reflectance and traits. Using the biophysical model SCOPE we obtained estimates of canopy chlorophyll (Cab), water content (Cw), leaf area index (LAI), GPP and others. We compared the use of SIF, Vegetation Indices (VIs) and Machine Learning (ML) semi-empirical models against the SCOPE biophysical model to estimate canopy traits and GPP.
The estimates of photosynthetic pigments, LAI and GPP were more accurate, with lower RMSE and higher R2, when using VSWIR reflectance versus VNIR data, owing to the higher sensitivity of the VSWIR data. The variation in GPP and the associated canopy traits increased with the advancement of senescence during the fall season. The study characterized the dynamics in canopy photosynthetic function, as measured at leaf, proximal canopy, and satellite levels, and developed innovative algorithms for estimation of GPP. We simulated photosynthetic efficiency and canopy traits, as anticipated from the forthcoming European Space Agency's Fluorescence Explorer (ESA/FLEX) and the National Aeronautics and Space Administration's Surface Biology and Geology (NASA/SBG) missions. The constellation of forthcoming spectroscopy missions, such as FLEX, SBG and CHIME, holds great potential to develop multi-sensor time series that capture vegetation dynamics numerous times per season and enable trait comparisons across multiple seasons and years.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Using GNSS VOD to Advance the Development of a Sub-Daily SAR Mission for Vegetation Water, Carbon, and Health

Authors: Nathan Van der Borght, Anna Neyer, Prof.dr.ir. Susan Steele-Dunne, Rob Mackenzie, Dr. Hans van der Marel, Paco Frantzen
Affiliations: Joint first authorship, TU Delft, TU Delft
The resilience of terrestrial ecosystems to droughts and heat stress is key for the future of the terrestrial carbon balance. Satellite observations of sub-daily variations in vegetation water content (VWC) could provide information on the health, stress and resilience of key ecosystems across the globe. Water dynamics in vegetation are central in these ecosystems, as they are closely coupled to carbon assimilation at the plant stomata. Understanding diurnal variations in VWC provides insight into the water status, stress and health of plants. However, sub-daily water dynamics in ecosystems are still poorly understood and weakly represented in terrestrial biosphere models. Furthermore, there are no existing or planned satellite missions capable of resolving fluctuations in VWC on sub-daily scales. To address this critical knowledge and observation gap, SLAINTE was developed as one of ESA's New Earth Observation Mission Ideas, with a first mission concept submitted in response to ESA's 12th Call for Earth Explorers (Steele-Dunne et al., 2024; Matar et al., 2024). It comprises a constellation of identical, decametric, monostatic SARs to capture sub-daily variations in vegetation water storage (e.g. via vegetation optical depth (VOD), VWC and/or plant water potential (PWP)) and surface soil moisture (SSM). One of the challenges we face during the development of SLAINTE is a lack of sub-daily radar or VOD data. We urgently need to quantify the expected range and dynamics of radar backscatter and VOD across a range of vegetation types to support the development of the mission concept. These data are also essential to address the challenge of disentangling pertinent signals and isolating them from the influence of confounding factors that can become increasingly relevant and interconnected at sub-daily scales.
Recent studies have shown that relatively low-cost GNSS (Global Navigation Satellite System) receivers can be used to estimate L-band VOD in situ (e.g. Ghosh et al., 2024; Guerriero et al., 2020; Humphrey & Frankenberg, 2023; Zribi et al., 2017). This method compares the signal-to-noise ratio (SNR) at two receivers, one above and one below the vegetation canopy. The difference in SNR between the two receivers can be related to the opacity of the vegetation layer in between. Thanks to the consistent data coverage provided by the large number of GNSS satellites in orbit, this setup enables us to capture sub-daily VOD variations. To support the development of SLAINTE, we will install these GNSS VOD sensors at a network of sites spanning a range of vegetation types and climate classes. Continuous observations from GNSS VOD will be used to characterize sub-daily VOD dynamics and to reconcile them with co-located observations of biogeophysical variables. In addition, they will be used to support radiative transfer modeling studies to demonstrate prototype forward models and retrieval approaches. In this presentation, we will use the first year of data to demonstrate the value of this GNSS VOD network to further strengthen the science case and consolidate the observation and measurement requirements for the SLAINTE mission idea. References: Ghosh, A., Farhad, M. M., Boyd, D., & Kurum, M. (2024). A UGV-based forest vegetation optical depth mapping using GNSS signals. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 17, 5093–5105. https://doi.org/10.1109/JSTARS.2024.3365798 Guerriero, L., Martin, F., Mollfulleda, A., Paloscia, S., Pierdicca, N., Santi, E., & Floury, N. (2020). Ground-based remote sensing of forests exploiting GNSS signals. IEEE Transactions on Geoscience and Remote Sensing, 1(1), 1–17. https://doi.org/10.1109/TGRS.2020.2976899 Humphrey, V., & Frankenberg, C. (2023).
Continuous ground monitoring of vegetation optical depth and water content with GPS signals. Biogeosciences, 20(1), 1789–1811. https://doi.org/10.5194/bg-20-1789-2023 Matar, J., Sanjuan-Ferrer, M. J., Rodriguez-Cassola M., Steele-Dunne, S. & De Zan, F. (2024). A Concept for an Interferometric SAR Mission with Sub-daily Revisit. EUSAR 2024; 15th European Conference on Synthetic Aperture Radar, pp. 18-22. IEEE, 2024. Steele-Dunne, S., Basto, A., De Zan, F., Dorigo, W., Lhermitte, S., Massari, C., Matar J. et al. (2024) SLAINTE: A SAR mission concept for sub-daily microwave remote sensing of vegetation. EUSAR 2024; 15th European Conference on Synthetic Aperture Radar, pp. 870-872. VDE, 2024. Zribi, M., Motte, E., Fanise, P., & Zouaoui, W. (2017). Low-cost GPS receivers for the monitoring of sunflower cover dynamics. Journal of Sensors, 2017(1), Article 6941739. https://doi.org/10.1155/2017/6941739
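The SNR-differencing idea described in this abstract can be sketched in a few lines. This is a hypothetical illustration, not the network's actual processing: assuming SNR values in dB from a single satellite pass, the above/below-canopy difference gives the slant-path canopy transmissivity, and the slant optical depth is projected to zenith VOD using the satellite elevation angle.

```python
import math

def gnss_vod(snr_above_db, snr_below_db, elevation_deg):
    """Estimate zenith VOD from paired above/below-canopy SNR readings (dB).

    The canopy attenuates the signal reaching the lower receiver; the dB
    difference gives the one-way slant transmissivity of the vegetation layer.
    """
    # slant-path transmissivity of the canopy layer (power ratio)
    t_slant = 10.0 ** ((snr_below_db - snr_above_db) / 10.0)
    # slant optical depth, then project to zenith with the path-length factor
    tau_slant = -math.log(t_slant)
    return tau_slant * math.sin(math.radians(elevation_deg))

# Example: 3 dB of canopy attenuation observed at 30 degrees elevation
tau_zenith = gnss_vod(45.0, 42.0, 30.0)  # ~0.35
```

Averaging such per-satellite estimates over all visible GNSS satellites is what yields the sub-daily VOD time series the abstract refers to.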
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Scale influences on plant primary productivity as estimated with satellite-driven light-use efficiency models

Authors: Egor Prikaziuk, Dr. Rebecca Varney, Dr Nina Raolut, Stephen Sitch, Linda Gotsmy, Dr. Sarah Matej, Dr. Karl-Heinz Erb, Dr. Tim Ng, Vincent Verelst, Jeroen Dries, Roel van Hoelst, Marie Polanska, Pavel Vlach, dr. Michael Schlund, Michael Marshall
Affiliations: ITC, University of Twente, University of Exeter, BOKU, University of Natural Resources and Life Sciences, VITO, Flemish Institute for Technological Research, Gisat
There is a growing need to align satellite image data to information requirements in the race for ever-increasing spatial, temporal and spectral resolution. Naturally, alignment depends on the objective of the study or application. In relation to the carbon cycle, light use efficiency (LUE) models that transform the fraction of absorbed photosynthetically active radiation (fAPAR) into grams of assimilated carbon have long required only two spectral bands, the red and the near-infrared, acquired over multiple days. Being optimized for quick global gross primary productivity (GPP) mapping, LUE models may result in bias when applied to images of higher (<= 20 m) spatial resolution. This study aimed to quantify the uncertainty of GPP introduced by the spatial scale of the underlying fAPAR data. The study was performed in the scope of the European Space Agency's (ESA's) Land Use Intensity's Potential, Vulnerability and Resilience for Sustainable Agriculture in Africa (LUISA) project. LUISA aims to quantify human pressure on ecosystems as the difference between potential and actual net primary productivity (NPP), a key component of the so-called human appropriation of NPP (HANPP) framework. NPP is computed with the JULES (Joint UK Land Environment Simulator) Dynamic Global Vegetation Model (DGVM), parametrized with ESA CCI Land Cover and leaf area index (LAI) variables. However, the JULES resolution is 0.5 deg (~50 km), which inevitably results in mixed pixels consisting of several land cover types and different seasonal LAI trends. To assess the representativeness of JULES simulations, NPP for four case study regions in Ethiopia, Mozambique, Senegal and Uganda and 14 eddy-covariance sites across Africa was computed with the PEMOC model, a big-leaf LUE model, at 20 m resolution with Sentinel-2 image data. Through the steps of degrading resolution from 10 m to 50 km, the profiles of area homogeneity and the NPP uncertainty were characterized.
Preliminary results computed on a single image show that in an idealized case of scale-invariant PEMOC parameterization, the mean bias error (MBE) reaches 2% (relative to the range of NPP values) and the mean absolute error (MAE) reaches 3% (relative to the range of NPP values) at 500 m. The temporal evolution of the error and its influence on accumulated NPP (biomass) will be discussed. The results of the NPP product comparison may suggest novel strategies for high-resolution EO data integration into DGVMs.
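The resolution-degradation bias quantified above arises whenever a nonlinear model is fed averaged inputs. A toy numpy illustration — the saturating response and all coefficients are invented, not the PEMOC parameterization — compares aggregating model output computed at fine resolution with running the model once on the averaged fAPAR:

```python
import numpy as np

def npp(fapar, par=10.0, lue=0.5):
    """Hypothetical saturating light-use-efficiency response (units arbitrary)."""
    return lue * par * (1.0 - np.exp(-3.0 * fapar))

rng = np.random.default_rng(0)
fapar_fine = rng.uniform(0.1, 0.9, size=(100, 100))  # fine-resolution fAPAR field

npp_fine_mean = npp(fapar_fine).mean()  # run the model per pixel, then aggregate
npp_coarse = npp(fapar_fine.mean())     # aggregate fAPAR first, then run once

# For a concave response, Jensen's inequality makes the coarse estimate biased high
bias = npp_coarse - npp_fine_mean
```

The sign and size of the bias depend on the curvature of the response and the sub-pixel heterogeneity, which is exactly why the study profiles area homogeneity alongside NPP uncertainty.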
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A Novel Observation Operator for Assimilating Microwave Vegetation Optical Depth into Vegetation / Carbon Cycle Models

Authors: Wolfgang Knorr, Mathew Williams, Tea Thum, Thomas Kaminski, Michael Voßbeck, Marko Scholze, Tristan Quaife, T. Luke Smallmann, Susan Steele-Dunne, Mariette Vreugdenhil, Tim Green, Sönke Zaehle, Dr. Mika Aurela, Alexandre Bouvet, Emanuel Bueechi, Wouter Dorigo, Tarek S. El-Madany, Mirco Magliavacca, Marika Honkanen, Yann H. Kerr, Anna Kontu, Juha Lemmetyinen, Hannakaisa Lindqvist, Arnaud Mialon, Tuuli Miinalainen, Gaétan Pique, Amanda Ojasalo, Nemesio J. Rodríguez-Fernández, Mike Schwank, Peter J. Rayner, Pablo Reyez-Muñoz, Dr. Jochem Verrelst, Songyan Zhu, Shaun Quegan, Dirk Schüttemeyer, Matthias
Affiliations: The Inversion Lab
Recent reports about a possible weakening of the terrestrial biosphere's carbon sink have highlighted the importance of land vegetation for storing large quantities of carbon originating in CO2 emitted by human activities, and thus mitigating some of the worst impacts of the enhanced greenhouse effect. However, we still have only very limited knowledge of the spatial and temporal dynamics of terrestrial-biospheric carbon pools. In this situation, both passive and active microwave missions offer the unique opportunity to monitor above-ground land carbon and biomass repeatedly and undisturbed by cloud cover. For such sensors, various algorithms are available to separate the contribution of terrestrial vegetation to the microwave signal from that of the underlying soil. This contribution is usually expressed as Vegetation Optical Depth (VOD) for the specific wavelength used by the sensor. So far, the most common approach has been to use empirically derived relationships between VOD and above-ground biomass (AGB) to monitor carbon stores in land vegetation. This approach, however, ignores the influence of the plants' hydraulic status on VOD and does not take temperature effects into account. Therefore, if we employ a terrestrial biosphere model to simulate AGB and from that predict measured VOD using this approach, we fail to capture the VOD signal's often pronounced temporal fluctuations at the time scale of days and weeks. Here, we present a semi-empirical model for VOD of varying wavelength that can be easily implemented as an observation operator with regional to global terrestrial biosphere models. It predicts VOD using stem and leaf biomass, soil moisture and transpiration rates as input. We present results using this new VOD observation operator together with the D&B terrestrial vegetation model. D&B simulates carbon fluxes at land surfaces embedded into the full energy, water and radiation balance.
Carbon is allocated to various live vegetation and soil organic matter pools. We show simulation results compared to locally measured VOD of different wavelengths from Sodankylä, a boreal study site in northern Finland. The model captures the main features of the temporal variations of measured VOD, despite the fact that there is little change in biomass density during the measurement campaign. However, viewing conditions were changed twice, so that different stem densities fell within the instrument's field of view. This had a profound impact on measured VOD, which can also be reproduced by the model if we account for changes in the stem biomass input to the VOD model. As the next step, we compare simulated L-band VOD (L-VOD) for a regional simulation to measurements from ESA's SMOS mission. The results are similar to those obtained from local measurements. We finally show how assimilation of SMOS L-band VOD, surface soil moisture, and regional AGB data into D&B can be used to constrain the parameters of an L-VOD observation operator. This opens up the possibility of assimilating SMOS L-band VOD and surface soil moisture to derive estimates of AGB and other properties of the land surface for further regions. Our results highlight the potential of VOD for monitoring various land surface properties related to the carbon cycle within the framework of terrestrial-biosphere data assimilation. The simple, semi-empirical VOD model is ideally suited to be coupled to regional- to global-scale vegetation models, as it does not depend on detailed structural or hydraulic properties of vegetation, for which information is rarely available at the spatial scales of interest for such models. It thus offers a valuable alternative to detailed models of microwave backscatter, and at the same time constitutes an important advancement beyond empirical AGB-VOD relationships.
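The abstract names the operator's inputs (stem and leaf biomass, soil moisture, transpiration) but not its functional form. Purely as a generic illustration of this family of semi-empirical operators — the structure and every coefficient below are invented, not the authors' model — one can tie VOD to vegetation water content modulated by plant water status:

```python
def vod_operator(stem_biomass, leaf_biomass, rel_soil_moisture, transpiration,
                 b=0.1, k_stress=0.3):
    """Toy semi-empirical VOD operator: optical depth proportional to
    vegetation water content, modulated by plant water status.
    All parameters (b, k_stress, pool weights) are hypothetical."""
    # relative water content: rises with soil moisture, drops with transpiration demand
    rwc = max(0.0, min(1.0, rel_soil_moisture - k_stress * transpiration))
    # vegetation water content from the dry biomass pools (invented weights)
    vwc = rwc * (0.5 * stem_biomass + 1.5 * leaf_biomass)
    return b * vwc

vod_midday = vod_operator(2.0, 0.5, 0.8, 1.0)  # transpiring canopy
vod_night = vod_operator(2.0, 0.5, 0.8, 0.0)   # no transpiration: higher VOD
```

A form like this reproduces the day/night VOD fluctuations that a pure AGB-VOD regression cannot, which is the point the abstract makes; the parameters b and k_stress play the role of the quantities constrained by assimilation.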
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Using remotely sensed ecological and climate variables to assess ecosystem productivity for land carbon sequestration studies

Authors: Yanli You, Distinguished Professor Alfredo Huete
Affiliations: University Of Technology Sydney
Satellite-derived ecological indicators such as vegetation indices (VI), leaf area index (LAI), and solar-induced chlorophyll fluorescence (SIF) are widely used to monitor ecosystem dynamics at different temporal and spatial scales. These indicators may be relevant in assessing changes in vegetation carbon sequestration capacity, assessing carbon source periods, and evaluating carbon storage across different ecosystems. Although much remote sensing carbon research is focused on carbon fixed through photosynthesis, less attention has been paid to the actual carbon stocks remaining after ecosystem respiration. In this study, we evaluated and compared the relationships between remote sensing derived ecological variables, including enhanced vegetation index (EVI), LAI and SIF, with eddy covariance measured gross primary productivity (GPP) and net ecosystem productivity (NEP) across a range of flux tower sites along the North Australian Tropical Transect (NATT). We also assessed three satellite-derived climate indicators, daytime and nighttime land surface temperature (LST), soil moisture (SM) and rainfall, across our study sites. Six hydrologic years were analysed at both monthly and annual scales from Sep. 2015 to Aug. 2021. We aimed to assess how well the various satellite products can represent and detect changes in NEP, relative to their more common use in quantifying GPP. The results show site-dependent, strong NEP-GPP relationships (r^2 ~0.7 to 0.8, monthly scale) with slope sensitivities of NEP to GPP ranging from 0.35 to 0.46 (monthly scales) and 0.32 to 0.67 (annual scales). The higher values were inversely related with latitude, indicating greater carbon sequestration rates in the wetter northern sites of the NATT. In most cases ecological remote sensing variables were more strongly related to GPP than NEP and could not readily be used to identify carbon source periods.
However, NEP relationships with LST, SM and rainfall helped explain ecosystem carbon sink-to-source switches and provided useful information for carbon sequestration studies at ecosystem scales.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: CAMAP and MAMAP-2D – Methane and CO2 airborne imaging spectrometers for validation of current and future GHG satellite missions

Authors: Konstantin Gerilowski, Jakob Borchardt, Sven Krautwurst, Oke Huhs, Wilke Thomssen, John Philip Burrows, Heinrich Bovensmann, Hartmut Bösch, Marvin Richrath, Jan Franke, Jan-Hendrik Ohlendorf, Roman Windpassinger, Yasjka Meijer, Thorsten Fehr
Affiliations: Institute of Environmental Physics (IUP), University of Bremen, Institute for Integrated Product Development (BIK), University of Bremen, European Space Agency (ESA)
To measure and monitor greenhouse gas emissions from space on different spatial scales, several satellite missions are currently under development or have been launched in recent years. These missions comprise US missions like MethaneSAT and Tanager-1&2 or European missions like Sentinel-5P, MicroCarb, TANGO and the upcoming Copernicus Sentinel-5 and CO2M missions, aiming to provide high spatial resolution/medium spectral resolution or medium spatial resolution/high spectral resolution data with sufficient spatial coverage, and using different NIR and SWIR spectral bands and windows of the methane and CO2 absorption spectrum (between 1.59 µm and 2.3 µm) for data retrieval. Accompanying and supplementing the space missions, high-quality atmospheric imaging spectra of CO2 and CH4 acquired from aircraft are needed to support retrieval algorithm development and contribute to the validation and interpretation of Level 1, Level 2 and emission data products. In response to this need, the MAMAP-2D (Methane Airborne MAPper - 2D) and CAMAP (CO2 And Methane Airborne maPper) high-performance airborne imaging spectrometers are being developed by IUP Bremen under national and ESA contracts, replicating as closely as possible the spectral bands and spectral resolution defined for CO2M. CAMAP and MAMAP-2D are designed to acquire spectral images from aircraft in push-broom mode, allowing atmospheric CO2 and CH4 concentration maps to be retrieved. At a flight altitude of 8 km, the instruments can cover a nadir swath of ~3.5 km with a ground sampling distance of ~100 m x 100 m. The instruments use NIR (760 nm), SWIR-1 (1.6 µm), and SWIR-2 (2 µm, CAMAP only) spectral bands, with each band implemented as an individual grating spectrometer. Design and implementation follow a modular approach, using as many similar or identical components as possible with respect to opto-mechanical design geometry and accommodation.
The MAMAP-2D two-channel NIR and SWIR-1 instrument is currently being assembled and tested, and is planned to be flown in 2025. For CAMAP, the already developed NIR and SWIR-1 bands will be complemented by an additional new SWIR-2 band. CAMAP is scheduled to be ready for calibration in the second half of 2026. This presentation describes the specifications, design concept, and expected performance of the instruments, and provides an overview of the development status.
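The stated viewing geometry can be sanity-checked with elementary trigonometry. This is a quick back-of-the-envelope calculation from the numbers in the abstract, not an instrument specification:

```python
import math

# A ~3.5 km nadir swath observed from 8 km altitude implies a full
# across-track field of view of 2*atan((swath/2)/altitude).
altitude_km = 8.0
swath_km = 3.5
fov_deg = 2 * math.degrees(math.atan((swath_km / 2) / altitude_km))  # ~24.7 deg

# ~100 m ground sampling across a 3.5 km swath implies ~35 across-track samples
pixels_across = swath_km * 1000 / 100
```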
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Measuring biomass in agroforestry systems coupling ground measurements, drone measurements and very high resolution stereo satellite images

Authors: Gernot Ruecker, Dirk Kloss, Oreste Santoni Akossi, Philippe Koumassi, Antoine Servais, Mathurin Koffi, Stefanie Korswagen, Kathrin Damian, Florent Camera
Affiliations: ZEBRIS Geo-IT GmbH, HEAT GmbH, Ministry of Environment and for the Ecological Transformation, GFA Group GmbH, GIZ GmbH
Agroforestry has a large potential to increase the resilience of smallholder farmers to climate change in Sub-Saharan Africa and at the same time enhance terrestrial carbon stocks. Within the framework of the Paris Agreement on Climate Change and with the development of carbon markets, opportunities arise to generate additional income from increased carbon storage on agricultural land through agroforestry. This can be achieved through the generation of Internationally Transferred Mitigation Outcomes (ITMOs) in the case of the Paris Agreement or through voluntary carbon market projects. Such mitigation outcomes need to be transparently verifiable. This is achieved through the establishment of Monitoring, Reporting and Verification (MRV) systems that document the mitigation outcomes, the measurement methods used and the underlying data. The necessary data to drive such a system can be obtained from ground measurements and remotely sensed observations. The rapidly emerging use of drones in Sub-Saharan Africa makes the use of high resolution airborne data attractive as a middle layer between very detailed and costly ground measurements and less accurate satellite measurements. Here we present an MRV system developed and applied to two agroforestry systems in Côte d’Ivoire. The system is based on three levels of data collection: on the ground, through drones and using very high resolution stereo satellite images. An option for using openly available radar (ALOS) and optical data (Sentinel-2) is also discussed. The system was developed in collaboration between Ivorian institutions at local and national level and international cooperation partners. It was initially developed to assess carbon storage in agroforestry parcels in Northern Côte d’Ivoire. In these agroforestry systems, short-rotation Acacia trees were planted. These leguminous trees are able to fix atmospheric nitrogen and help improve soil quality.
A resulting shorter fallow cycle can reduce pressure on neighboring forests, and the leguminous trees help to avoid soil degradation. The MRV system is currently being adapted for a different land use in Southern Côte d’Ivoire, where carbon stock enhancements through shaded cacao agroforestry systems and restoration of community forests and degraded riverbanks are measured. Carbon stored in soil, litter, dead wood, trees planted during the agroforestry activity and other (pre-existing) trees is assessed using standard inventory techniques. Allometric equations are used to estimate tree biomass. In the case of the young Acacia trees, specific local allometric equations are developed. Commercial drones and structure-from-motion techniques are used to obtain a three-dimensional model of the plots. These three-dimensional models are used to obtain a surface model and a terrain model, from which a crown height model (CHM) is derived. Individual tree heights are obtained by automatically identifying trees in the CHM using local maximum filtering. Tree heights and recorded planting densities are then used to derive above-ground biomass through allometric equations. High resolution stereo satellite data from the SkySat constellation (operated by Planet, Inc.) are used to obtain a digital surface model at 80 cm resolution. Thin plate spline interpolation is used to derive a digital terrain model by automatically identifying ground points in the surface model and interpolating the terrain between the obtained ground points. The CHM and recorded planting densities are then used to estimate the biomass of the plantations. Correlation between drone-obtained data and ground-sampled data is satisfactory (r² = 0.75). There is potential for improvement through better filtering of vegetation which is sometimes misclassified as soil, leading to biases in AGB estimates. Correlation between satellite-derived data and ground-sampled data is weaker (r² = 0.53).
Here too, improving the algorithm for identifying ground points in the satellite images could further improve the results. Another limitation is that most plantation trees are very young, so tree height is sometimes close to the measurement accuracy, and some young trees are mistaken for ground. Planned adaptations and enhancements of the system for shaded cacao plantations include the use of multispectral drone data for automated discrimination between plantation trees and other trees, machine learning and object recognition techniques to improve the mapping of individual trees and small tree clusters, and better non-tree (ground) point classification for the derivation of digital terrain models. For data storage and analyses, a web-based system was set up that supports local ground sampling using a mobile app, where data are directly captured in the field using tablets. Analyzed carbon stock data are then stored in a centralized database that can be used for reporting and verification purposes through a Web-GIS interface. The system is based on various Open Source components and can be easily adapted and configured for different agroforestry types.
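The tree-top detection step described above — local maximum filtering of the CHM followed by allometric conversion — can be sketched with plain numpy. The window size, the minimum-height cutoff and the power-law allometry coefficients below are illustrative assumptions, not the project's calibrated values:

```python
import numpy as np

def tree_tops(chm, window=3, min_height=2.0):
    """Find tree tops as local maxima of a canopy height model (2-D array, m).

    A pixel is a tree top if it equals the maximum of its window x window
    neighbourhood and exceeds min_height (dropping near-ground returns,
    mirroring the young-tree caveat discussed above).
    """
    pad = window // 2
    padded = np.pad(chm, pad, mode="constant", constant_values=-np.inf)
    # stack all neighbourhood shifts and take the per-pixel maximum
    shifts = [padded[i:i + chm.shape[0], j:j + chm.shape[1]]
              for i in range(window) for j in range(window)]
    neighbourhood_max = np.max(shifts, axis=0)
    mask = (chm == neighbourhood_max) & (chm > min_height)
    rows, cols = np.nonzero(mask)
    return list(zip(rows.tolist(), cols.tolist(), chm[mask].tolist()))

# toy 5x5 CHM with one tall tree and one seedling below the cutoff
chm = np.zeros((5, 5))
chm[1, 1] = 6.0   # tall tree
chm[3, 3] = 1.5   # below min_height, ignored
tops = tree_tops(chm)

# toy height-only power-law allometry (coefficients hypothetical), AGB in kg
agb_kg = [0.05 * h ** 2.5 for _, _, h in tops]
```

In practice a height-dependent window and a smoothed CHM are common refinements, since a fixed window tends to over-detect tops in large spreading crowns.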
Add to Google Calendar

Tuesday 24 June 17:45 - 18:30 (Frontiers Agora)

Session: C.05.10 EO National Missions Implemented by ESA - Future Evolution

The session will provide examples of capabilities developed by the national missions under implementation at ESA. Furthermore, it will give the opportunity to explore potential cooperation, challenges and further developments ahead.

Speakers:


  • S Lokas – ESA
  • Konstantinos Karantzalos – Secretary General, Greek Ministry of Digital Governance and Greek Delegate to the ESA Council
  • Dimitris Bliziotis – Hellenic Space Centre and Greek delegate to PBEO
  • G. Costa – ESA
  • F. Longo – ASI
  • D Serlenga – ESA
  • Head of Delegation to ESA – MRiT
  • R. Gurdak – POLSA
  • L. Montrone – ESA
  • N. Martin Martin / J.M. Perez Perez – (Affiliation not specified)
  • Pedro Costa – CTI
  • Betty Charalampopoulou – Geosystems Hellas CEO and BoD Hellenic Association of Space Industry
  • Dr. hab. inż. Agata Hościło – Institute of Environmental Protection – National Research Institute
  • A. Taramelli – ISPRA
  • V. Faccin – ESA
  • R. Lanari – CNR/IREA
  • M. Manunta – CNR/IREA
  • L. Sapia – ESA
  • E. Cadau – ESA
  • Rosario Quirino Iannone – ESA
  • Mario Toso – ESA
  • Enrique Garcia – ESA
  • Ana Sofia Oliveira – ESA
  • Ariane Muting – ESA
  • V. Marchese – ESA
  • Jolanta Orlińska – POLSA
  • G. Grassi – ESA
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A.09.01 - POSTER - The mountain cryosphere in peril – improved monitoring of snow and ice in complex terrain to address societal challenges in the face of climate change

The impact of climate change on the cryosphere in mountain areas is increasing, affecting billions of people living in these regions and in downstream communities. The latest Intergovernmental Panel on Climate Change Assessment Report highlights the importance of monitoring these changes and assessing trends for water security, as well as the risks of geo-hazards such as glacial lake outburst floods (GLOFs), landslides, and rockfalls.

This session will explore advanced methods and tools for monitoring physical parameters of snow, glaciers, and permafrost in mountainous regions using data from current satellites. We will also discuss the potential of satellites to be launched in the near future to enhance these observations and fill any gaps. By improving our understanding of water availability in mountainous areas and identifying key risks, we can develop strategies to adapt to changing conditions and better protect these vulnerable regions.

We welcome contributions on advanced geophysical observations of snow, glaciers and permafrost variables in mountainous regions around the world using different satellite data and their impact on water resources and the increasing risks posed by geo-hazards under changing climate conditions.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Trends in the annual snow melt-out day over the French Alps and the Pyrenees from 38 years of high resolution satellite data (1986–2023).

Authors: Zacharie Barrou Dumont, Simon Gascoin, Jordi Inglada, Andreas Dietz, Jonas Köhler, Matthieu Lafaysse, Diego Monteiro, Carlo Carmagnola, Arthur Bayle, Jean-Pierre Dedieu, Philippe Choler
Affiliations: Magellium, Center for the Study of the Biosphere from Space (CESBIO), CNES/CNRS/IRD/UT3 Paul Sabatier, German Remote Sensing Data Center (DFD), German Aerospace Center (DLR), Grenoble Alpes university, Météo-France, CNRS, CNRM, Center for the study of snow, Grenoble Alpes university, Savoie Mont Blanc university, CNRS, Alpine Ecology Laboratory (LECA), Institut of Environmental Geosciences (IGE), Grenoble Alpes university/CNRS/Grenoble INP/ INRAE / IRD
Information on the spatio-temporal variability of the seasonal snow cover duration over long time periods is critical for studying the response of mountain ecosystems to climate change. In particular, the annual snow melt-out day (SMOD, i.e. the last day of snow cover) modulates the onset of the growing season and therefore has a profound impact on alpine vegetation dynamics and productivity. However, little is known about SMOD trends at larger scales in the European mountains, due to the sparse distribution of in situ observations and the lack of adequate remote sensing products: multi-decade time series of the snow cover area are typically derived from low-resolution sensors such as MODIS (20 years, 500 m) or AVHRR (35 years, 1 km), which fail to capture the high spatial variability of the mountain snowpack. Accounting for cloud cover, the effective revisit of the Landsat program (53 years, 30-60 m) is approximately one observation per month or less, which hinders applications in mountain ecosystems. The release into the public domain by the French Space Agency (CNES) of the full collection of SPOT 1-5 images with the SPOT World Heritage (SWH) program provided a unique opportunity to densify the Landsat time series from 1986 to 2015 with thousands of 20 m resolution multi-spectral images. For this study we therefore combined snow cover data from ten different optical platforms, including SPOT 1-5, Landsat 5-8 and Sentinel-2A&B, to build an unprecedented multidecadal time series of the annual SMOD at 20 m resolution across the French Alps and the Pyrenees from 1986 to 2023. The snow cover information was extracted from SWH images using deep learning and an innovative image emulation method [1]. We evaluated the pixel-wise accuracy of the computed SMOD using in situ snow measurements at 344 stations.
We found that the residuals are unbiased (median error of 1 day) despite substantial dispersion (RMSE of 28 days), allowing us to study SMOD trends after spatial aggregation stratified by region and topographic class. The selected regions, called "massifs", are relatively homogeneous with respect to their principal climatological characteristics at a given elevation, slope, and aspect. We found a general reduction in the SMOD, revealing a widespread trend toward earlier disappearance of the snow cover, with an average reduction of 20.4 days (5.51 days per decade) over the French Alps and of 14.9 days (4.04 days per decade) over the Pyrenees over the period 1986–2023. The SMOD reduction is robust and significant in most parts of the French Alps and can reach one month above 3000 m. The trends are less consistent and more spatially variable in the Pyrenees [2]. The historical SMOD dataset is freely available for future studies of mountain ecosystem changes, and is being extended by the Copernicus Land Monitoring Service (CLMS), which operationally produces and disseminates the Snow Phenology (SP S2) yearly product based on Sentinel-2 observations at the European scale. This work was supported by the TOP project under grant agreement ANR-20-CE32-0002. [1] Barrou Dumont, Z., Gascoin, S., and Inglada, J., 2024. Snow and Cloud Classification in Historical SPOT Images: An Image Emulation Approach for Training a Deep Learning Model Without Reference Data, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, pp. 1–13, https://doi.org/10.1109/JSTARS.2024.3361838. [2] Barrou Dumont, Z., Gascoin, S., Inglada, J., Dietz, A., Köhler, J., Lafaysse, M., Monteiro, D., Carmagnola, C., Bayle, A., Dedieu, J.-P., Hagolle, O., and Choler, P., 2024. Trends in the annual snow melt-out day over the French Alps and the Pyrenees from 38 years of high resolution satellite data (1986–2023), EGUsphere [preprint], https://doi.org/10.5194/egusphere-2024-3505.
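The decadal trend figures quoted above are, in essence, the slope of a least-squares fit of SMOD against year; a minimal illustrative sketch (not the authors' code):

```python
import numpy as np

def smod_trend(years, smod):
    """Least-squares trend of the snow melt-out day (SMOD) vs. year,
    returned in days per decade. NaNs (e.g. cloudy years) are skipped."""
    years = np.asarray(years, dtype=float)
    smod = np.asarray(smod, dtype=float)
    ok = ~np.isnan(smod)
    slope, _ = np.polyfit(years[ok], smod[ok], 1)  # slope in days per year
    return slope * 10.0                            # days per decade

# Synthetic series: SMOD shrinking by 0.5 day/yr => -5 days per decade
yrs = np.arange(1986, 2024)
print(round(smod_trend(yrs, 150 - 0.5 * (yrs - 1986)), 1))  # -> -5.0
```

In the study this computation would be applied after spatial aggregation by massif and topographic class, not to raw pixels.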

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Data assimilation of sparse snow depth observation with optimized spatial transfer of information

Authors: Marco Mazzolini, Marianne Cowherd, Kristoffer Aalstad, Manuela Girotto, Esteban Alonso-González, Désirée Treichler
Affiliations: University Of Oslo, UC Berkeley, Pyrenean Institute of Ecology
The satellite laser altimeter ICESat-2 provides accurate surface elevation observations across our living planet. With a high-resolution digital elevation model (DEM), we can use such measurements to retrieve snow depth profiles. Such observations are of great societal relevance because water managers could potentially use them to infer snow amounts even in remote montane areas, where the role of snow as a water tower is not actively monitored. However, these retrievals are not currently used operationally because they are very sparse in space and time: ICESat-2 measures along profiles with a three-month repeat interval, and the high spatio-temporal variability of the seasonal snowpack limits the observations’ value. Data assimilation (DA) methods allow us to use information from snow observations to constrain snow models and provide gap-free distributed simulations. The assimilation of observations such as snow cover is considered the state of the art for generating retrospective reanalyses, but the use of sparse snow depth observations in DA is an active research area, since these could be used operationally. Covariance localization has been adopted to spatially transfer information from observed locations to similar unobserved locations. Traditionally, geographical distance has been used to define the similarity between locations. In previous studies, we showed that topographical indices and the climatology of the melt-out date are also relevant parameters for determining similarity. However, this similarity measure was treated as a fixed hyperparameter. In this work, we exploit airborne lidar snow depth maps acquired by the Airborne Snow Observatory (ASO) to optimize the similarity measure between simulated cells. Gaussian Processes (GP) offer a probabilistic approach to infer the relative relevance of geographic, topographic and snow-climatology variables through a method called Automatic Relevance Determination (ARD).
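The ARD idea described above can be sketched with scikit-learn: an anisotropic RBF kernel learns one length scale per predictor, and a shorter learned length scale marks a more relevant predictor. The predictor names and synthetic data below are assumptions for illustration, not the authors' setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(42)
# Hypothetical predictors: e.g. elevation, aspect, melt-out climatology
X = rng.uniform(size=(200, 3))
# Only the first predictor actually drives the target in this toy example
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=200)

# One length scale per dimension (anisotropic RBF) => ARD
kernel = RBF(length_scale=[1.0, 1.0, 1.0])
gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-2, normalize_y=True).fit(X, y)

# The relevant predictor should come out with the shortest length scale
print(gp.kernel_.length_scale)
```

Inverting the learned length scales then yields the relative relevance weights that could feed the covariance-localization similarity measure.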
Preliminary results from the East River basin in Colorado, USA, indicate that ARD can successfully learn repeated snow depth patterns from a water year and improve the spatial transfer of information for successive water years when only a profile is measured. The learned relative relevance is used in a set of full spatio-temporal DA experiments designed to quantify the potential contribution of snow depth observations from the satellite altimeter ICESat-2 for seasonal snow operational forecasts.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Quantifying Uncertainty in Supraglacial Lake Depth Modeling from Optical Remote Sensing Data: Insights from Greenland

Authors: Samo Rusnák, Lukáš Brodský
Affiliations: Department of Applied Geoinformatics and Cartography, Faculty of Science, Charles University
Glacier dynamics driven by climate change is a closely monitored global issue. Monitoring the presence of supraglacial lakes (SGL) and their metrics, such as area, depth, and volume, can provide insights into meltwater dynamics and glacier stability. However, due to their remote locations, field monitoring is limited, which makes remote sensing techniques essential for large-scale and frequent SGL monitoring. Accurately estimating SGL volume and depth requires refining the physical model, which in current research neglects the effects of cryoconite on the glacier surface and of suspended particulate matter in the water. Additionally, model calibration relies on only a single publicly available dataset from 2010 (Tedesco et al. 2015). Addressing this limitation is essential for reducing uncertainties in depth and volume estimates from remote sensing data. The Greenland Ice Sheet was used as a case study to demonstrate the current physical model, due to the significant presence of SGLs as well as the available dataset. Supervised classification for SGL detection and regression analysis of the light attenuation coefficient (the physical model's g parameter) were applied to Landsat 7 scenes corresponding to the spatial and temporal coverage of the dataset by Tedesco et al. (2015). This study is the first to analyze and quantify the variability of the lake bottom albedo (the physical model's Ad parameter) and its impact on SGL depth and volume estimation. The analysis revealed significant variability in the parameter Ad, resulting in SGL volume uncertainty of up to 66% under various physical model parameterizations. For a single Landsat 7 image used in this study, the estimated SGL volume ranges from 124 million m³ to 207 million m³. This indicates the limitation of global parametrization and highlights the need for improved calibration to enhance the model's accuracy.
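For context, the single-band depth model to which the g and Ad parameters refer is commonly written in the literature (often attributed to Sneed and Hamilton, 2007) as z = [ln(Ad − R∞) − ln(Rpix − R∞)] / g; a minimal sketch with illustrative parameter values (not the study's calibrated ones):

```python
import numpy as np

def lake_depth(r_pix, ad, r_inf, g):
    """Single-band radiative-transfer lake depth (m).
    ad: lake-bottom albedo, r_inf: reflectance of optically deep water,
    g: light attenuation coefficient, r_pix: observed pixel reflectance."""
    return (np.log(ad - r_inf) - np.log(r_pix - r_inf)) / g

# Varying Ad (the parameter whose variability the study quantifies)
# changes the retrieved depth, hence the volume uncertainty:
for ad in (0.4, 0.5, 0.6):
    print(ad, round(lake_depth(r_pix=0.2, ad=ad, r_inf=0.05, g=0.8), 2))
```

The loop illustrates why an uncalibrated, spatially variable Ad propagates directly into depth, and therefore volume, uncertainty.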
The proposed improvement of the physical model is significant for providing more accurate glacier monitoring through optical remote sensing data, which is relevant for understanding climate change impacts on glacier dynamics. Reference: TEDESCO, M., STEINER, N., POPE, A. (2015): In situ spectral reflectance and depth of a supraglacial lake in Greenland, Arctic Data Center, https://doi.org/10.5065/D6FQ9TN2.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Towards the development of a hybrid satellite product for snowline and meltline estimation at the scale of mountain massifs

Authors: Karlotta Kilias, Guillaume James, Fatima Karbou, Cléménce Turbé, Adrien Mauss
Affiliations: CNRM/CEN - Météo-France - CNRS, Université Grenoble Alpes / Grenoble INP - Inria - Laboratoire Jean Kuntzmann, DirOP / Centre de Météorologie Spatiale - Météo-France
The evolution of the snowline is an essential variable for short- and long-term snow cover monitoring in mountain massifs. While remote sensing technologies such as Sentinel-1 Synthetic Aperture Radar (SAR) and Sentinel-2 optical imagery provide great value in this field, their exploitation remains a complex issue. This study works towards the development of a comprehensive product for snowline detection at the scale of massifs, taking into account several available satellite sources for any given date. Such a product would benefit many end-users, including forecasters, who desire prompt information on the elevation of snow cover across an entire catchment or mountain range. In the frame of this work, we focus on the inclusion of Sentinel-1 and Sentinel-2 data. For each mountain range, we project the native satellite images (image ratio constructed with a multi-annual snow-free reference for Sentinel-1 and the NDSI index for Sentinel-2) into an altitude-orientation reference frame using the SRTM DEM. The orientation-altitude diagrams are then partitioned into snow-covered and snow-free zones by means of a segmentation method, which allows us to directly infer the snowline at massif scale. As it constitutes the most crucial step of the process, the choice of the segmentation method merits a particular focus. Beyond the classical threshold-based method introduced by T. Nagler and H. Rott, we explore an extended version of the Chan-Vese method, a mathematical image segmentation approach based on the minimization of an energy term through curve evolution. The Chan-Vese method notably allows for an easy incorporation of several images from different sources into the energy term, depending on their availability on a given date. As we work in the orientation-altitude system, the input images can have different resolutions.
Weights are assigned to the input information to account for resolution differences and weather conditions (for instance, to favor optical images under clear-sky conditions, but to rely more heavily on SAR information when cloud cover is high). Among the satellite data sources, Sentinel-1 SAR plays a particular role, since it reacts to the liquid water content of the snow cover. This implies that when using SAR as the only source, we can additionally obtain information on the meltline, i.e. the limit between wet and dry snow, which provides valuable insights into the development of the snow cover during the melting period. In the future, the inclusion of other data sources, such as Sentinel-3 or VIIRS, will also be of interest, as it allows for tighter temporal monitoring (ideally daily).
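A hedged sketch of how a weighted multi-source Chan-Vese energy of the kind described above can be written (notation ours, not the authors'):

```latex
E\big(C, \{c^{\mathrm{in}}_k\}, \{c^{\mathrm{out}}_k\}\big) =
  \sum_k w_k \left[
    \int_{\mathrm{in}(C)} \big(I_k(x) - c^{\mathrm{in}}_k\big)^2 \, dx
  + \int_{\mathrm{out}(C)} \big(I_k(x) - c^{\mathrm{out}}_k\big)^2 \, dx
  \right]
  + \mu \, \mathrm{Length}(C)
```

Here the $I_k$ are the co-registered inputs available on the given date (e.g. the Sentinel-1 ratio image and the Sentinel-2 NDSI, both projected into the orientation-altitude frame), $c^{\mathrm{in}}_k$ and $c^{\mathrm{out}}_k$ are the mean values inside and outside the evolving contour $C$, and the weights $w_k$ encode the resolution- and weather-dependent reliability of each source.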

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Testing the Retrieval Capabilities of Hyperspectral and Multispectral Sensors for Snow Cover Fraction (SCF)

Authors: Riccardo Barella, Carlo Marin, Claudia Notarnicola, Claudia Ravasio, Biagio Di Mauro, Erica Matta, Dr.ssa Claudia Giardino, Umberto Morra di Cella, Roberto Garzonio, Dr. Monica Pepe, Roberto Colombo, Katayoun Fakherifard
Affiliations: Eurac Research, Università Milano Bicocca, CNR ISP, ARPA VDA, CNR IREA, Agenzia Spaziale Italiana (ASI)
The Snow Cover Fraction (SCF)—defined as the percentage of a pixel's surface covered by snow—is a key metric for characterizing snow distribution, particularly in areas with patchy or discontinuous snow cover. Its relevance is heightened in complex mountainous terrains or when analyzing satellite imagery with coarse spatial resolutions. Unlike binary Snow Cover Area (SCA) classifications, SCF offers finer granularity, enabling detailed snow characterization critical for hydrological and climate studies. SCF retrieval is predominantly based on optical remote sensing in the visible and shortwave infrared regions. While traditional methods, such as regression on the Normalized Difference Snow Index (NDSI) and multispectral unmixing, are widely used for their simplicity and reasonable performance, they face significant limitations in regions with complex topography or under atmospheric disturbances. Furthermore, these approaches fail to leverage the full potential of hyperspectral data, which offers richer spectral information. This study investigates the performance of linear and non-linear spectral unmixing methods for SCF retrieval using hyperspectral PRISMA and multispectral Sentinel-2 imagery. Data were acquired over Cervinia, Italy, on 4 July 2024, during a dedicated in situ campaign. Field spectroscopy data and very high-resolution (VHR, 25 cm) RGB images from a drone were collected to generate reference SCF maps at 20 m and 30 m resolutions for validating PRISMA and Sentinel-2 estimations, respectively. SCF retrieval algorithms evaluated in this work include linear regression on NDSI, linear spectral unmixing, and non-linear spectral unmixing. Various combinations of end-member spectra, sourced both from in situ measurements and direct image extraction, were tested. Additionally, the hyperspectral capabilities of PRISMA enabled experimentation with diverse spectral band combinations and bandwidths. 
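As background, the fully constrained linear spectral unmixing named above solves for non-negative end-member fractions that sum to one, with the snow fraction as the SCF estimate. A minimal sketch with made-up end-member spectra (not the campaign data):

```python
import numpy as np
from scipy.optimize import nnls

# Illustrative end-member reflectances (columns: snow, rock) in three bands;
# real end-members would come from field spectra or the image itself.
E = np.array([[0.90, 0.10],   # visible band
              [0.85, 0.15],   # near-infrared band
              [0.05, 0.20]])  # SWIR band (snow is dark here)

def unmix_scf(pixel, E, w=1e3):
    """Non-negative least squares with a heavily weighted sum-to-one row,
    a common way to softly enforce the full abundance constraint."""
    A = np.vstack([E, w * np.ones(E.shape[1])])
    b = np.append(pixel, w)
    fractions, _ = nnls(A, b)
    return fractions[0]  # fraction of the snow end-member = SCF

pixel = 0.6 * E[:, 0] + 0.4 * E[:, 1]   # synthetic 60% snow pixel
print(round(unmix_scf(pixel, E), 2))    # -> 0.6
```

With hyperspectral input such as PRISMA, E simply gains many more rows (bands), which is part of what the study evaluates against the multispectral case.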
Preliminary results highlight that SCF below 20% is challenging to detect. Nonetheless, the rich spectral information provided by PRISMA demonstrates improved performance compared to multispectral sensors. However, geolocation accuracy significantly impacts the retrieval results, underscoring the need for precise alignment in image processing. This work represents one of the first real-world validations of SCF retrieval methods using both hyperspectral and multispectral sensors. The findings enhance our understanding of SCF detection limits and the sensitivity of retrieval algorithms, contributing to advancements in snow monitoring techniques. The insights gained may inform the design of future multispectral sensors optimized for SCF retrieval. Acknowledgements: • “Research work carried out using ORIGINAL PRISMA Products - © Italian Space Agency (ASI); the Products have been delivered under an ASI License to Use”. • This work is carried out within Contract “SCIA” n. 2022-5-E.0 (CUP F53C22000400005), funded by ASI in the “PRISMA SCIENZA” program.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Designing a permafrost & climate change response system in Longyearbyen, Svalbard

Authors: Maaike F. M. Weerdesteijn, Hanne Hvidtfeldt Christiansen, Marius O. Jonassen
Affiliations: University Centre In Svalbard
The Arctic town of Longyearbyen on Svalbard is located at 79°N and is built on permafrost. Longyearbyen is situated in a narrow valley surrounded by steep mountain slopes. Svalbard is currently undergoing some of the strongest climatic warming observed. With rising air temperatures, the permafrost active layer thickness increases, and the ground stays unfrozen longer into the autumn. Thawing permafrost in Longyearbyen has two major consequences: (1) damage to the town’s modern infrastructure and cultural heritage and (2) increased landslide risk due to mountain slope instability. Here, we give an example of each of these consequences. (1) Inadequate building foundation design has caused abrupt evacuations of buildings in recent years, displacing people from their homes. There is minimal space for Longyearbyen to expand, due to the limited unused and unprotected land between the mountain slopes and the managed river flowing through the entire valley. Homes depreciated by permafrost thaw therefore add complexity to the already existing housing issue. (2) In October 2016, several landslides on the town’s valley sides were triggered as the entire thawed active layer detached during a rainstorm that delivered 20 mm of precipitation over 24 hours at an intensity of 2 mm/hour. A 75 mm rainstorm in November of the same year led to fewer and smaller landslides, because the active layer had started to freeze and rainwater could not penetrate into the ground. Summer 2024 also saw many landslides, exposing the top of the permafrost, and rockslides on mountains next to town, where residents and tourists hike for leisure. These events call for a need to observe, monitor, and predict the increasingly dynamic landscape. We do so with ground- and satellite-based observations, feeding these into the response system. The ground-based instruments are equipped with telemetric devices that send data in real time over the mobile network.
Permafrost temperature and ground water content are measured by thermistor strings in boreholes and soil moisture sensors recording through the entire active layer in profiles on both sides of town near the boreholes. For the geotechnical mountain slope stability modelling we require high resolution weather simulations that resolve the steep topography around Longyearbyen. Weather stations in and around town, of which some are co-located with the boreholes, will be used to evaluate and further develop this weather model. Other inputs for the mountain slope stability modelling are a digital elevation model (DEM) and ground characteristics, such as ground ice content, thermal properties, and grain size of the sediment: information obtained from borehole cores. Deformation maps retrieved from interferometric synthetic aperture radar (InSAR) data processing over 2016-2023 are correlated to landforms in the valley, to identify areas of concern in the wetter summer and autumn periods. The next step is to apply pattern recognition between local observations of ground and meteorological conditions with the InSAR deformation maps for more robust landslide prediction, next to the geotechnical landslide modelling. We focus on developing resilience in Arctic communities by providing a geoscientifically developed coupled permafrost & climate change response system that is based on ground- and satellite-based observations. This system will assist decision-making by providing real-time key geoscientific observations and access to short-term landslide prediction output. The aim is to achieve a better information basis for making decisions about infrastructure design and maintenance, and for use in preparedness situations in connection to potential landslides.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Detection of Fresh Supraglacial Deposits Through Change Detection Analyses on Sentinel-2 Multispectral Data and Sentinel-1 Polarimetric Information

Authors: Chiara Crippa, Dr. Mattia Callegari, Carlo Marin, Giovanni Cuozzo, Dr. Claudia Notarnicola
Affiliations: Eurac Research
Climate warming is a global phenomenon with particularly pronounced impacts in high-altitude mountainous regions. These areas are experiencing rapid transformations, with glacial and permafrost retreat producing increasingly evident direct effects on the landscape. One significant outcome of these changes is the growing frequency of landslides in glacial environments, which reflects a paraglacial response to ice loss and permafrost degradation. The accumulation of debris on glaciers alters their thermal regime, impacts debris distribution, and affects mountaineering routes, while also endangering high-altitude infrastructure such as huts and bivouacs. Identifying the location and frequency of these events is thus a critical step for assessing areas vulnerable to destabilization and understanding their connection to external triggers. Here, we propose a tool implemented in Google Earth Engine and Python that combines spaceborne radar and multispectral data to detect and classify glacial changes, minimizing the limitations inherent in each sensor type. Our workflow, tested on the glacial surfaces of Vedretta della Miniera (Val Zebrù, Italy), Tscherva Glacier (Val Roseg, Switzerland), and Mount Cook (New Zealand) in different snow conditions, analyzes changes in the Normalized Difference Snow Index (NDSI) derived from Sentinel-2 (S2) within specific time ranges to extract preliminary debris maps. It then integrates Sentinel-1 (S1) backscatter information to fill information gaps caused by cloud coverage and provides refined information on surface changes. The methodology consists of two main analytical blocks. First, we filter Sentinel-2 optical images in a user-defined time range and over the glacial surface (Randolph Glacier Inventory extent; RGI Inventory, 2023) inside the selected area of interest.
We then exclude cloudy pixels and calculate the normalized difference snow index (NDSI) for all the others, applying a threshold to differentiate between areas likely covered by snow or ice (NDSI>0.3) and those corresponding to bare rock (NDSI<0.3). To improve rock pixel classification accuracy and avoid misidentifying temporarily covered rocks such as nunataks and bedrock outcrops, we compare each pixel's NDSI value with its value during the closest maximum ablation period. This comparison excludes pixels that already show rock signatures when snow cover is at its minimum. The union of pixels thus identified generates an initial debris extension map, with uncertainties stemming from the individual steps of cloud detection and snow and rock recognition. Depending on cloud extent in the considered time span, especially during the winter season, many pixels can remain unclassified, preventing a correct detection of surface changes. We therefore consider Sentinel-1 GRD intensity images (Mullissa et al., 2021) selected within the same user-defined timespan to compute the VH backscattering backward difference (∆dB) over all areas that have not been classified as debris in the S2 analysis. Using a size-dependent filter, tailored to the minimum landslide size we aim to detect, we outline discrete pixel clusters and compute the mean ∆VH within each of them. Clusters whose mean ∆VH falls outside the interquartile range (IQR) of the cluster values in the image indicate the most prominent changes on the glacier. By comparing this value with the VH value from the previous year’s accumulation period and applying a classification method based on a Support Vector Machine (SVM) model, we can distinguish between ice and snow cover. This allows us to isolate pixels that are more likely associated with debris accumulation, rather than with changes in snow or ice.
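The interquartile-range screening of cluster means described above can be sketched as follows (the cluster values are invented for illustration):

```python
import numpy as np

def flag_clusters(mean_dvh):
    """Return indices of clusters whose mean backscatter change falls
    outside the interquartile range of all cluster values."""
    q1, q3 = np.percentile(mean_dvh, [25, 75])
    return [i for i, v in enumerate(mean_dvh) if v < q1 or v > q3]

# Synthetic per-cluster mean delta-VH values (dB); two stand-out changes
cluster_means = np.array([0.1, -0.1, 0.1, 0.0, 0.1, -0.1, -4.8, 5.3])
print(flag_clusters(cluster_means))  # -> [6, 7]
```

In the full workflow the flagged clusters would then pass to the SVM step to separate genuine debris from snow/ice changes.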
Integrating S1 and S2 data enables the creation of a comprehensive map of new debris accumulation, minimizing uncertainties and reducing false positives. Our results demonstrate a strong correlation between the identified clusters and manually mapped landslide deposits, which were used as the reference extent. For instance, we observed a 90% overlap for the landslide clusters at Vedretta della Miniera and Mt. Cook, where the debris deposits were manually mapped shortly after the event. In contrast, the overlap at Mt. Scerscen was 50%; there, manual mapping from satellite images was only possible two months post-event due to persistent snow and cloud coverage. This delay led to the remobilization and rearrangement of the initial debris deposit, affecting the accuracy of the overlap. The tool leverages open-source libraries and datasets, making it readily adaptable to other glacial environments.

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: InSAR-based movement rate estimation and classification of rock glaciers in the Austrian Alps

Authors: Elena Nafieva, Daniel Hölbling, Emma Hauglin, Zahra Dabiri, Benjamin Aubrey Robson, Vanessa Streifeneder, Lorena Abad
Affiliations: Department of Geoinformatics – Z_GIS, University of Salzburg, Department of Earth Science, University of Bergen
Rock glaciers, which serve as critical indicators of permafrost dynamics and hydrological processes in alpine environments, are increasingly studied to understand their kinematics and response to climate change. Time series analysis of rock glacier velocities derived from Earth observation (EO) data enables the study of rock glacier behaviour, for example, seasonal and multi-year velocities, deformation rates and transitions from fast, chaotic glacial flow to more periglacial landforms with slow, spatially coherent velocities. In this study, we present a regional-scale analysis of rock glaciers for selected mountainous areas in Austria, using Sentinel-1 data and Interferometric Synthetic Aperture Radar (InSAR) techniques to derive movement rates. We propose a classification scheme based on kinematic behaviour and evaluate our results by comparing them to existing classifications. We will attach InSAR-derived movement rates to rock glacier delineations from an existing inventory created through manual interpretation (Wagner et al., 2020) as well as an inventory produced by deep learning techniques by the authors of this contribution within the project “ROGER” (EO-based rock glacier mapping and characterization). Thereby, we (1) assess the suitability of our results for confirming or disconfirming existing rock glacier classifications, (2) identify previously undocumented active rock glaciers, and (3) evaluate how the spatial patterns of movement rates align with rock glacier delineations automatically generated through deep learning. A key outcome of this research is the development of a classification scheme for rock glaciers based on their movement rates. By defining thresholds for distinct kinematic categories - ranging from inactive to active rock glaciers - we establish a framework that supports comparisons across regions. 
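A threshold-based kinematic scheme of the kind proposed could look like the following sketch; the velocity thresholds here are hypothetical placeholders, not the study's calibrated values:

```python
# Illustrative kinematic classification of rock glaciers from
# InSAR-derived movement rates (thresholds are assumed, for illustration).

def classify_rock_glacier(rate_cm_per_yr):
    if rate_cm_per_yr < 1.0:
        return "inactive"
    if rate_cm_per_yr < 10.0:
        return "transitional"
    return "active"

for rate in (0.3, 4.0, 35.0):
    print(rate, classify_rock_glacier(rate))
```

In practice such labels would be attached as attributes to the inventory delineations, enabling the cross-regional comparisons the abstract describes.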
The classification also provides insights into the relationship between movement rates, topographic conditions, and geomorphological characteristics. In addition to understanding rock glacier dynamics, this study underscores the potential of InSAR technology for monitoring alpine permafrost regions. The methodology and proposed classification scheme can be applied to other mountainous regions, supporting global efforts to assess permafrost stability in the context of climate change. We will present results that include spatial maps of rock glacier movement, examples of inventory validation and information enrichment (i.e. movement rates attached to rock glacier delineations), and the proposed classification framework. These insights can contribute to advancing our understanding of rock glacier behaviour, thereby supporting water resource management and hazard mitigation efforts in alpine environments. Wagner, T., Ribis, M., Kellerer-Pirklbauer, A., Krainer, K., Winkler, G., 2020. The Austrian rock glacier inventory RGI_1 and the related rock glacier catchment inventory RGCI_1 in ArcGis (shapefile) format [dataset]. PANGAEA. https://doi.org/10.1594/PANGAEA.921629

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Seven decades of change in the debris-covered Belvedere Glacier (Western Italian Alps)

Authors: Lukáš Brodský, PhD. Roberto Azzoni, Associate Professor Irene Bollati, Associate Professor Jan Kropáček, Prof. Marcus Nüsser, PhD. Susanne Schmidt, Prof. Vít Vilímek
Affiliations: Charles University, Department of Applied Geoinformatics and Cartography; University La Statale of Milan, Earth Science Department “A. Desio”; Charles University, Department of Physical Geography and Geoecology; Heidelberg University, Department of Geography, South Asia Institute (SAI)
The Belvedere Glacier in the Western Italian Alps has undergone notable alterations as a consequence of climate change, cryosphere dynamics, and related geomorphological processes. This study integrates findings from investigations by the research team to present a synthesis of glacier changes over seven decades (1951–2023), with a particular emphasis on surface evolution, elevation dynamics, supraglacial lake fluctuations, lateral moraine instability, and interactions with debris flows from tributary basins. The analysis of historical orthophotos, UAV imagery, and digital surface models has revealed three distinct phases of retreat: the disconnection of the Nordend Glacier between 1951 and 1991, the partial separation of the central accumulation basin between 2006 and 2015, and the subsequent detachment of the Locce Nord Glacier between 2018 and 2021. These changes, in conjunction with a surge-type event (1999–2002), have accelerated glacial retreat and downwasting, with rates comparable to those observed in the post-2000 period. The elevation data indicate a significant increase in downwasting rates, from 0.24 meters per year between 1951 and 2009 to 1.8 meters per year between 2009 and 2023. This increase is associated with spatial heterogeneity, influenced by factors such as debris cover, meltwater flow, and supraglacial lake dynamics. Furthermore, the equilibrium line altitude (ELA) of four glaciers in the Monte Rosa Massif, including the Belvedere Glacier, was mapped using Sentinel-2 data for the period 2016-2023. The resulting mean ELA varied over a range of 340 m, which likely reflects differences in slope orientation and the amount of snow accumulation. The mean ELA for the Belvedere Glacier was 3230 m a.s.l. The temporal pattern of the ELA for the Belvedere Glacier did not differ from that of the other glaciers, despite the steep slope and frequent avalanching.
Moreover, in August 2023, a debris flow from the Castelfranco tributary basin entered the Belvedere Glacier in its lowest left lobe, contributing to erosion and the opening of ice cliffs. These findings highlight the necessity of sustained, continuous observation, monitoring, and assessment of the impacts of debris-covered glaciers and slope movements on glacier surface and stability. Concurrently, lateral moraine sliding in the vicinity of tourist infrastructure, at rates of 1.87–1.98 meters per year (2018–2023), underlines the importance of integrating remote sensing analyses with field surveys and dendrogeomorphological analysis to identify potential precursors of ground failure. Supraglacial lakes, most notably Lake Effimero, exhibit fluctuating areas (428 m² to 99,700 m²) influenced by snowmelt and glacier dynamics. New lakes were observed to form consistently, reflecting evolving hydrological conditions and the potential for outburst floods. The research highlights the potential of high and very high spatial resolution images to facilitate the detailed detection of glacier processes.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A Snow Reanalysis for the Central and Southern European Mountains Based on ESA-CCI Products

Authors: Esteban Alonso-González, Ignacio López-Moreno, Kristoffer Aalstad, Laura Sourp, Simon
Affiliations: CSIC
Snowpack plays a crucial role in numerous hydrological and ecological processes. Despite their generally mild conditions, the Mediterranean regions of southern and central Europe host a vast array of mountain massifs of sufficient elevation to sustain deep and long-lasting snowpacks. These snowpacks act as natural water reservoirs, storing fresh water during the colder months and releasing it as the snowpack melts from late spring to early summer. The strong seasonality of precipitation in climates under Mediterranean influence makes snowmelt a critical resource, since it synchronizes water availability with water demand during the drier months. Southern European countries are not only densely populated regions but also Europe's primary agricultural hubs, and the need for fresh water during early summer therefore makes snowpack pivotal for sustaining European food production. Moreover, given the complex topography of southern European countries, they exhibit great potential for hydropower generation, provided sufficient water is available. Thus, fresh water stored in the snowpacks of these regions is a relevant component of the southern European hydropower industry. Mediterranean regions are identified as a climate change hotspot, with projections highlighting their vulnerability to ongoing warming. Snowpack is therefore significantly threatened by climate change, which could drastically reduce its extent and magnitude. Given the near-isothermal conditions of the snowpack during most of the year, and the often mild conditions under which snow falls (close to 0ºC) over extensive areas of these Mediterranean mountain ranges, even small increases in temperature may drive drastic changes in snowpack dynamics.
Despite this, the snowpack in the temperate regions of southern Europe remains under-studied, often overshadowed by more extensively studied or iconic regions such as the Alps or the polar regions. This underscores the need for greater scientific attention to the hydrological dynamics of southern European mountain ranges such as the Sierra Nevada (Spain), the Corsican mountains or the Balkan mountain ranges, to name a few. However, the lack of snowpack data in most of these regions poses a significant challenge for researchers and water managers interested in long-term snowpack dynamics. Here we present a new snow reanalysis covering all the mountain massifs of southern Europe and the central European mountain ranges of the Carpathians and the Alps. The product was developed using the Multiple Snow data Assimilation system (MuSA), ingesting observations, jointly with their uncertainties, generated in the frame of the snow and land surface temperature European Space Agency Climate Change Initiative (ESA-CCI) projects, into an ensemble of simulations generated by an intermediate-complexity numerical snowpack model, the Flexible Snow Model (FSM2). The reanalysis covers the period 2000–2020 at a resolution of approximately 1 km. We present the first validation results of the dataset using in situ and remotely sensed snowpack observations in the Pyrenees, demonstrating its inter- and intra-annual consistency with the available observations.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Remote sensing based early detection approaches for Glacial Lake Outburst Floods susceptibility: A case study of the 2024 Thyanbo Glacial Lake Outburst Flood near Thame (Nepal) using Persistent Scatterer Interferometry with Sentinel-1 imagery

Authors: Niels Dedring, Jun.-Prof. Dr. Andreas Rienow, Jun.-Prof. Dr. Valerie Graw
Affiliations: Ruhr University Bochum, Institute of Geography
There is a clear link between global warming and increased glacier melting, leading to the expansion of glacial lakes dammed by fragile moraines, which are unstable, often partially frozen glacial deposits. Triggers such as heavy rainfall, earthquakes, landslides, avalanches, glacier break-offs, or thawing permafrost can cause glacial lake outburst floods (GLOFs). These events result in moraine breaches, releasing flood waves of mud and debris that can cause significant damage and endanger populations far downstream. On 16 August 2024, a GLOF from the Thyanbo glacial lake primarily affected the village of Thame in the Namche region of Solukhumbu district, Nepal. The flood caused extensive destruction of local infrastructure, buildings and agricultural land, and displaced over 135 inhabitants. Initial investigations suggest that the trigger was a flood wave originating from the Ngole Cho glacial lake, which overtopped its terminal moraine. This flood wave then ran into the Thyanbo glacial lake, overtopped its terminal moraine and caused it to breach. This cascade of events triggered the GLOF that ran downstream. Although no casualties were reported, the International Charter Space and Major Disasters was activated three days after the GLOF (19.08.2024), which underlines the urgent need for investigation and research supported by Earth observation data. The study presented here analyses recent and past dynamics of all glacial lakes in the Thame Khola valley, with an emphasis on the GLOF event of August 2024. Integrating Sentinel-1 and Sentinel-2 as well as high-resolution PlanetScope satellite data, the lakes' areas, volume estimates and frozen-state periods are determined. To build a clearer picture of the exact cause and course of the GLOF, a change detection and runoff estimation will also be performed.
With Persistent Scatterer Interferometry (PSI) using Sentinel-1 SLC data, ground movements can be tracked and detected over time with millimetre accuracy along the satellite's line of sight. The PSI technique enables the measurement of displacements at identified persistent scatterers in SAR datasets, which mostly correspond to consistent features in the landscape such as man-made structures, natural rock outcrops, and exposed geological formations. This study aims to determine whether PSI could have predicted the collapse of the moraine and the resulting GLOF, and over what time scales. The PSI processing is carried out using the Stanford Method for Persistent Scatterers (StaMPS) on Sentinel-1 data. If this remote sensing approach proves suitable for the early detection of unstable moraines, PSI could help improve the identification and classification of potentially dangerous glacial lakes across the whole Hindu Kush Himalaya region and could be integrated into early warning systems for outburst susceptibility in future studies.
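The core trend estimate behind such a precursor assessment can be illustrated with a toy example: a hypothetical least-squares velocity fit to a single persistent scatterer's displacement series (the function name and the synthetic series below are invented; StaMPS itself performs far more elaborate phase unwrapping, filtering and atmospheric correction).

```python
import numpy as np

def ps_velocity(dates_days, los_disp_mm):
    """Least-squares line-of-sight velocity (mm/yr) of one Persistent
    Scatterer displacement time series.

    Hypothetical helper for illustration only; not part of StaMPS."""
    t_years = np.asarray(dates_days, dtype=float) / 365.25
    # Fit d(t) = v * t + d0 by ordinary least squares.
    v, _d0 = np.polyfit(t_years, np.asarray(los_disp_mm, dtype=float), 1)
    return v

# Synthetic two-year series at the Sentinel-1 12-day revisit interval,
# moving steadily at -12 mm/yr (invented, noise-free values).
t = np.arange(0, 730, 12)
d = -12.0 * t / 365.25
v = ps_velocity(t, d)
```

In practice one would inspect such velocities, and their changes between epochs, for acceleration prior to failure.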
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Assimilation of Satellite Retrieved Snow Depth (SD) and Snow Water Equivalent (SWE) Into a Snow Model

Authors: Ezra Beernaert, Dr. Kari Luojus, Dr. Hans Lievens
Affiliations: Hydro-Climate Extremes Lab (H-CEL), Ghent University, Finnish Meteorological Institute
Satellite observations of the snow water equivalent (SWE) in the world's mountain ranges are still lacking. This observation gap hinders the accurate estimation of total seasonal water storage in snow. To address it, a physical snow model can be implemented to obtain daily snow depth (SD) and SWE estimates for large regions. Generating a high-resolution SWE dataset with a snow model requires high-resolution meteorological forcings; here, the Multi-Source Weather (MSWX) and Multi-Source Weighted-Ensemble Precipitation (MSWEP) datasets are used. These 3-hourly forcing data, at 0.1° resolution, were downscaled to a resolution of 500 meters to account for terrain influences. Different options for the downscaling procedures (focusing on precipitation, temperature and solar radiation) were explored and evaluated to select the combination of methods that yielded the best possible SD and SWE simulations. A physically based snow model (SnowClim, developed by Lute et al., 2022) was calibrated to further optimize the simulations. Through data assimilation, further improvement of the modelled SD and SWE is possible. For mountainous regions, Sentinel-1 SD retrievals (Lievens et al., 2019, 2022) can be utilized; for non-mountainous regions in the northern hemisphere, the GlobSnow SWE dataset is available (Luojus et al., 2021). Here, we first investigated the assimilation of Sentinel-1 SD over the European Alps. The assimilation was found to improve the SD and SWE estimates compared to those based on the model or the satellite observations alone. In a second case study, the model and data assimilation framework is extended to simultaneously assimilate Sentinel-1 SD over mountain regions and GlobSnow SWE over non-mountainous regions in Scandinavia (Norway, Sweden and Finland).
The results of our study demonstrate the advantage of combining satellite information with the physically based snow model for daily, high-resolution and area-wide SWE estimation.
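The assimilation step can be sketched, in heavily simplified form, as a scalar ensemble Kalman update with perturbed observations. This is an illustration only: the abstract does not specify the scheme used, and all numbers below are invented.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err_var, seed=0):
    """One scalar ensemble Kalman analysis step with perturbed observations.

    A heavily simplified sketch of ensemble snow data assimilation;
    operational systems handle full state vectors and observation
    operators."""
    ens = np.asarray(ensemble, dtype=float)
    p = ens.var(ddof=1)                    # background (ensemble) variance
    k = p / (p + obs_err_var)              # Kalman gain
    rng = np.random.default_rng(seed)
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_err_var), ens.size)
    return ens + k * (perturbed - ens)     # analysis ensemble

prior = [0.8, 1.2, 1.0, 1.4, 0.6]          # model SD ensemble (m), invented
posterior = enkf_update(prior, obs=1.5, obs_err_var=0.05)
# The analysis ensemble is pulled from the prior mean (1.0 m) toward the
# satellite retrieval, weighted by the two uncertainties.
```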
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A.06.02 - POSTER - Enhancing Space Weather Understanding: Insights from LEO Satellite-Based Operational and Pre-Operational Products

Space weather and space climate refer to the interactions between the Sun and Earth over timescales ranging from minutes to decades. Predicting extreme space weather and developing mitigation strategies is crucial, as space assets and critical infrastructures, including satellites, communication systems, power grids, aviation, etc., are vulnerable to the space environment.

This session focuses on assessing the current status of the space weather forecast and nowcast products obtained from LEO satellite measurements, alongside other missions and ground-based technologies, and pushing forward with innovative concepts. We strongly encourage contributions that promote a cross-disciplinary and collaborative approach to advancing our understanding of space weather and space climate. Moreover, we welcome presentations that investigate the effects of space weather on diverse applications in Earth's environment, such as space exploration, aviation, power grids, auroral tourism, etc.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Dynamical Complexity in Swarm-derived Storm and Substorm Indices Using Information Theory: Implications for Interhemispheric Asymmetry

Authors: Constantinos Papadimitriou, Dr. Georgios Balasis, Dr. Adamantia Zoe Boutsi, Dr. Omiros Giannakis
Affiliations: National Observatory Of Athens - IAASARS, National and Kapodistrian University of Athens
In November 2023, the ESA Swarm constellation mission celebrated 10 years in orbit, offering one of the best-ever surveys of the topside ionosphere. Among its achievements, it has recently been demonstrated that Swarm data can be used to derive space-based geomagnetic activity indices, analogous to the standard ground-based geomagnetic indices, for monitoring magnetic storm and magnetospheric substorm activity. Given that the official ground-based index for substorm activity (the Auroral Electrojet, AE, index) is constructed from data from 12 ground stations located solely in the northern hemisphere, this index is predominantly northern, while the Swarm-derived AE index may be more representative of a global state, since it is based on measurements from both hemispheres. Recently, many novel concepts originating in time series analysis and based on information theory have been developed, partly motivated by specific research questions in various domains of the geosciences, including space physics. Here, we apply information theory approaches (the Hurst exponent and a variety of entropy measures) to analyze the Swarm-derived magnetic indices around intense magnetic storms. We show the applicability of information theory to studying the dynamical complexity of the upper atmosphere, highlighting the temporal transition from the quiet-time to the storm-time magnetosphere around the May 2024 superstorm, which may prove significant for space weather studies. Our results suggest that the spaceborne indices capture the same dynamics and behaviors, with regard to their informational content, as the traditionally used ground-based ones. Only a few studies have addressed the question of whether the auroras are symmetric between the northern and southern hemispheres.
Therefore, the possibility to have different Swarm-derived AE indices for the northern and southern hemispheres respectively, may provide, under appropriate time series analysis techniques based on information theoretic approaches, an opportunity to further confirm the recent findings on interhemispheric asymmetry. Here, we also provide evidence for interhemispheric energy asymmetry based on the analyses of Swarm-derived auroral indices AE North and AE South.
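As an illustration of the entropy measures mentioned above, a minimal histogram-based Shannon entropy estimate distinguishes a disordered, noise-like signal from an ordered one. The signals below are synthetic, not Swarm indices, and the estimator is a sketch of just one of the measures the analysis employs.

```python
import numpy as np

def shannon_entropy(x, bins=16):
    """Shannon entropy (bits) of a time series, estimated from a histogram.

    A minimal sketch of one information-theoretic measure of dynamical
    complexity; the study also uses the Hurst exponent and other
    entropy variants."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Synthetic illustration: an ordered two-level signal versus uniform
# noise. Lower entropy indicates more ordered dynamics, the kind of
# transition reported around intense storms.
rng = np.random.default_rng(1)
noise = rng.uniform(-1.0, 1.0, 4096)
ordered = np.sign(np.sin(np.linspace(0.0, 20.0 * np.pi, 4096)))
```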
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: The 10-11 May 2024 Geomagnetic Storm in the light of Swarm Observations

Authors: Balázs Heilig, Veronika Barta, Kitti Berényi, Máté Tomasik, Tamás Bozóki
Affiliations: HUN-REN Institute of Earth Physics and Space Science, Eötvös Loránd University, Institute of Geography and Earth Sciences, Department of Geophysics and Space Science, Space Research Group, HUN-REN – ELTE Space Research Group
The Swarm mission provides a wide range of observations and data products supporting the investigation of magnetosphere-ionosphere coupling processes. A special region for these coupling processes is the subauroral ionosphere, conjugate to the plasma boundary layer. In this paper, we demonstrate how Swarm observations can provide insight into storm-time dynamic processes, using the latest geomagnetic superstorm, the 10-11 May 2024 event, as an example. During this event, the Swarm A/C pair orbited in the 07/19 MLT sector, while Swarm B explored the pre-noon/pre-midnight MLT sector. Magnetic and electric field observations, observations of plasma structures, and field-aligned, ionospheric and magnetospheric currents provide a rich and complex context for the interpretation of the evolving processes. The 10-11 May 2024 event was extreme in many aspects, some of which will be presented in this contribution.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Comparative analysis of socioeconomic impacts of space weather: High vs. Mid-latitude vulnerabilities and mitigation strategies

Authors: Giulia Abbati, Sara Mainella, Pietro Vermicelli
Affiliations: Istituto Nazionale Di Geofisica E Vulcanologia, SpacEarth Technology SRL
Space weather extreme phenomena are increasingly recognized as a significant global economic risk [1]. Events such as solar flares and coronal mass ejections can directly disrupt critical spaceborne and ground-based infrastructures, including satellites, radiocommunications, and power grids, leading to service interruptions that may result in billions of euros in damages. Although these potential damages undermine societal stability and the economic sustainability of infrastructures, their owners, and the global economy, the scientific literature on this topic remains scarce. As Baker (2009) emphasizes [2], this scarcity stems from the complexity of addressing this research challenge, which requires an interdisciplinary approach that integrates scientific, engineering, economic, and social perspectives, as well as a general lack of data that prevents researchers from creating precise predictive models. However, an accurate understanding of the socioeconomic impact of space weather is fundamental to the development of appropriate resilience and mitigation strategies. In our previous study [3], we focused on high-latitude regions, as the interactions between the Earth's magnetic field and charged particles from the solar wind make these areas particularly susceptible to upper atmosphere phenomena (UAP), significantly affecting local technological infrastructures and energy systems. The socioeconomic impact of these events depends on both the latitude at which they occur and the severity of the event itself. Generally, high latitudes are more affected by these impacts, but extreme events can also cause significant damage at mid-latitudes. To illustrate this, we present a case study comparing two notable geomagnetic storms, focusing on their effects on GNSS receivers and the precision agriculture sector. This analysis explores and relates the findings of our previous study on the geomagnetic storm that occurred during Solar Cycle 24, known as the St. 
Patrick's Day storm, with those of a more recent geomagnetic storm that took place in May 2024, during Solar Cycle 25. The St. Patrick's Day storm on March 17, 2015, was the most severe geomagnetic storm of the 24th solar cycle, with a Dstmin index of approximately -226 nT, classifying it as an event with an annual occurrence probability [4]. During the main phase of the storm, high-intensity Medium Scale Traveling Ionospheric Disturbances (MSTIDs) led to a clear decrease in positional accuracy, exceeding 1.5 m for all components, with accuracy degraded to the point that positioning was impossible for over three hours. Considering that most GNSS applications in precision agriculture require accuracy below 1 m, and that an hour of GNSS outage costs this sector approximately €200,000, the estimated cost of the St. Patrick’s Day storm for precision agriculture (PA) is around €600,000 [5]. By contrast, the "Mother's Day Storm" of May 2024, triggered by an X8.7-class solar flare from Sunspot AR3664, peaked with a Dstmin of -412 nT, and can therefore be classified as an event with a once-per-ten-years occurrence probability [4]. As one of the most powerful storms of the current solar cycle, it serves as a significant benchmark event for assessing the vulnerability of modern technologies to space weather threats [6]. During this recent geomagnetic storm, a significant decrease in TEC (Total Electron Content) was observed, indicating a strong ionospheric disturbance, along with an increase in ROTI (Rate of TEC change Index), signaling ionospheric irregularities [6]. These irregularities, combined with scintillation effects detected by GNSS receivers, impacted satellite signal propagation, compromising navigation and communication accuracy. Although the underlying UAP were different, this May 2024 geomagnetic storm also severely affected the precision of GNSS-dependent systems used in agriculture.
Users reported deviations of up to 25 cm in tractor guidance lines, despite PDOP values suggesting high precision, leading some farmers to suspend planting activities [7]. A comparative analysis of these two storms provides the opportunity to evaluate the mid-latitude effects of space-weather events causing GNSS disturbances in the PA sector, which is gaining importance in Central Europe [9]. The fact that the GNSS disturbances originate in UAP of different natures calls for an in-depth analysis of the underlying physical processes to assess the expected duration and region of occurrence of the service outages, promoting better mitigation and resilience strategies. Given the increasing risk posed by space weather, and the expectation that the probability of such events will continue to rise over time [8], it is crucial to assess the potential impacts of these phenomena - both in terms of economic and social consequences - and to evaluate the availability of current technological solutions capable of mitigating the associated damage. Through a comparative analysis of these storms, this study seeks to deepen our understanding of the consequences of space weather at different latitudes, identifying specific vulnerabilities in technological infrastructures and quantifying, where possible, the associated economic impacts in the PA sector. The findings provide new insights into how the intensity and geographical location of UAP influence GNSS technological systems and may contribute to enhancing the resilience of the PA sector at mid-latitudes. [1] Eastwood, J. P., Biffis, E., Hapgood, M. A., Green, L., Bisi, M. M., Bentley, R. D., Wicks, R., McKinnell, L. A., Gibbs, M., & Burnett, C. (2017). The Economic Impact of Space Weather: Where Do We Stand?. Risk analysis : an official publication of the Society for Risk Analysis, 37(2), 206–218. https://doi.org/10.1111/risa.12765 [2] Baker, D. N.
(2009), What Does Space Weather Cost Modern Societies?, Space Weather, 7, S02003, doi:10.1029/2009SW000465. [3] P. Vermicelli, S. Mainella, L. Alfonsi, A. Belehaki, D. Buresova, R. Hynonen, V. Romano, B, Witvliet “The Socioeconomic Impacts of the Upper Atmosphere Effects on LEO Satellites, Communication and Navigation Systems,” doi:10.5281/zenodo.66714242. [4] M. Ishii et al., “Space weather benchmarks on Japanese society,” Earth, Planets Sp., 73, 1, 2021, doi: 10.1186/s40623-021-01420-5. [5] Mainella, S., Vermicelli, P., & Urbar, J. (2024, May 19-24). Quantifying the socioeconomic impacts of Space Weather in Europe: How costly is the effect of Medium Scale Traveling Ionospheric Disturbances on GNSS positioning? Paper presented at the 4th URSI AT-RASC, Gran Canaria, Spain. [6] Spogli, L., Alberti, T., Bagiacchi, P., Cafarella, L., Cesaroni, C., Cianchini, G., Coco, I., Di Mauro, D., Ghidoni, R., Giannattasio, F., Ippolito, A., Marcocci, C., Pezzopane, M., Pica, E., Pignalberi, A., Perrone, L., Romano, V., Sabbagh, D., Scotto, C., Spadoni, S., Tozzi, R. and Viola, M. (2024) “The effects of the May 2024 Mother’s Day superstorm over the Mediterranean sector: from data to public communication”, Annals of Geophysics, 67(2), p. PA218. doi: 10.4401/ag-9117. [7] LandMark Implement. (2024, May 11). Geomagnetic storm affecting GPS signals - May 2024. https://landmarkimp.com/news/news/blog/geomagnetic-storm-affecting-gps-signals--may-2024/ [8] Consilium. (2023, November 21). Solar storms: A new challenge on the horizon. Council of the European Union. https://www.consilium.europa.eu/media/68182/solar-storms_a-new-challenge-on-the-horizon-21-nov-2023_web.pdf [9] Bojana Petrović, Roman Bumbálek, Tomáš Zoubek, Radim Kuneš, Luboš Smutný, Petr Bartoš, Application of precision agriculture technologies in Central Europe-review, Journal of Agriculture and Food Research, Volume 15, 2024,101048, ISSN 2666-1543, https://doi.org/10.1016/j.jafr.2024.101048.
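The St. Patrick's Day cost estimate quoted above follows directly from the abstract's own figures; as a back-of-the-envelope check (both constants are taken from the abstract [5], not new data):

```python
# Cost of the GNSS outage for precision agriculture during the
# St. Patrick's Day storm, using the abstract's quoted figures.
OUTAGE_HOURS = 3              # positioning "impossible for over three hours"
COST_PER_HOUR_EUR = 200_000   # one hour of GNSS outage for the PA sector

estimated_cost_eur = OUTAGE_HOURS * COST_PER_HOUR_EUR   # 600_000
```

This reproduces the ~€600,000 figure cited for the PA sector.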
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Swarm as Space Weather mission: L1 and L2 Fast data processing

Authors: Roberta Forte, Enkelejda Qamili, Vincenzo Panebianco, Lars Tøffner-Clausen, Stephan Buchert, Christian Siemes, Jonas Bregnhøj Lauridsen, Guram Kervalishvili, Jan Rauberg, Alessandro Maltese, Anna Mizerska, Florian Partous, Maria Jose Brazal Aragón, Maria Eugenia Mazzocato, Giuseppe Albini, Antonio De la Fuente, Anja Stromme
Affiliations: Serco For Esa, DTU Space, Swedish Institute of Space Physics, TU Delft, GFZ, GMV Poland, ESA - ESOC, ESA - ESRIN
After more than a decade in space, ESA's Earth Explorer Swarm mission is still in excellent shape and continues to contribute to a wide range of scientific studies, from the core of our planet, via the mantle and the lithosphere, to the ionosphere and its interactions with the solar wind. Its highly accurate observations of electromagnetic and atmospheric parameters of the near-Earth space environment, and the mission's distinctive constellation design, make Swarm well suited for developing novel Space Weather products and applications. In 2023, a “Fast” processing chain was transferred to operations, providing Swarm Level 1B products (orbit, attitude, magnetic field and plasma measurements) with a minimum delay with respect to acquisition. In 2024, the generation of Swarm Level 2 products (Field-Aligned Current, Total Electron Content) was also implemented in the “Fast” chain; these products are available on the Swarm dissemination server. These “Fast” data products add significant value in monitoring ongoing Space Weather phenomena and help model and nowcast the evolution of several geomagnetic and ionospheric events. This work presents the set-up of the Swarm “Fast” data processing chain, its current status, and plans for future improvements and applications.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: C.02.06 - POSTER - Swarm - ESA's extremely versatile magnetic field and geospace explorer

This session invites contributions dealing specifically with the Swarm mission: mission products and services, calibration, validation and instrument-related discussions. It is also the session in which the future and evolution of the mission, and the future beyond Swarm will be discussed. Particularly welcome are contributions highlighting observational synergies with other ESA and non-ESA missions (past, current and upcoming), in addition to ground-based observations and modelling.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Multi-Scale Irregularities Product (m-SIP): a data product utilizing the high-resolution Swarm plasma density data for space weather applications

Authors: Yaqi Jin, Dr. Wojciech Miloch, Dr. Daria Kotova, Dr. Luca Spogli, Dr. Rayan Iman, Lucilla Alfonsi
Affiliations: University of Oslo, Istituto Nazionale di Geofisica e Vulcanologia
Nowadays it is crucial to monitor and forecast space weather conditions, and in particular variations in the near-Earth space environment induced by Solar Wind-Magnetosphere-Ionosphere-Thermosphere interactions. These can affect critical infrastructures and services, including communication and navigation systems such as the Global Navigation Satellite System (GNSS), as well as satellite operations. However, at present there is no space weather product that can monitor and predict the global space weather impact on GNSS users. In this regard, the Swarm mission can contribute through its high-resolution faceplate (FP) plasma density measurements. We present the Swarm multi-scale irregularities product (m-SIP), a Swarm-based data product that can characterize small-scale irregularities (< 10 km) at different scales, down to near the Fresnel scale (~400 m), which is particularly useful for GNSS users. The new m-SIP data product consists of two parts: 1) derived plasma density parameters at small spatial scales, e.g., the rate of change of density index in a 1-second window (RODI1s), density gradients at 5 km and at 10 km, and the spectral slope of the power spectral density; 2) the modelled S4 index based on the phase screen model. The new data product will be useful for monitoring the space weather impact on GNSS users. Thanks to its long-term availability around the globe at all latitudes, it would ease the development of specific models to characterise small-scale irregularities in the ionospheric plasma density and their impact on GNSS services. In addition, the data product is useful for improving the fundamental understanding of ionospheric processes and the formation of plasma irregularities. It contains parameters necessary for characterizing plasma irregularities at multiple scales and the energy cascading across different spatial scales (including the spectral slope) down to near the Fresnel scale of GNSS signals.
It can be used to study the turbulent ionosphere at both high and low latitudes. It can also provide additional parameters that will allow a quick assessment of the space weather conditions in the ionosphere.
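An RODI1s-type quantity can be sketched as follows. This is a simplified illustration assuming 16 Hz faceplate sampling and invented density values; the operational m-SIP definition may differ in detail.

```python
import numpy as np

def rodi(ne, fs=16.0, window_s=1.0):
    """Rate-of-change-of-density index over consecutive windows.

    Sketch of an RODI1s-type quantity: ROD is the time derivative of
    electron density Ne, and RODI its standard deviation over a window
    (here 1 s of 16 Hz samples)."""
    ne = np.asarray(ne, dtype=float)
    rod = np.diff(ne) * fs                       # dNe/dt, per second
    n = int(window_s * fs)                       # samples per window
    return np.array([rod[i:i + n].std(ddof=1)
                     for i in range(0, rod.size - n + 1, n)])

# Synthetic Ne trace (invented numbers): a quiet segment followed by an
# irregular one; RODI is near zero in the quiet part and large afterwards.
rng = np.random.default_rng(2)
ne = np.concatenate([np.full(64, 1e11),
                     1e11 + rng.normal(0.0, 5e9, 64)])
rodi_series = rodi(ne)
```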
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: A World without Low Earth Orbit High-Precision Magnetometry

Authors: Dr. Guram Kervalishvili, Ingo Michaelis, Dr. Martin Rother, Dr. Maximilian A. Schanner, Prof. Christopher C. Finlay, Dr. Clemens Kloss, Dr. Monika Korte, Enkelejda Qamili, Jan Rauberg
Affiliations: GFZ German Research Centre For Geosciences, DTU Space, Technical University of Denmark, SERCO for European Space Agency (ESA-ESRIN)
High-precision magnetometry is essential for monitoring Earth's magnetic field, enabling breakthroughs in understanding the dynamics of the core, lithosphere, and magnetosphere. Missions like Ørsted, CHAMP (CHAllenging Minisatellite Payload), and ESA's Swarm constellation have demonstrated the critical value of high-precision vector and scalar magnetometer measurements carried out with absolute accuracy in Low Earth Orbit (LEO). Now, imagine a world where the satellites or instruments of dedicated geomagnetic field missions in LEO reach the end of their operational lifetimes, whether expected or unexpected, with no new missions to replace them. Without the unique insights provided by missions like Ørsted, CHAMP, and Swarm, we would lose a critical, high-resolution perspective on Earth's magnetic environment, which reveals fluctuations and shifts that would otherwise remain unresolved. Moreover, data from dedicated magnetic scientific missions play a crucial role in calibrating platform magnetometers on satellites not dedicated to magnetic measurements. While these platform magnetometers are functional, they lack the precision needed to detect fine-scale variations. Without the rigorous calibration provided by high-precision magnetic missions delivering measurements with absolute accuracy, the data that platform magnetometers produce are less reliable, introducing inconsistencies and inaccuracies across datasets. Here, we explore the consequences of losing high-precision, absolute-accuracy magnetometry capabilities in LEO for calibrating platform magnetometers on satellites not dedicated to magnetic measurements. While it would still be possible to generate reference geomagnetic data from less accurate sources, e.g., ground-based observatory networks, these alternatives lack the spatial and temporal resolution provided by LEO-based measurements.
As a result, the derived geomagnetic models would suffer from diminished resolution and accuracy, reducing their overall reliability and scope. Such degraded models would, in turn, propagate inaccuracies in the calibration of platform magnetometers, undermining their precision. This cascading effect would significantly hinder our ability to monitor, understand, and model the dynamic geomagnetic field, particularly the core, lithosphere, and magnetosphere. Maintaining accurate, high-precision, magnetometry in LEO is therefore essential for preserving the integrity of geomagnetic science and supporting its diverse scientific and practical applications.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: 11 years of Swarm PDGS Operations: Lessons Learned

Authors: Alessandro Maltese, Livia D'Alba, Antonio De La Fuente, Danilo Parente
Affiliations: Serco, ESA, Starion
Contrary to the common perception that operational ground segments are static and conservative by nature, the architecture of the Swarm PDGS has been constantly evolving. This evolution aims to respond to new operational and scientific requirements, such as the need for faster data delivery and the integration of innovative data processing algorithms. It also involves incorporating new science products that were not initially foreseen, demonstrating the system's flexibility and adaptability. Efforts have been made to improve the robustness, maintainability, and efficiency of the current system using the latest available techniques, including adopting modern software development practices and leveraging cutting-edge technologies. These ongoing enhancements ensure that the PDGS remains at the forefront of technological advancements, providing high-quality data services to the global scientific community. This poster summarizes the efforts of the Swarm PDGS operations support team over the last 11 years in several areas. Key initiatives include the evolution and streamlining of the system architecture to ensure long-term maintainability, which involved reengineering components to simplify future upgrades and reduce technical debt. The migration from a physical to a virtual infrastructure was another significant milestone, enhancing scalability, reducing operational costs, and increasing system resilience. The team also focused on the flexible provision of required storage and processing power for full mission reprocessing campaigns, enabling the system to handle increased data volumes efficiently. Improvements in monitoring and reporting subsystems have provided better insights into system performance, facilitating proactive maintenance and quicker issue resolution. 
The integration of additional new data products from the Swarm DISC Processing Centres and other missions has expanded the data portfolio available to researchers, fostering interdisciplinary studies and collaboration. Strengthening and enhancing system security has been a continuous priority, addressing emerging cyber threats and ensuring the integrity and confidentiality of the data. The implementation of the FAST platform marked a significant advancement, providing near real-time data access and opening new possibilities for time-sensitive applications like space weather monitoring. Changes in the contractual approach have introduced greater flexibility, allowing for more agile responses to evolving mission needs and technological developments. Finally, this contribution provides a synthesis of the main lessons learned during this period. It highlights how adaptability, continuous improvement, and close collaboration among all stakeholders are essential for the success of such a complex and long-term mission. The experiences gained offer valuable insights for future missions and ground segment developments, demonstrating that with the right approach, operational ground segments can be dynamic, innovative, and responsive to the ever-changing demands of the scientific community.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: VirES: Data and model access for the Swarm mission and beyond

Authors: Martin Pačes, Ashley Smith
Affiliations: EOX IT Services GmbH, University of Edinburgh
The VirES service [1] has been developed to make Swarm products accessible to programmers and non-programmers alike. The overall project combines web services to robustly access and process data and models on demand, a graphical interface that enables easy exploration and visualisation of products, and Python tooling to allow more flexible operation and foster community-developed tools. The web client GUI provides both 3D visualisation and customisable 2D plotting, allowing data exploration without any programming required. On the other hand, the Jupyter-based Virtual Research Environment (VRE) [2] and ready-to-run Jupyter notebooks [3] provide the more intrepid explorer the opportunity to generate more bespoke analysis and visualisation. The notebooks are backed by a JupyterHub furnished with domain-relevant Python packages, which together lower the barrier to entry to programming. Both the web client and notebooks are interlinked with the Swarm handbook [4] which provides more detailed documentation of products. The VirES server can be accessed through Open Geospatial Consortium (OGC) APIs using the viresclient Python package [5], as well as through the Heliophysics API (HAPI) [6]. The availability of both APIs offers both flexibility and interoperability, enabling a variety of usage patterns both for researchers and for integration with external data systems. While the service was originally developed to serve the Swarm satellite data, we also provide access to ground magnetic observatory data derived from INTERMAGNET, as well as Swarm "multimission" products derived from other spacecraft as part of Swarm projects. VirES is developed for ESA by EOX IT Services [7], in close collaboration with researchers across the Swarm Data, Innovation, and Science Cluster (DISC). We aim to produce a sustainable ecosystem of tools and services, which together support accessibility, interoperability, open science, and cloud-based processing. 
All services are available freely to all, and the software is developed openly on GitHub [8,9]. [1] https://vires.services [2] https://vre.vires.services [3] https://notebooks.vires.services [4] https://swarmhandbook.earth.esa.int/ [5] https://viresclient.readthedocs.io/ [6] https://vires.services/hapi [7] https://eox.at [8] https://github.com/ESA-VirES [9] https://github.com/Swarm-DISC
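As an illustration of the HAPI access route mentioned above, the sketch below composes a HAPI `/data` request URL against the VirES HAPI endpoint [6], without performing any network access. This is a minimal sketch under assumptions: the query parameter names follow the HAPI 3.x convention (`dataset`, `start`, `stop`, `parameters`), and the collection name and measurement list are shown for illustration; consult the Swarm handbook [4] and the server's `/catalog` and `/info` endpoints for the actual identifiers.

```python
from urllib.parse import urlencode

HAPI_BASE = "https://vires.services/hapi"  # HAPI endpoint listed in [6]

def hapi_data_url(dataset, start, stop, parameters, fmt="csv"):
    """Compose a HAPI 3.x /data request URL (no request is sent here)."""
    query = urlencode({
        "dataset": dataset,
        "start": start,
        "stop": stop,
        "parameters": ",".join(parameters),
        "format": fmt,
    })
    return f"{HAPI_BASE}/data?{query}"

# Example: one hour of Swarm A low-rate magnetic data (collection name from
# the Swarm product naming convention; parameter names assumed here).
url = hapi_data_url(
    "SW_OPER_MAGA_LR_1B",
    "2024-01-01T00:00:00Z",
    "2024-01-01T01:00:00Z",
    ["F", "B_NEC"],
)
```

For programmatic work in Python, the viresclient package [5] wraps this kind of request (plus authentication and model evaluation) behind a higher-level interface.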
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Swarm Magnetic Data Evaluated Through Comprehensive Inversion of Earth's Magnetic Field

Authors: Lars Tøffner-Clausen
Affiliations: DTU Space
For more than 11 years, the Swarm mission has demonstrated leading-class quality in its measurements of the magnetic field surrounding Earth. However, the Sun-induced magnetic disturbance, denoted dB_Sun, is known to have been imperfectly characterised so far. Even though we do not envisage a perfect characterisation of the dB_Sun disturbance, recent analyses have shown promising progress towards a better understanding and characterisation of the disturbance. This progress has been supported by careful analysis of the magnetic data residuals with respect to models of the magnetic fields surrounding Earth. Here, we present the latest achievements in understanding and characterising dB_Sun through the analysis of data residuals versus the Comprehensive Inversion of Earth's magnetic field. The Comprehensive Inversion (CI) approach constitutes a simultaneous modelling of the magnetic fields from Earth's fluid core, lithosphere, ionosphere, and magnetosphere, as well as the magnetic fields induced by the tidal motion of the oceans. For the ionosphere and magnetosphere, both the direct magnetic fields and their counterparts induced in Earth's mantle are included.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Enhanced Swarm-Based Climatological Models of the Non-Polar Geomagnetic Daily Variations

Authors: Arnaud Chulliat, Louis Chauvet, Gauthier Hulot, Robin Duchene, Martin
Affiliations: CIRES, University Of Colorado Boulder
Climatological models of the non-polar geomagnetic daily variations have a variety of uses, from studying ionospheric electrical current systems to correcting magnetic field survey data. Several such models were produced as part of the Dedicated Ionospheric Field Inversion (DIFI) project throughout the Swarm satellite mission. Here we present the latest version of the DIFI model, DIFI-8, inferred from ten years of Swarm Alpha and Bravo magnetic field measurements. We also present a new version of the Extended DIFI model, xDIFI-2, inferred from Swarm, CHAMP and observatory data and covering 2001-2023. Like their predecessors, these new models provide both the primary and induced magnetic fields generated by mid-latitude Sq currents and the Equatorial Electrojet (EEJ) within +/- 55 degrees quasi-dipole latitudes, at both ground and Low-Earth Orbit satellite altitudes. In addition, they include new features, such as data preprocessing that incorporates corrections for toroidal magnetic fields based on a recently published climatological model (Fillion et al., 2023). Finally, they have been extensively validated against independent, ground-based observatory data.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Characterization of the ionospheric perturbation degree at mid-scales with Swarm's NeGIX and TEGIX

Authors: J. Andrés Cahuasquí, Mainul Hoque, Norbert Jakowski, Dmytro Vasylyev, Stephan Buchert, Martin Kriegel, Paul David, Grzegorz Nykiel, Youssef Tagargouste, Lars Tøffner-Clausen, Jens Berdermann
Affiliations: German Aerospace Center (DLR), Swedish Institute of Space Physics (IRF), Technical University of Denmark (DTU)
Since its launch in November 2013, the European Space Agency's (ESA) Swarm mission has delivered unprecedented data products and services that have significantly enhanced our understanding of solar, magnetospheric, thermospheric, ionospheric, and atmospheric processes, as well as their coupling and impact on human-made technological systems. Currently, the Swarm Product Data Handbook includes 68 Level 1 and Level 2 data products derived from Swarm measurements, along with over 20 additional products obtained from other spacecraft. All of this activity is curated by the Swarm Data, Innovation, and Science Cluster (DISC). Recently, two novel data products have been added to the Swarm data family: the electron density gradient ionospheric index (NeGIX) and the total electron content gradient ionospheric index (TEGIX). These products implement a temporal and spatial combination of measurements from Swarm A and Swarm C along their near-polar, parallel orbits. NeGIX and TEGIX enable the investigation of ionospheric plasma irregularities and perturbations at mid-scales, on the order of 100 km, not only along the meridional transit direction of the Swarm satellites but also along the longitudinal (zonal) direction. Consequently, the space-based observations from Swarm, combined with the methodologies of NeGIX and TEGIX, provide new insights into several important topics in space weather research. Indeed, initial studies using these products have demonstrated their effectiveness in applications such as scintillation modeling, characterizing ionospheric plasma bubbles, and monitoring ionospheric indices in combination with ground-based observations. In this work, we provide a comprehensive assessment of the capabilities of NeGIX and TEGIX to characterize the ionospheric state under both quiet and stormy geomagnetic conditions. We examine several of the most intense geomagnetic events from solar cycles 24 and 25.
Furthermore, with over ten years of Swarm data available, a climatological analysis of the ionosphere has been conducted using these newly-developed indices. Such analysis forms a basis for future modeling and combined studies, while also supporting the development of improved proxies for characterizing ionospheric behavior and enabling their practical use in navigation, communication, and remote sensing systems.
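The core idea behind a gradient index of the kind described above can be sketched very simply: side-by-side measurements from Swarm A and C are differenced and scaled by the spacecraft separation, then summarised over a window. The sketch below is only schematic, with invented numbers and function names; the actual NeGIX/TEGIX definitions are specified in the Swarm product documentation.

```python
def zonal_gradient(values_a, values_c, separation_km):
    """Schematic per-sample gradient between two side-by-side satellites,
    in units of the measured quantity per km."""
    return [(a - c) / separation_km for a, c in zip(values_a, values_c)]

def rms(values):
    """Root-mean-square, used here to condense gradients into one index."""
    return (sum(v * v for v in values) / len(values)) ** 0.5

# Hypothetical electron densities (el/cm^3) along-track for Swarm A and C,
# with an assumed zonal separation of ~150 km (illustrative numbers only).
ne_a = [3.0e5, 3.2e5, 5.1e5, 2.8e5]
ne_c = [3.1e5, 3.0e5, 3.3e5, 2.9e5]

grad = zonal_gradient(ne_a, ne_c, separation_km=150.0)
index = rms(grad)  # one scalar summarising the perturbation degree
```

A large index value in this toy picture would flag a perturbed, strongly structured ionosphere, while near-zero gradients indicate quiet conditions.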
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: State of the art of Swarm mission: Instrument performances, Data Quality and Algorithm evolution

Authors: Vincenzo Panebianco, Roberta Forte, Enkelejda Qamili, Lars Tøffner-Clausen, Stephan Buchert, Johnathan Burchill, Dr.ir. Christian Siemes, Anna Mizerska, Jonas Bregnhøj Lauridsen, Thomas Nilsson, Alessandro Maltese, Maria Eugenia Mazzocato, Florian Partous, María José Brazal Aragón, Lorenzo Trenchi, Elisabetta Iorfida, Irene Cerro, Berta Hoyos Ortega, Antonio De la Fuente, Anja Stromme
Affiliations: Serco for ESA, DTU Space, Swedish Institute of Space Physics, University of Calgary, TU Delft, GMV Poland, European Space Agency, ESTEC, European Space Agency, ESRIN
The Swarm mission has marked more than a decade in orbit, representing a transformative achievement in our exploration and understanding of Earth's geomagnetic field, the ionosphere, and electric currents. Launched in 2013 by the European Space Agency (ESA) as a three-satellite constellation, Swarm was initially designed to provide unprecedented insights into Earth’s magnetic field and its interactions with the surrounding space environment. Over the years, the mission has consistently exceeded its original goals, delivering groundbreaking scientific results and enabling a host of innovative applications that extend far beyond its initial scope. A defining feature of the Swarm mission is its commitment to continuous improvement. Since its launch, advancements in data processing algorithms have played a vital role in ensuring the mission remains at the cutting edge of scientific discovery. These updates have not only maintained the exceptional quality of Swarm's measurements but have also allowed the mission to evolve in response to the changing needs of the scientific community. An overview of the Swarm mission status, highlighting the remarkable performance of its instruments and the ongoing enhancements in data processing algorithms, is presented. These refinements have not only strengthened Swarm’s contributions to our understanding of fundamental Earth magnetic processes but have also supported the development of novel Swarm-based data products and services, further broadening the mission’s impact.
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Swarm Accelerometer as a Component in Derivation of the Non-Gravitational Forces Acting on the Spacecraft

Authors: Sergiy Svitlov, Dr. Christian Siemes, Dr. Elisabetta Iorfida, M.Sc. Daniel Rotter
Affiliations: Micro-enterprise 'Sergiy M. Svitlov', Delft University of Technology, European Space Agency (ESA), ESTEC
Swarm is an ESA Earth Explorer mission in orbit since November 2013, consisting of three identical satellites (Swarm A, B, and C) in near-polar Low Earth Orbits. While its primary objective is to study Earth’s magnetic field and its temporal evolution, the Swarm satellites also carry GPS receivers and accelerometers as part of their scientific payload. In addition to providing the precise position and time for the magnetic field measurements, the GPS receivers are used to determine the non-gravitational forces acting on the spacecraft, from which the thermospheric neutral densities can be derived. The accelerometers are intended to measure those forces directly and with a much higher resolution. However, the Level 1B accelerometer data are not released to the public due to heavy distortions in the raw measurements, which render them useless in their unprocessed form. Instead, Level 2 calibrated accelerometer data are prepared and released, having undergone a series of corrections to compensate for the distortions. To exploit the advantages of both techniques, hybridised non-gravitational accelerations (Level 2 accelerometer data) are constructed as a combination of the low-pass filtered POD-derived accelerations and high-pass filtered pre-corrected raw accelerometer data. This hybrid approach ensures that the final accelerometer data products are of high scientific value and reliability. In this presentation, we report details on the sophisticated Level 2 processing algorithm and calibration procedures. These procedures have resulted in the production of scientifically valuable along-track accelerometer data of Swarm C for almost the entire mission timeline, Swarm A for almost two years, and Swarm B for a few months, particularly during geomagnetically active periods at the request of ad-hoc users. Special attention is given to monitoring and maintaining the satisfactory quality and validity of the accelerometers' Level 2 data.
The benefits of accelerometers for deriving non-gravitational accelerations and studying the near-Earth space environment are highlighted with examples of several strong geomagnetic storms, showcasing the instrumental role of these data in advancing our understanding of space weather phenomena.
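The hybridisation step described above (low-pass filtered POD-derived accelerations combined with high-pass filtered accelerometer data) is, in essence, a complementary filter. The sketch below illustrates the idea using a crude moving-average low-pass in place of the operational filters; the actual Level 2 processing applies far more sophisticated filtering and pre-corrections, and every number here is illustrative.

```python
def moving_average(x, window=11):
    """Crude low-pass filter: centred moving average, shrinking at the edges."""
    half = window // 2
    n = len(x)
    return [sum(x[max(0, i - half):min(n, i + half + 1)]) /
            (min(n, i + half + 1) - max(0, i - half)) for i in range(n)]

def hybridise(pod_acc, raw_acc, window=11):
    """Complementary filter: low frequencies from POD-derived accelerations,
    high frequencies from the (pre-corrected) accelerometer readings."""
    low = moving_average(pod_acc, window)
    raw_low = moving_average(raw_acc, window)
    return [l + (r - rl) for l, r, rl in zip(low, raw_acc, raw_low)]

# Illustrative check: a smooth POD-derived drag signal plus an accelerometer
# carrying the same signal with a large constant bias; the bias is a
# low-frequency error, so the complementary filter rejects it.
pod = [1.0e-7] * 200                 # m/s^2, smooth POD-derived signal
acc = [1.0e-7 + 5.0e-6] * 200        # same signal with a constant bias

hybrid = hybridise(pod, acc)
```

The design choice is that each data source contributes only in the frequency band where it is trustworthy: POD accelerations at low frequencies, the accelerometer at high frequencies.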
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: Implementation of the Swarm FAST Processing Pipeline

Authors: Alessandro Maltese, Livia D'Alba, Antonio De La Fuente, Danilo
Affiliations: Serco
Due to technical and budgetary constraints, the implementation of a near real-time (NRT) processing chain was initially discarded during the development of the Swarm Payload Data Ground Segment (PDGS) prior to launch. At that time, the focus was on meeting the core mission objectives within the limited resources, and incorporating an NRT processing capability was considered too ambitious and costly. However, only a few years into routine operations, it was recognized that a low-latency processing pipeline, along with optimization of the downlink strategy, could significantly extend the exploitation of Swarm data into new scientific and engineering application domains such as space weather. The growing importance of timely geomagnetic data for monitoring and forecasting space weather events highlighted the potential benefits of revisiting the initial decision. In 2021, the feasibility of a low-latency Level 1b processor and the implementation of a parallel Swarm FAST processing pipeline began to be evaluated. Given limited resources, a phased approach was adopted to manage risks and ensure efficient use of available assets. This approach included a processor feasibility analysis to assess technical requirements and potential challenges, followed by a six-month processing pilot to test the concepts in a controlled environment. The pilot phase provided valuable insights into system performance and user feedback, which were crucial for refining the processing pipeline. Following the successful pilot and strong endorsement from the scientific community at the 12th Swarm Data Quality Workshop (DQW) in October 2022, the implementation of a new robust FAST processing pipeline was initiated. The community's support underscored the demand for low-latency data and validated the project's direction. The new standalone FAST processing pipeline was implemented using an Agile and DevOps approach, facilitating iterative development and continuous improvement. 
This methodology allowed for rapid responses to emerging requirements and streamlined collaboration between development and operations teams. The pipeline is based on the Werum Olib framework, which provides a flexible and scalable platform and is also being used in more recent Earth Explorer missions such as EarthCARE and Biomass. Leveraging this framework ensured compatibility with existing systems. The FAST pipeline was deployed on the existing EOP-GE cloud infrastructure, utilizing cloud resources for scalability and reliability. Systematic production started at the end of April 2023, marking a significant milestone in enhancing Swarm's data capabilities. After thorough scientific validation and endorsement by the scientific community, the FAST data were made available to all users in December 2023, opening new opportunities for research and operational applications that require timely data access. Maltese A. (1), de la Fuente A. (2), Shanmugam P. (3), D'Alba L. (4), Parente D. (1); (1) SERCO c/o ESA, Frascati, Italy; (2) European Space Agency, Frascati, Italy; (3) Werum, Lueneburg, Germany; (4) Starion c/o ESA, Frascati, Italy
Add to Google Calendar

Tuesday 24 June 17:45 - 19:00 (X5 - Poster Area)

Poster: The Swarm Constellation - Ten Years in orbit, and beyond

Authors: Giuseppe Albini, David Patterson, Angel Fernandez Lois, Alessandro Latino, Emanuele Lovera, Giuseppe Romanelli, Filippo Inno, Anne Hartmann, Thomas Demeillers, Aybike Kolmas, Marco Bruno
Affiliations: Esa, Starion, Telespazio Germany GmbH, Serco GmbH, Solenix GmbH
Swarm is the magnetic field mission of the ESA Earth Observation programme, composed of three satellites flying in a semi-controlled constellation: Swarm-A and Swarm-C flying as a pair and Swarm-B at a higher altitude. Its in-orbit history began in the afternoon of the 22nd of November 2013, when the three identical spacecraft separated perfectly from the upper stage of the Rockot launcher at an altitude of about 499 km. Control of the trio was immediately taken over by ESA’s European Space Operations Centre (ESOC) in Darmstadt, Germany. Following the successful completion of the Launch and Early Orbit Phase (LEOP), commissioning was concluded in spring 2014, and precious scientific data have been provided since then. In order to deliver extremely accurate data to advance our understanding of Earth’s magnetic field and its implications, each Swarm satellite carries a magnetic package, composed of an Absolute Scalar Magnetometer (ASM) and a Vector Field Magnetometer (VFM), an Electric Field Instrument (EFI) and an Accelerometer (ACC). Unfortunately, due to a failure during LEOP and commissioning, Swarm-C does not have a functioning ASM. Two daily ground station contacts per spacecraft are needed to support operations and downlink the scientific data stored in the on-board Mass Memory Unit. As of late 2023, operations are run with the highest level of automation implemented at ESOC, without a real-time operator and with the team merged with the CryoSat-2 Flight Control Team. Many activities and campaigns have been performed through the years to address instrument anomalies, such as changing the EFI operations concept to a limited number of daily science orbits and scrubbing operations to counteract image degradation. Similarly, in recent years the ASM instrument has undertaken more and more sessions in Burst Mode, producing data at 250 Hz at the request of the instruments team.
This activity has also recently been integrated into the automated operations concept, offering the flexibility to target this mode based on the short-term evolution of the space environment. On the platform side, a few anomalies occurred and were reacted upon very quickly, e.g. the Swarm-A science data downlink anomaly in 2020, which was solved by routing all science data to the housekeeping storage and re-designing part of the ground segment’s processing to handle this change of concept. A recent undertaking, as of mid-2023, has been the support of an additional mass memory downlink concept to acquire the data sensed during the passes, in order to support the FAST processing chain and exploit some NRT capabilities of the mission. On the orbit side, several manoeuvring campaigns were undertaken in 2019 and then in 2022 and 2023: first to change the relative local time of Swarm-A and Swarm-C so as to meet Swarm-B when the orbital planes were at their closest angular location, from summer to winter 2021 (the so-called counter-rotating orbits); then to raise the orbits of the lower pair, and then of Swarm-B, in an attempt to overcome the altitude drop caused by Solar Cycle 25, whose strength and effect on the orbit are increasing with respect to the predictions of the first years of the cycle. Another challenge, on the rise for the last few years, is the impact of Collision Avoidance activities on operations, with dozens of events analysed every year in an increasing trend, culminating this year in more than 60 events screened, most of them connected to encounters with active Starlink satellites, but only a few resulting in a Collision Avoidance Manoeuvre. The presentation will describe the Swarm-specific ground segment elements of the FOS and explain some of the challenging operations performed so far during this 10+ year journey, from payload operations to the resolution of anomalies and the latest orbital manoeuvre campaigns.
Add to Google Calendar

Tuesday 24 June 18:00 - 19:00 (Nexus Agora)

Session: F.04.31 UNEP ESA Strategic Partnership



The Agora is dedicated to the UNEP-ESA Partnership, based on the Memorandum of Understanding and continued collaborative efforts.



UNEP is addressing the so-called three planetary crises of: climate change, nature and biodiversity loss, and pollution and waste. UNEP has the mandate of setting the global environmental agenda and promoting the coherent implementation of the environmental dimension of sustainable development.



It is a unique opportunity for UNEP to present its latest updates, future plans and cooperation opportunities.



The UNEP-ESA partnership aims to align the efforts of the two organizations, creating synergies and also supporting:

a) the sharing of field data sets and surveys by UNEP. These are fundamental data which are complementary to the EO data.

b) the co-development of innovative Earth Observation algorithms, products and applications relevant for the mandate of UNEP, making use of cutting-edge information technology capabilities, facilitating operational solutions.

c) the exchange of expertise to increase the sharing of knowledge between UNEP and ESA.



The Agora will have a panel discussion format, with lightning talks followed by an interactive dialogue with the audience.

Speakers:


  • Magda Biesiada
  • Melissa De Kock
  • Harald Egerer
  • Itziar Irakulis Loitxate
Add to Google Calendar